/r/nosql

News, articles and tools covering alternative database systems.

4,674 Subscribers

1

Drastically reducing the cost of MongoDB Atlas clusters with this tool

If you're an existing MongoDB Atlas user, you'll likely agree: it's a brilliant DBaaS with one major drawback: the auto-scaling falls short in a lot of use cases! It is based on hardware utilization thresholds (CPU and RAM) and can take up to 24 hours to scale you down when your workload drops, so you keep paying for the expensive hardware until then.

I made this little tool called ScaleWithBuddha.com, which lets you specify a schedule for upgrades and downgrades, so you don't have to pay for the expensive tier 24 hours a day. It works best for apps with predictable workloads, and it works alongside all other MongoDB Atlas features as an add-on.

For example, if your app is used heavily from morning till evening on weekdays, the tool lets you schedule a downgrade in the evening and an upgrade again in the morning, repeating on weekdays. This can reduce the cost of MongoDB Atlas clusters by more than 50% in some cases!

If this interests you, do check out the tool: https://www.scalewithbuddha.com! We're running 10% off with the coupon code REDDIT10.
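
For anyone curious how scheduled resizing works under the hood, it can be sketched against the Atlas Admin API's modify-cluster call. This is an illustrative sketch, not ScaleWithBuddha's actual code; the group ID, cluster name, API keys, and tiers are placeholders:

import requests
from requests.auth import HTTPDigestAuth

# Hypothetical scheduled-scaling sketch: PATCH the cluster's
# instance size via the Atlas Admin API ("Modify One Cluster").
GROUP_ID, CLUSTER = "<project-id>", "<cluster-name>"
URL = f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/clusters/{CLUSTER}"

def set_tier(size: str) -> None:
    """Request a resize to the given tier, e.g. 'M30' (day) or 'M10' (night)."""
    resp = requests.patch(
        URL,
        auth=HTTPDigestAuth("<public-key>", "<private-key>"),
        json={"providerSettings": {"providerName": "AWS", "instanceSizeName": size}},
    )
    resp.raise_for_status()

# A cron job could call set_tier("M30") on weekday mornings and
# set_tier("M10") in the evenings to implement the schedule above.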

0 Comments
2024/05/12
06:39 UTC

1

Exploring Azure Cosmos DB: A Guide to Scalable NoSQL Database Solutions

🚀 Dive into the future of databases with our latest blog on Azure Cosmos DB! 🌐 Discover how this fully managed NoSQL and relational database service can revolutionize your applications through global scalability, massive throughput, and efficient data partitioning. 🌟

🔗 Learn about the key features:

  • Scalable partitioning (Logical & Physical)
  • Horizontal scaling for high availability
  • Global distribution and multi-master replication

🛠️ Plus, get a step-by-step guide on setting up your own Cosmos DB instance!
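
As a taste of that setup, here's a minimal sketch with the azure-cosmos Python SDK (the account URL, key, and all names here are placeholders):

from azure.cosmos import CosmosClient, PartitionKey

# Connect and create a database plus a container whose logical
# partition key ("/userId") determines how data is spread out.
client = CosmosClient("<account-url>", credential="<account-key>")
db = client.create_database_if_not_exists("appdb")
container = db.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/userId"),  # pick a high-cardinality key
)

# Writes and point reads are routed by the partition key value.
container.upsert_item({"id": "order-1", "userId": "u42", "total": 19.99})
print(container.read_item(item="order-1", partition_key="u42"))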

Perfect for developers looking to elevate their applications to the next level. Check it out now!
https://erwinschleier.medium.com/exploring-azure-cosmos-db-a-guide-to-scalable-nosql-database-solutions-24c5474f74ca

#AzureCosmosDB #NoSQL #DataScalability #CloudComputing #MicrosoftAzure

0 Comments
2024/05/08
14:21 UTC

1

Redis, MongoDB, Cassandra, Neo4J programming tasks

Hello everyone!

I have a few tasks that I need to complete. However, I am clueless in Python and prefer using R (I do fine with it, but I'm definitely not the best at understanding it), and I don't know where to begin, since programming with databases is different and requires installing the databases themselves. Is there reliable, easy-to-understand material so I can complete these tasks using R? The tasks are below for reference.

 

#1 Task: Redis

The program registers video views. For each visited video (identified by a text ID), a view is recorded: which user watched it and when. The program must efficiently return the number of views of each video and, if necessary, the list of all unique viewers and, for each viewer, which videos they have watched.

Comment on why specific capabilities are needed to solve parallel data-modification problems (why, for example, a database without such capabilities could not be used).

Requirements for the task:
a) The program should allow the creation, storage and efficient reading of at least 2 entities (entity - an object existing in the subject area, for example, a car in a car service, a student, a course, a lecture, a teacher in a university). If entities need to be read according to different keys (criteria), the application must provide for efficient reading of such data, assuming that the data may be very large.
b) The task involves modeling a complex data modification problem that would cause data anomalies in a typical key-value database.
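
A minimal sketch of Task #1 in Python with redis-py (the same commands exist in the R clients; the key names are illustrative). Atomic server-side commands like INCR and SADD are exactly the "specific capabilities" that prevent lost updates under parallel modification:

import time
import redis

r = redis.Redis()

def record_view(video_id: str, user_id: str) -> None:
    # INCR and SADD are atomic on the server, so two parallel
    # viewers can never lose each other's updates.
    pipe = r.pipeline()
    pipe.incr(f"video:{video_id}:views")                      # per-video counter
    pipe.sadd("viewers", user_id)                             # all unique viewers
    pipe.sadd(f"viewer:{user_id}:videos", video_id)           # who watched what
    pipe.hset(f"video:{video_id}:viewed_at", user_id, int(time.time()))  # when
    pipe.execute()

def view_count(video_id: str) -> int:
    return int(r.get(f"video:{video_id}:views") or 0)

def videos_watched(user_id: str) -> set:
    return {v.decode() for v in r.smembers(f"viewer:{user_id}:videos")}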

 

#2 Task: MongoDB

Model the database assuming a document data model. Provide a UML diagram of the database model; mark foreign-key references with aggregation and embedded entities with composition relations (alternatively, an embedded entity can be marked with the stereotype <<embedded>>).

The chosen subject area must contain at least 3 entities (for example: universities, student groups, students). Choose a situation so that at least one relationship is a reference and at least one requires an embedded entity.

Comment on your choices of data types and relationships.

Write requests in the program:

  1. To retrieve embedded entities (for example, a bank: all accounts of all customers). If you use a find operation, use a projection and don't fetch unnecessary data.
  2. At least two aggregating queries (e.g., the bank balances of all customers).
  3. Do not use banking as the subject area.
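
For Task #2, the two kinds of query might look like this with pymongo (a university domain is assumed here, since banking is ruled out; students are embedded in groups, and groups hold a reference to their university):

from pymongo import MongoClient

db = MongoClient().university

# Embedded read with a projection: return each group's embedded
# students, but only the fields the caller actually needs.
groups = db.groups.find({}, {"name": 1, "students.name": 1, "_id": 0})

# Aggregation 1: student head count per group.
counts = db.groups.aggregate([
    {"$project": {"name": 1, "n_students": {"$size": "$students"}}}
])

# Aggregation 2: average grade per referenced university.
averages = db.groups.aggregate([
    {"$unwind": "$students"},
    {"$group": {"_id": "$university_id",
                "avg_grade": {"$avg": "$students.grade"}}},
])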

 

#3 Task: Cassandra

Provide a physical data model for the Apache Cassandra database (UML). Write a program that implements several operations in the chosen subject area.

Features for the area:

  1. There are at least a few entities.
  2. There are at least two entities with a one-to-many relationship.
  3. The use cases require multiple queries with different parameters for at least one entity.

For example, in a bank, we store customers, their accounts (one-to-many relationship) and credit cards. We want to search for accounts by customer (find all his accounts) and by account number, we want to search for customers by their customer ID or personal code. We want to search for credit cards by their number, and we also want to find the account associated with a specific card.

In at least one situation, make meaningful use of Cassandra's compare-and-set operations (hint: IF) in an INSERT or UPDATE statement. For example, we want to create a new account with a code only if it does not exist. We want to transfer money only if the balance is sufficient.

Queries must not use ALLOW FILTERING or indexes that would cause a query to be executed on all nodes (fan-out).
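
For Task #3, the compare-and-set requirement maps onto CQL lightweight transactions (IF); here is a sketch with the Python cassandra-driver, assuming an accounts table keyed by code:

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("bank")

# Create the account only if its code is not already taken.
res = session.execute(
    "INSERT INTO accounts (code, owner, balance) VALUES (%s, %s, %s) IF NOT EXISTS",
    ("ACC-1", "alice", 100),
)
print("created:", res.was_applied)

# Withdraw 60 only if the balance is still what we read earlier;
# on failure, re-read and retry (optimistic concurrency).
res = session.execute(
    "UPDATE accounts SET balance = %s WHERE code = %s IF balance = %s",
    (40, "ACC-1", 100),
)
print("withdrawn:", res.was_applied)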

 

#4 Task: Neo4J

Write a simple program implementing a subject area suited to graph databases.

  1. Model at least a few entities with properties.
  2. Demonstrate meaningful requests:

2.1. Find entities by attribute (e.g., find a person by personal identification number, find a bank account by number).
2.2. Find entities by relationship (e.g., bank accounts belonging to a person, bank cards linked to the accounts of a specific person).
2.3. Find entities connected by deep links (e.g., friends of friends; all roads between Birmingham and London; all buses that can go from stop X to stop Y).
2.4. Find the shortest path by evaluating weights (e.g., the shortest path between Birmingham and London; the cheapest way to convert currency X to currency Y when the conversion rates of all banks are available and the optimal route may take several steps).
2.5. Aggregate data (e.g., as in 2.4, but return only the path length or conversion cost, not the path itself).

For simplicity, have test data ready. The program should allow you to make queries (say entering city X, city Y and planning a route between them).

No modeling of the movie or city example databases!
Do not print the internal data structures of the Neo4J driver - format the result for the user.
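
For Task #4, the query shapes might look like this with the neo4j Python driver (a transport/bank domain is assumed; note that truly weight-based shortest paths, point 2.4, usually need the Graph Data Science library, whereas plain shortestPath minimizes hop count):

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "<password>"))

with driver.session() as session:
    # 2.1: entity by attribute.
    person = session.run(
        "MATCH (p:Person {pid: $pid}) RETURN p.name AS name", pid="39001010000"
    ).single()

    # 2.2: entities by relationship.
    accounts = session.run(
        "MATCH (:Person {pid: $pid})-[:OWNS]->(a:Account) RETURN a.number AS number",
        pid="39001010000",
    )

    # 2.3: deep connections - stops reachable from X in up to 4 hops.
    reachable = session.run(
        "MATCH (:Stop {name: $x})-[:ROUTE*1..4]->(s:Stop) RETURN DISTINCT s.name AS stop",
        x="X",
    )

    # 2.4/2.5: hop-count shortest path, formatted for the user
    # rather than dumped as raw driver objects.
    for rec in session.run(
        """
        MATCH (a:City {name: $src}), (b:City {name: $dst})
        MATCH p = shortestPath((a)-[:ROAD*]-(b))
        RETURN [n IN nodes(p) | n.name] AS route, length(p) AS hops
        """,
        src="Birmingham", dst="London",
    ):
        print(" -> ".join(rec["route"]), f"({rec['hops']} hops)")

driver.close()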

0 Comments
2024/04/20
10:07 UTC

2

Manage a database of 10 billion records

Hi everyone,

I have a rather unusual project.

I have a file containing 10 billion references, each 40 characters long, with a second reference value of variable length associated with each one.

I'd like to use an API request to retrieve the value associated with a given reference in record time (ideally less than 0.5 seconds; I know around 0.3 s is possible, but I don't know how).

Which solution do you think is best suited to this problem, and how would you optimize it?

I'm not really an SQL specialist, and I wanted to move towards NoSQL, but I don't really have any ideas on how to optimize it... The aim is to be the fastest without costing €1,000 a month.

The user enters a reference via the API and gets the associated reference back almost instantly.
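
For what it's worth, this is a pure point-lookup workload, and an embedded key-value store may be worth benchmarking before paying for a hosted cluster. A hedged sketch with Python's lmdb bindings (the path and sizes are placeholders; at 10 billion entries the file would run to several hundred GB, so fast NVMe storage is assumed):

import lmdb

# Memory-mapped B-tree store: point reads are typically far below
# a millisecond once the relevant pages are in the OS cache.
env = lmdb.open("/data/refs.lmdb", map_size=2**40)  # reserve up to 1 TiB

def put(ref: bytes, value: bytes) -> None:
    with env.begin(write=True) as txn:
        txn.put(ref, value)

def get(ref: bytes) -> bytes | None:
    with env.begin() as txn:
        return txn.get(ref)

put(b"A" * 40, b"associated-reference")
print(get(b"A" * 40))  # an API handler would wrap this lookup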

Many thanks to you

3 Comments
2024/04/04
22:02 UTC

1

Migrating data from DynamoDB tables into Google Firestore databases

I am working on a startup and we have decided to stop using AWS and start using Google Firebase. I have several small data tables (NoSQL) in DynamoDB that I need to move over to the Firestore (NoSQL) database. I can easily export the data from DynamoDB as JSON, but I don't know how to insert that data into Firestore. I need this done yesterday, so I could really use some help. Thanks!
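
A hedged sketch of the copy with boto3 and the google-cloud-firestore client (table, collection, and key names are placeholders; note DynamoDB returns numbers as Decimal, which Firestore won't accept unconverted):

from decimal import Decimal

import boto3
from google.cloud import firestore

def plain(v):
    """Convert DynamoDB Decimals into plain ints/floats for Firestore."""
    if isinstance(v, Decimal):
        return int(v) if v == v.to_integral_value() else float(v)
    if isinstance(v, dict):
        return {k: plain(x) for k, x in v.items()}
    if isinstance(v, list):
        return [plain(x) for x in v]
    return v

table = boto3.resource("dynamodb").Table("my-table")
col = firestore.Client().collection("my-collection")

scan = table.scan()
while True:
    for item in scan["Items"]:
        col.document(str(item["id"])).set(plain(item))  # "id" = your partition key
    if "LastEvaluatedKey" not in scan:
        break
    scan = table.scan(ExclusiveStartKey=scan["LastEvaluatedKey"])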

0 Comments
2024/04/04
16:49 UTC

2

Graph Your World on Windows with Apache AGE

Hey r/nosql crew!

🚀 Big news: Apache AGE's Windows installer is here! Making graph databases a breeze for our Windows-using friends. 🪟💫 Download here

Why You’ll Love It:

  • Easy Install: One-click away from graph power.
  • Open-Source Magic: Dive into graphs with the robustness of PostgreSQL.

Join In:

  • Got a cool graph project? Share it!
  • Questions or tips? Let's hear them!

Let's explore the graph possibilities together!

0 Comments
2024/03/26
20:45 UTC

3

Apache AGE: Graph Meets SQL in PostgreSQL

Hello r/NoSQL community!

I'm thrilled to dive into a topic that bridges the gap between the relational and graph database worlds, something I believe could spark your interest and potentially revolutionize the way you handle data complexities. As someone deeply involved in the development of Apache AGE, an innovative extension for PostgreSQL, I'm here to shed light on how it seamlessly integrates graph database capabilities into your familiar SQL environment.

Why Apache AGE?

Here's the scoop:

  • Seamless Integration: Imagine combining the power of graph databases with the robustness of PostgreSQL. That's what AGE offers, allowing both graph and relational data to coexist harmoniously.
  • Complex Relationships Simplified: Navigate intricate data relationships with ease, all while staying within the comfort and familiarity of SQL. It's about making your data work smarter, not harder.
  • Open-Source Innovation: Join a community that's passionate about pushing the boundaries of database technology. Apache AGE is not just a tool; it's a movement towards more flexible, interconnected data solutions.

Who stands to benefit? Whether you're untangling complex network analyses, optimizing intricate joins, or simply graph-curious, AGE opens up new possibilities for enhancing your projects and workflows.
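
To make that concrete, here's a minimal sketch of running Cypher inside PostgreSQL through psycopg2 (the graph name and data are illustrative):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
cur = conn.cursor()

# Load the AGE extension and expose its catalog on the search path.
cur.execute("LOAD 'age';")
cur.execute('SET search_path = ag_catalog, "$user", public;')

# Create a graph and add two connected vertices with Cypher.
cur.execute("SELECT create_graph('demo');")
cur.execute("""
    SELECT * FROM cypher('demo', $$
        CREATE (:Person {name: 'Ann'})-[:KNOWS]->(:Person {name: 'Bo'})
    $$) AS (v agtype);
""")

# Query the relationship back - still from plain SQL.
cur.execute("""
    SELECT * FROM cypher('demo', $$
        MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name, b.name
    $$) AS (a agtype, b agtype);
""")
print(cur.fetchall())
conn.commit()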

I'm here for the conversation! Eager to explore how Apache AGE can transform your data landscape? Got burning questions or insights? Let's dive deep into the world of graph databases within PostgreSQL.

For a deep dive into the technical workings and documentation, and to join our growing community, visit the Apache AGE GitHub and official website.

0 Comments
2024/03/20
21:01 UTC

1

How to explain NoSQL concepts to undergraduate kids with very little or no knowledge of SQL

Same as title

1 Comment
2024/02/29
23:11 UTC

2

Converting SQL peer table data to JSON

I'm having trouble determining the best structure for a peer group database and generating a JSON import file from sample data in table format. I'm new to MongoDB and coming from an Oracle SQL background. In a relational framework, I would set up two tables: one for peer group details and a second for peers. I already have sample data I would like to load into Mongo, but it's split out into two different tables.

I've heard that generally I should try to create one collection and use embedding, but how would I create that JSON from my sample tabular data? And long term, we want to build an API on this peer data where users can look up by peer group or by individual peer. Is an embedded structure still the best choice considering that requirement? Thanks for any info, tips, and advice!
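
One hedged sketch of the embedding step: group the peer rows under their parent group rows and write one JSON document per line, which mongoimport accepts directly (all field names here are invented):

import json

# Stand-ins for the two tabular exports from Oracle.
groups = [{"group_id": 1, "group_name": "Regional Banks"}]
peers = [
    {"group_id": 1, "peer_id": "P01", "peer_name": "First Bank"},
    {"group_id": 1, "peer_id": "P02", "peer_name": "Second Bank"},
]

# Embed each group's peers inside the group document - the usual
# shape when peers are mostly read through their group. Lookups by
# individual peer can still be served by an index on "peers.peer_id".
with open("peer_groups.json", "w") as f:
    for g in groups:
        doc = {**g, "peers": [p for p in peers if p["group_id"] == g["group_id"]]}
        f.write(json.dumps(doc) + "\n")

# Then: mongoimport --db peers --collection groups peer_groups.json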

0 Comments
2024/02/08
02:28 UTC

4

MongoDB vs DynamoDB vs DocumentDB vs Elasticsearch for my use case

Disclaimer: I don't have any experience with NoSQL

Hi, I'm currently developing a fantasy sports web app. A game can have many matches, and each match can have many stats results (say a match contains at minimum 20 rows of stats results, covering both Player A and Player B, that will be stored in the database).

That would be a hell of a load on my MySQL database, so I thought of using NoSQL, since the structure of the results also varies per game type.

Now, I don't really know which one to use, all while considering that we are on a budget, so the most cost-effective DB would be preferred. We are in an AWS environment, btw.
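
For what it's worth, the varying-structure point is exactly where document stores are comfortable; a hedged pymongo sketch of per-game-type stat documents living in a single collection (field names invented):

from pymongo import MongoClient

stats = MongoClient().fantasy.match_stats

# Documents in one collection can carry different stat fields per
# game type; only match_id/player/game are common to all of them.
stats.insert_many([
    {"match_id": 1, "player": "A", "game": "basketball",
     "points": 31, "rebounds": 9},
    {"match_id": 2, "player": "A", "game": "chess",
     "result": "win", "moves": 42},
])

# Shared fields stay indexable and queryable across game types.
stats.create_index([("match_id", 1), ("player", 1)])
for doc in stats.find({"player": "A"}, {"_id": 0}):
    print(doc)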

10 Comments
2024/01/19
13:59 UTC

4

Seeking Guidance: Designing a Data Platform for Efficient Image Annotation, Deep Learning, and Metadata Search

Hello everyone!

Currently, at my company, I am tasked with designing and leading a team to build a data platform to meet the company's needs. I would appreciate your assistance in making design choices.

We have a relatively small dataset of around 50,000 large images on S3, with each image having an average of 12 annotations. This results in approximately 600,000 annotations, each consisting of text metadata as well as image data. Additionally, these 50,000 images are expected to grow to 200,000 in a few years.

Our goal is to train Deep Learning models using these images and establish the capability to search and group them based on their metadata. The plan is to store all images in a data lake (S3) and utilize a database as a metadata layer. We need a database that facilitates the easy addition of new traits/annotations (schema evolution) for images, enabling data scientists and machine learning engineers to seamlessly search and extract data.

How can we best achieve this goal, considering the growth of our dataset and the need for flexible schema evolution in the database for efficient searching and data extraction by our team?
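
Not an endorsement of any particular database, but to illustrate the metadata-layer idea: a document store keyed by S3 URI gives you annotation fields that can evolve without migrations while staying searchable. A sketch with pymongo (all names invented):

from pymongo import MongoClient

meta = MongoClient().platform.images

# One document per S3 object; new annotation traits are just new
# fields, so the schema can evolve without a migration.
meta.update_one(
    {"s3_uri": "s3://bucket/images/000123.tif"},
    {"$set": {
        "width": 8192, "height": 8192,
        "annotations": [
            {"label": "crack", "bbox": [10, 20, 200, 240],
             "mask_uri": "s3://bucket/masks/000123_0.png"},
        ],
    }},
    upsert=True,
)

# Index whatever the data scientists filter by, then query freely.
meta.create_index("annotations.label")
for doc in meta.find({"annotations.label": "crack"}, {"s3_uri": 1, "_id": 0}):
    print(doc["s3_uri"])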

Do you have any resources/blog posts with similar problems and solutions to those described above?

Thank you!

0 Comments
2023/12/28
12:51 UTC

3

MongoDB ReplicaSet Manager for Docker Swarm

I wrote this tool out of a need to self-host a MongoDB-based application on Docker Swarm, as file-based shared storage of MongoDB data does not work: Mongo requires a replica set deployment.

This tool can be used with any Docker-based application or service that depends on Mongo. It automates the configuration, initiation, monitoring, and management of a MongoDB replica set within a Docker Swarm environment, ensuring continuous operation and adapting to changes in the Swarm network to maintain high availability and data consistency.
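
The core initiation step being automated looks roughly like this in pymongo (the hosts and set name are placeholders):

from pymongo import MongoClient

# Connect directly to one member and initiate a three-member set.
client = MongoClient("mongodb://mongo1:27017", directConnection=True)
client.admin.command("replSetInitiate", {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "mongo1:27017"},
        {"_id": 1, "host": "mongo2:27017"},
        {"_id": 2, "host": "mongo3:27017"},
    ],
})

# A monitoring loop amounts to re-checking this status and
# reconfiguring as Swarm adds, removes, or reschedules tasks.
for m in client.admin.command("replSetGetStatus")["members"]:
    print(m["name"], m["stateStr"])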

If anybody finds this use-case useful and wishes to try it out, here's the repo:

MongoDB-ReplicaSet-Manager

0 Comments
2023/12/06
08:06 UTC

1

Help with the conceptual and logical model of NoSQL

Hi, I am a student and my professor gave me the job of researching the NoSQL key-value model. One of the things he asks me to do is explain its conceptual and logical model. Does anyone know where I can find information about this? Sorry for the bad English.

0 Comments
2023/10/11
21:43 UTC

1

I'm studying and I'm stuck and so frustrated

OK, so I'm in an SQL class, working on my BA. I'm using db.CollectionName.find() and it just does... nothing. No error or anything; it just goes to the next line. What am I doing wrong?! Edit to add: I'm using Mongo 4.2.
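
A hedged diagnostic sketch in Python (pymongo): find() printing nothing usually just means the collection is empty under that exact name, or you're in the wrong database, and these checks make that visible:

from pymongo import MongoClient

client = MongoClient()
db = client["test"]  # the mongo shell's default database

print(client.list_database_names())              # is your data where you think?
print(db.list_collection_names())                # names are case-sensitive
print(db["CollectionName"].count_documents({}))  # 0 would explain an empty find()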

4 Comments
2023/09/07
23:10 UTC

2

Need help converting a large MongoDB db to MSSQL

Hi, I can't go into too much detail, but I need to convert a large MongoDB database (about 16 GB) into a SQL database. The idea I have right now is to export the MongoDB database as a JSON file and use a Python script to push it into MSSQL. This needs to be a script because the job has to run repeatedly. Does anyone have any other feasible ideas?
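
A hedged sketch of the direct route with pymongo and pyodbc, skipping the intermediate JSON file (connection strings, table, and field names are placeholders; for 16 GB you'd run this in batches):

import pyodbc
from pymongo import MongoClient

src = MongoClient()["appdb"]["users"]
dst = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=appdb;Trusted_Connection=yes;"
)
cur = dst.cursor()
cur.fast_executemany = True  # batch the parameterized inserts

# Flatten each document into the columns the SQL table expects;
# fields missing from a document become NULL.
rows = [
    (str(d["_id"]), d.get("name"), d.get("email"))
    for d in src.find({}, {"name": 1, "email": 1})
]
cur.executemany("INSERT INTO users (mongo_id, name, email) VALUES (?, ?, ?)", rows)
dst.commit()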

1 Comment
2023/07/26
15:54 UTC

1

ELI5 nosql

Can someone please help me understand in what use cases a NoSQL database would be better than a traditional RDBMS?

I've googled so much but the more I google the more confused I am.

Especially from a website perspective.

Why not use something like MySQL or postgres for the backend?

I know NoSQL gives you quick reads and writes, but at the cost of data integrity. Why can't you just dump JSON blobs into PostgreSQL?

What benefit do you get from a nosql over something structured?

0 Comments
2023/07/25
01:14 UTC

2

How can I make (game_id, user_id) unique, yet (game_id, score) indexed/clustered, in ScyllaDB?

See this in ScyllaDB/Cassandra:

CREATE TABLE scores_leaderboards (
    game_id int,
    score int,
    user_id bigint,
    PRIMARY KEY (game_id, score, user_id)
) WITH CLUSTERING ORDER BY (score DESC);

The idea is that we can get the user IDs with the top scores for a game.

This means that (game_id, score) needs to be indexed, and that's why I put it like that in the Primary Key.

However, I had to include user_id so that 2 users can have the exact same score.

The problem is that, like this, (game_id, user_id) isn't unique. I want to make sure the table never contains 2+ pairs of the same (game_id, user_id).

My questions:

  1. What do you suggest I can do, so that (game_id, user_id) is unique, yet (game_id, score) is indexed?

  2. Ideally, (game_id, user_id) would be the primary key, and then I'd create a compound index with (game_id, score).

However, if I try to create a compound index,

CREATE INDEX scores_leaderboards_idx ON scores_leaderboards (game_id, score);

I get the following:

InvalidRequest: Error from server: code=2200 [Invalid query] message="Only CUSTOM indexes support multiple columns"

But I can't find how to create a CUSTOM index... is this an extension I need to install?
Is there any recommendation against using custom indexes?
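
Not from the post, but a common pattern is to add a second table as the source of truth, keyed by (game_id, user_id), and maintain the leaderboard table alongside it; uniqueness is then enforced with a lightweight transaction. A hedged sketch with the Python cassandra-driver:

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("game")

# Source of truth: one row per (game_id, user_id) pair.
session.execute("""
    CREATE TABLE IF NOT EXISTS user_scores (
        game_id int, user_id bigint, score int,
        PRIMARY KEY (game_id, user_id))
""")

def set_score(game_id: int, user_id: int, new: int) -> None:
    res = session.execute(
        "INSERT INTO user_scores (game_id, user_id, score) "
        "VALUES (%s, %s, %s) IF NOT EXISTS",
        (game_id, user_id, new),
    )
    if not res.was_applied:
        old = res.one().score  # a failed LWT returns the existing row
        session.execute(
            "UPDATE user_scores SET score = %s WHERE game_id = %s AND user_id = %s",
            (new, game_id, user_id),
        )
        session.execute(  # drop the stale leaderboard entry
            "DELETE FROM scores_leaderboards "
            "WHERE game_id = %s AND score = %s AND user_id = %s",
            (game_id, old, user_id),
        )
    session.execute(
        "INSERT INTO scores_leaderboards (game_id, score, user_id) "
        "VALUES (%s, %s, %s)",
        (game_id, new, user_id),
    )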

0 Comments
2023/07/08
20:07 UTC

0

Stateless database connections + extreme simplicity: the future of NoSQL

This is a comparison of how a bank account balance transfer looks on Redis versus LesbianDB:

https://preview.redd.it/2b7dbftosz6b1.png?width=1090&format=png&auto=webp&s=8b1a5ddbe0db97846c228a6589db1727d613c982

Notice the huge number of round trips needed to transfer $100 from Alice to Bob if we use Redis, compared to the 2 round trips used by LesbianDB (assuming we won the CAS). Optimistic cache coherency can reduce this to a single hop for hot keys.

We understand that database-tier crashes can easily become catastrophic, unlike application-tier crashes, and that the database tier has limited scalability compared to the application tier. That's why we kept database-tier complexity to an absolute minimum. Most of the fancy things, such as B-tree indexes, can be implemented in the application tier. That's why we implement only a single command: vector compare-and-swap. With this single command, you can perform atomic reads and conditional writes to multiple keys in one query. It can be used to implement atomically consistent reading/writing and optimistic locking.
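
To illustrate the primitive (with a toy in-memory stand-in, since this is not LesbianDB's actual API), here is how a balance transfer reduces to a read plus one vector compare-and-swap:

import threading

# Toy in-memory stand-in for the server's vector CAS primitive.
_store, _lock = {"alice": 500, "bob": 100}, threading.Lock()

def vector_cas(expected: dict, writes: dict) -> tuple[bool, dict]:
    """Atomically: if every key holds its expected value, apply all
    writes. Always returns the current values of the checked keys."""
    with _lock:
        current = {k: _store.get(k) for k in expected}
        if current == expected:
            _store.update(writes)
            return True, current
        return False, current

def transfer(src: str, dst: str, amount: int) -> None:
    # Round trip 1: a CAS that cannot match doubles as a plain read.
    _, vals = vector_cas({src: None, dst: None}, {})
    while True:
        a, b = vals[src], vals[dst]
        if a < amount:
            raise ValueError("insufficient funds")
        # Round trip 2: write both balances iff nobody changed them.
        ok, vals = vector_cas({src: a, dst: b},
                              {src: a - amount, dst: b + amount})
        if ok:
            return  # won the CAS; the loop repeats only on contention

transfer("alice", "bob", 100)
print(_store)  # {'alice': 400, 'bob': 200}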

Stateless database connections are one of the many ways we make LesbianDB overwhelmingly superior to other databases (e.g., Redis). Unlike Redis, LesbianDB database connections are WebSockets-based and 100% stateless. This allows the same database connection to be used by multiple requests at the same time. Stateless database connections and pure optimistic locking also give us much more availability in the face of network failures and application-tier crashes than stateful, pessimistically locking MySQL connections. Everyone knows what happens when the holder of MySQL row locks can't talk to the database: the rows stay locked until the connection times out or the database is restarted (oh no).

But stateless database connections have one inherent drawback: no pessimistic locking! This is no problem, though, since we already have optimistic locking. Besides, pessimistic locking of remote resources is prohibited by LesbianDB's design philosophy.

https://github.com/jessiepathfinder/LesbianDB-v2.1

2 Comments
2023/06/19
15:52 UTC

2

Document store with built in version history?

I’m looking for a no-sql store that includes built-in version history of the docs. Any recommendations?

2 Comments
2023/05/17
06:32 UTC

1

Learning SQL for Data Analysis

My goal is to transition into data analysis, for which I have dedicated 1-2 months to learning SQL. I will be using one of these two courses, but I am confused between them:

https://www.learnvern.com/course/sql-for-data-analysis-tutorial

https://codebasics.io/courses/sql-beginner-to-advanced-for-data-professionals

The former is more of an academic course of the kind you would expect in college, whereas the other is more practical. For those working in the data domain, especially data analysts: please suggest which one is closer to the everyday work you do at your job. It would also be great if you could point out specific sections of the former worth doing, since it is the longer course (25+ hours), so that I can get the best of both worlds instead of studying both in full.

Thanks.

0 Comments
2023/05/12
05:02 UTC
