/r/mongodb
News, articles, and interesting stuff in general about MongoDB (unofficial).
I see almost everyone who uses MongoDB with JavaScript (or other languages) using it through Mongoose. When you do that, you are defining a strict schema and relationships to ensure inconsistent data does not exist.
But in hindsight, you have then already converted your MongoDB into a relational database, so what difference really remains between an RDBMS like PostgreSQL and MongoDB?
Plus, if I somehow get access to your server or wherever your MongoDB is running, I can mongosh/MongoDB Compass into it and start adding wrong data, and your database is screwed big time.
Please give me an example use case where you cannot do something with an RDBMS but can with MongoDB with Mongoose on top of it.
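To make the question concrete, here's the kind of flexibility I mean, a minimal sketch with a made-up Product model, where per-document attributes would be awkward as fixed RDBMS columns:
```
// Hypothetical example: one collection holding products whose attributes
// differ per category -- painful as fixed columns, natural as documents.
import mongoose from 'mongoose';

await mongoose.connect('mongodb://localhost:27017/test'); // placeholder URI

const productSchema = new mongoose.Schema({
  name: { type: String, required: true },   // still enforced by Mongoose
  attributes: mongoose.Schema.Types.Mixed,  // free-form, varies per product
});
const Product = mongoose.model('Product', productSchema);

await Product.create([
  { name: 'T-shirt', attributes: { size: 'M', color: 'black' } },
  { name: 'SSD', attributes: { capacityGb: 512, interface: 'NVMe' } },
]);
```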
I was planning to use a free, shared instance for storing some of my data. I just want to understand whether there are any risks/vulnerabilities: since at the end of the day it is a shared server, might someone get hold of the data? (Sorry if it's a noob question.)
Hi everyone. I am currently facing extremely weird behaviour from one of our test MongoDB clusters. We are running some Glue-based data migration pipelines, mapping data from a bunch of CSVs into our MongoDB. Everything seems fine, except for a very strange Int32-typed field in one of the collections. Right after insertion, the field is populated with the correct data from the CSVs. But after one full collection read of any kind (normal query, read from the Spark connector, dump collection to CSV, etc.), every single value in that field turns into 0. Dumbfounded, we checked the input CSVs, checked the pipelines, printed the field during the mapping Glue job runs, aggregated the field during the mapping runs... none of it gives us any clue how this is happening. I'm writing to ask the community about this strange problem, looking for anyone who has experienced the same thing, or just about any hint at the root cause.
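In case it helps anyone suggest a diagnosis, this is the kind of check we can run in mongosh (collection and field names below are placeholders for our real ones):
```
// Count the BSON types actually stored in the suspect field; a silent
// type change (e.g. Int32 vs Long vs Double) would show up here.
db.myCollection.aggregate([
  { $group: { _id: { $type: "$suspectField" }, count: { $sum: 1 } } }
])

// Spot-check a few raw values straight from the server.
db.myCollection.find({}, { suspectField: 1 }).limit(5)
```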
Hi,
I have deployed MongoDB using this guide:
https://www.mongodb.com/developer/products/mongodb/mongodb-with-kubernetes/
After that, I successfully migrated data from DigitalOcean into the AKS MongoDB pods.
Now the issue is that I'm using the command below as the standard connection string, added to an env variable; after that it should show the data in my product, but it's not working. I think it's not connecting to the database, but I don't know whether I'm using the correct connection string or not.
If I try from inside an Azure VM it works fine using the same mongosh command, and it also works from my local laptop.
The database is there inside MongoDB on all 3 pods.
Even ping, nslookup, and telnet work, e.g.:
ping mongodb-0.mongodb-service.mongodb.svc.cluster.local ...I'm getting replies
mongodb://mongodb-0.mongodb-service.mongodb.svc.cluster.local,mongodb-1.mongodb-service.mongodb.svc.cluster.local,mongodb-2.mongodb-service.mongodb.svc.cluster.local:27017/DATABASE_NAME?authSource=DATABASE_NAME&replicaSet=rs0
Here are the logs of the mongodb-0 pod:
```
{"client":"10.244.1.67:38838","isSpeculative":false,"isClusterMember":false,"mechanism":"","user":"","db":"","error":"AuthenticationAbandoned: Authentication session abandoned, client has likely disconnected","result":337,"metrics":{"conversation_duration":{"micros":168003080,"summary":{}}},"extraInfo":{}}}
{"t":{"$date":"2024-11-30T07:55:45.805+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.1.67:48518","uuid":{"uuid":{"$uuid":"baea297e-de3f-4e1a-bf5a-b23bbb963ccc"}},"connectionId":648,"connectionCount":15}}
{"t":{"$date":"2024-11-30T07:55:45.805+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn648","msg":"client metadata","attr":{"remote":"10.244.1.67:48518","client":"conn648","negotiatedCompressors":["snappy","zstd","zlib"],"doc":{"driver":{"name":"NetworkInterfaceTL-ReplNetwork","version":"8.0.3"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"24.04"}}}}
{"t":{"$date":"2024-11-30T07:55:47.802+00:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn648","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":1997}}
{"t":{"$date":"2024-11-30T07:56:39.934+00:00"},"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":{"ts_sec":1732953399,"ts_usec":934141,"thread":"1:0x7f053cdc76c0","session_name":"WT_SESSION.checkpoint","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":7,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 227886, snapshot max: 227886 snapshot count: 0, oldest timestamp: (1732953098, 10) , meta checkpoint timestamp: (1732953398, 10) base write gen: 77794"}}}
{"t":{"$date":"2024-11-30T07:56:48.100+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.1.67:56186","uuid":{"uuid":{"$uuid":"c987a09f-4546-4e8a-b126-657a6f3bb5b0"}},"connectionId":649,"connectionCount":16}}
{"t":{"$date":"2024-11-30T07:56:48.100+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn649","msg":"client metadata","attr":{"remote":"10.244.1.67:56186","client":"conn649","negotiatedCompressors":["snappy","zstd","zlib"],"doc":{"driver":{"name":"NetworkInterfaceTL-MirrorMaestro","version":"8.0.3"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"24.04"}}}}
{"t":{"$date":"2024-11-30T07:56:49.803+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn648","msg":"Connection ended","attr":{"remote":"10.244.1.67:48518","uuid":{"uuid":{"$uuid":"baea297e-de3f-4e1a-bf5a-b23bbb963ccc"}},"connectionId":648,"connectionCount":15}}
{"t":{"$date":"2024-11-30T07:56:49.803+00:00"},"s":"I", "c":"ACCESS", "id":5286307, "ctx":"conn648","msg":"Failed to authenticate","attr":{"client":"10.244.1.67:48518","isSpeculative":false,"isClusterMember":false,"mechanism":"","user":"","db":"","error":"AuthenticationAbandoned: Authentication session abandoned, client has likely disconnected","result":337,"metrics":{"conversation_duration":{"micros":63997884,"summary":{}}},"extraInfo":{}}}
"Ending idle connection because the pool meets constraints"
```
Here is how I'm passing the env variable into my product; it shows whatever connection string I add into the env variable.
Note: we don't want to use a username and password or an SRV connection string, and we don't want to expose the database externally; no outside user or traffic should be able to reach it.
That means all we need is local, in-cluster connectivity to the database.
Please help me out.
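For reference, here is the same connection string with the default port spelled out on every host (hosts without an explicit port default to 27017, so the two forms should be equivalent):
```
mongodb://mongodb-0.mongodb-service.mongodb.svc.cluster.local:27017,mongodb-1.mongodb-service.mongodb.svc.cluster.local:27017,mongodb-2.mongodb-service.mongodb.svc.cluster.local:27017/DATABASE_NAME?authSource=DATABASE_NAME&replicaSet=rs0
```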
I’m seeking insights into how MongoDB developers can help businesses manage large-scale data efficiently as they grow. As organizations scale, optimizing database performance, scalability, and security becomes crucial. I’m particularly interested in understanding how MongoDB development services can address these challenges, improve data management, and mitigate risks. Examples of how MongoDB developers contribute to long-term growth and operational success would also be valuable.
Transactions are not supported on standalone MongoDB instances, so to enable this feature in Docker Compose, I first have to configure two MongoDB instances as follows:
networks:
  maksit-vault-network:
    driver: bridge

services:
  mongo-arbiter:
    container_name: mongo-arbiter
    hostname: mongo-arbiter
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    command: >
      /bin/bash -c "
      if [ ! -f /data/configdb/mongo-keyfile ]; then
      echo 'Creating mongo-keyfile...';
      openssl rand -base64 756 > /data/configdb/mongo-keyfile;
      else
      echo 'mongo-keyfile already exists';
      fi &&
      chmod 400 /data/configdb/mongo-keyfile"
    volumes:
      - ./docker-compose/mongodb/mongo-arbiter:/data/db
      - ./docker-compose/mongodb/mongo-keyfile:/data/configdb/mongo-keyfile
    networks:
      - maksit-vault-network

  mongo-rs-1:
    container_name: mongo-rs-1
    hostname: mongo-rs-1
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    command: >
      /bin/bash -c "
      if [ ! -f /data/configdb/mongo-keyfile ]; then
      echo 'Creating mongo-keyfile...';
      openssl rand -base64 756 > /data/configdb/mongo-keyfile;
      else
      echo 'mongo-keyfile already exists';
      fi &&
      chmod 400 /data/configdb/mongo-keyfile"
    volumes:
      - ./docker-compose/mongodb/mongo-rs-1:/data/db
      - ./docker-compose/mongodb/mongo-keyfile:/data/configdb/mongo-keyfile
    ports:
      - "27017:27017"
    networks:
      - maksit-vault-network
mongo-rs-1:
container_name: mongo-rs-1
hostname: mongo-rs-1
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
command: >
/bin/bash -c "
if [ ! -f /data/configdb/mongo-keyfile ]; then
echo 'Creating mongo-keyfile...';
openssl rand -base64 756 > /data/configdb/mongo-keyfile;
else
echo 'mongo-keyfile already exists';
fi &&
chmod 400 /data/configdb/mongo-keyfile
volumes:
- ./docker-compose/mongodb/mongo-rs-1:/data/db
- ./docker-compose/mongodb/mongo-keyfile:/data/configdb/mongo-keyfile
ports:
- "27017:27017"
networks:
- maksit-vault-network
Then I log in to mongo-rs-1:
docker exec -it mongo-rs-1 mongosh -u root -p example --authenticationDatabase admin
and execute the command to initiate the replica set myReplSet:
rs.initiate({
  _id: "myReplSet",
  members: [
    { _id: 0, host: "mongo-arbiter:27017", arbiterOnly: true },
    { _id: 1, host: "mongo-rs-1:27017" }
  ]
});
In the end, I have to update the container configurations once again to make these instances start as replica set members:
networks:
  maksit-vault-network:
    driver: bridge

services:
  mongo-arbiter:
    container_name: mongo-arbiter
    hostname: mongo-arbiter
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    command: >
      /bin/bash -c "
      if [ ! -f /data/configdb/mongo-keyfile ]; then
      echo 'Creating mongo-keyfile...';
      openssl rand -base64 756 > /data/configdb/mongo-keyfile;
      else
      echo 'mongo-keyfile already exists';
      fi &&
      chmod 400 /data/configdb/mongo-keyfile &&
      mongod --replSet myReplSet --bind_ip_all --keyFile /data/configdb/mongo-keyfile --setParameter diagnosticDataCollectionEnabled=false"
    volumes:
      - ./docker-compose/mongodb/mongo-arbiter:/data/db
      - ./docker-compose/mongodb/mongo-keyfile:/data/configdb/mongo-keyfile
    networks:
      - maksit-vault-network

  mongo-rs-1:
    container_name: mongo-rs-1
    hostname: mongo-rs-1
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    command: >
      /bin/bash -c "
      if [ ! -f /data/configdb/mongo-keyfile ]; then
      echo 'Creating mongo-keyfile...';
      openssl rand -base64 756 > /data/configdb/mongo-keyfile;
      else
      echo 'mongo-keyfile already exists';
      fi &&
      chmod 400 /data/configdb/mongo-keyfile &&
      mongod --replSet myReplSet --bind_ip_all --keyFile /data/configdb/mongo-keyfile"
    volumes:
      - ./docker-compose/mongodb/mongo-rs-1:/data/db
      - ./docker-compose/mongodb/mongo-keyfile:/data/configdb/mongo-keyfile
    ports:
      - "27017:27017"
    networks:
      - maksit-vault-network
As you can see, the procedure is complex and not well suited to bootstrapping a portable, standalone development environment, as it requires several manual steps and container restarts...
Does anyone know of an easier, more straightforward way to configure a replica set in Docker Compose?
P.S. This example is taken from my docker-compose.override.yml, so the image property is missing.
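(One pattern I've seen suggested, sketched here untested and under the assumption that a single-node replica set is enough for local development: run one mongod with --replSet and let a healthcheck issue rs.initiate on first start. Note it omits the keyfile/auth part of my setup.)
```
services:
  mongo:
    image: mongo:8
    command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
    ports:
      - "27017:27017"
    healthcheck:
      # rs.status() throws until the replica set is initiated, so the
      # first successful healthcheck also performs rs.initiate().
      test: mongosh --quiet --eval "try { rs.status() } catch (e) { rs.initiate() }"
      interval: 5s
      timeout: 10s
      retries: 6
```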
When exactly does MongoDB pause my cluster?
They say after 60 days, so if every 60 days I make a read request to the server (a GET request to the web API that hits the DB), will it stay active?
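(For what it's worth, this is the kind of scheduled keep-alive I had in mind, a minimal Node.js sketch; the URI env variable is a placeholder:)
```
// Minimal keep-alive: connect, issue one cheap command, disconnect.
// Run on a schedule (cron, GitHub Actions, etc.).
const { MongoClient } = require('mongodb');

async function ping() {
  const client = new MongoClient(process.env.MONGODB_URI); // placeholder
  try {
    await client.connect();
    await client.db('admin').command({ ping: 1 }); // cheap round-trip
    console.log('cluster pinged');
  } finally {
    await client.close();
  }
}

ping().catch(console.error);
```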
Hi MongoDB Community,
I am new to MongoDB and even to the Kubernetes world, and I am learning from the amazing open source community. I need to deploy the latest MongoDB version (8.0.3) using the operator, and I have a requirement to use a sharded replica set. Looking at the samples on the operator page, I can only see a replica set example.
https://github.com/mongodb/mongodb-kubernetes-operator/tree/master/config/samples
So I have a couple of questions.
Appreciate the help in any way.
Searching the MongoDB docs seems to say that "expireAfterSeconds" can only be used inside "createIndex":
https://www.mongodb.com/docs/manual/tutorial/expire-data/
But in the time series tutorial, "expireAfterSeconds" is used inside "createCollection":
https://www.mongodb.com/docs/manual/core/timeseries/timeseries-procedures/
If expireAfterSeconds works inside createCollection, then I won't need to create a separate index for it.
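If I've read the two pages right, both of these forms should work; a mongosh sketch using the docs' example names:
```
// TTL via an index on a regular collection: documents expire
// expireAfterSeconds after the value in createdAt.
db.log_events.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })

// TTL via createCollection on a time series collection: no separate
// index needed, expiry keys off the timeField.
db.createCollection("weather24h", {
  timeseries: { timeField: "timestamp", metaField: "sensorId" },
  expireAfterSeconds: 86400
})
```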
Just to be stingy with memory, is it advisable to replace Redis with Mongo? Redis keeps everything in RAM and can't keep some things in RAM while offloading others onto disk. Mongo seems to be able to offload things to disk, which frees up RAM. Can Mongo be used in place of Redis?
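(A sketch of the kind of cache I mean, using a TTL index so entries expire like Redis keys with EXPIRE; all names and the URI are made up. Run as an ES module.)
```
import { MongoClient } from 'mongodb';

const client = new MongoClient(process.env.MONGODB_URI); // placeholder
await client.connect();
const cache = client.db('app').collection('cache');

// TTL index: documents are removed once expiresAt has passed
// (the TTL monitor runs roughly once a minute, so expiry is not instant).
await cache.createIndex({ expiresAt: 1 }, { expireAfterSeconds: 0 });

// Rough equivalent of Redis SET key value EX 60
await cache.updateOne(
  { _id: 'session:123' },
  { $set: { value: { userId: 42 }, expiresAt: new Date(Date.now() + 60000) } },
  { upsert: true }
);

// Rough equivalent of Redis GET key (filter out not-yet-reaped expired docs)
const hit = await cache.findOne({ _id: 'session:123', expiresAt: { $gt: new Date() } });
```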
If I want to move my site from a Mongo DB to WordPress with SQL, how hard would that be? I am pretty proficient with WP and coding, but the database stuff is over my head.
I had someone tell me I would need to completely rewrite the PHP for WP, and I cannot wrap my head around that. I also need to map Mongo "entities" to WP tags/categories. There are roughly 3K posts, so I need to figure out how to somewhat automate this transfer!
Can content from a Mongo-powered site be migrated to WP in some way? Has anyone done it, or does anyone have recommendations for someone who can do it? Thank you!
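(The rough automation I had in mind as a first step: dump the posts to CSV with mongoexport, then pull that into WP with an import plugin or wp-cli. Database, collection, and field names below are made up:)
```
mongoexport --uri="mongodb://localhost:27017/mysite" \
  --collection=posts \
  --type=csv \
  --fields=title,body,entities,createdAt \
  --out=posts.csv
```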
I need to move some MongoDB Atlas collections to a new cluster (still on Atlas). Some of the collections are on the larger side: 100 GB+ in some cases.
What's the most efficient/quick way of handling this? Something that size isn't going to be nice if I need to export to CSV and reimport... :(
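(The best I've come up with so far is streaming a dump directly between clusters so nothing lands on disk; a sketch with placeholder URIs, assuming mongodump/mongorestore are acceptable at this size:)
```
# Stream a dump from the old cluster straight into the new one,
# with no intermediate files on disk.
mongodump --uri="mongodb+srv://user:pass@old-cluster.mongodb.net/mydb" \
  --archive | \
mongorestore --uri="mongodb+srv://user:pass@new-cluster.mongodb.net" \
  --archive --nsInclude="mydb.*"
```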
I have been trying since yesterday but am still not able to log in.
I tried changing browsers, clearing the cache, private browsing, and even a VPN, but I'm still getting this error.
Hi All,
I have an interview for a Sr. TPM role at MongoDB in a few days. Can anyone please share the kind of questions I can expect?
Thanks
Hi all, I'm working on creating a web-based environment where customers can register an account, and it creates their own database, collections for notes, and users (if they want to add more and allow RBAC).
I'm using passport.js for creating and registering users, and I have middleware and all of that. However, when I register a user, I cannot map the user to a clientID because I don't have any reference for it.
So I'm asking here: how do you manage this?
Do you have one global database that controls users and maps the clientIDs?
Do you do it by domain (email domain)?
Do you do it another way?
For example, my databases are auto-generated by UUID, where the UUID is a client ID.
Inside that database is the users collection, which holds the users.
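(To make the question concrete, here's the kind of global mapping I'm imagining, a minimal sketch with made-up names; run as an ES module:)
```
// One global "directory" database maps login identities to tenants.
// Each tenant's data then lives in its own database named by clientId.
import { MongoClient } from 'mongodb';

const client = new MongoClient(process.env.MONGODB_URI); // placeholder
await client.connect();
const directory = client.db('directory');

// At registration: create the tenant record once, then reference it
// from every user document.
async function registerTenantUser(email, clientId) {
  await directory.collection('tenants').updateOne(
    { _id: clientId },
    { $setOnInsert: { createdAt: new Date() } },
    { upsert: true }
  );
  await directory.collection('users').insertOne({ email, clientId });
}

// At login (e.g. in a passport verify callback): look up the user's
// tenant, then route queries to that tenant's database.
async function dbForUser(email) {
  const user = await directory.collection('users').findOne({ email });
  return client.db(user.clientId); // tenant database; notes etc. live here
}
```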
Hi all,
I'm starting an app build using Next.js and would like to use MongoDB to store user data and asset references but I have a 'best practices' question regarding the data schema. I understand that data that is accessed together should be stored together but I’m not sure how best to implement that given my use case.
I have two types of user. The first type “creators” can create and save assets (storing asset files on S3 and references in MongoDB), and the second type “consumers” can view all saved assets from all “creators”. Conversely, “consumers” can submit asset requests, so all “creators” should be able to see all asset requests from all “consumers”.
Being new to MongoDB, I'm not sure how best to split this out. Are there any efficiency issues with aggregating data from multiple collections and having a different user schema for each user type? I was considering a generic User schema with a userType key and two nested schemas, one per user type, but obviously that approach would mean latent fields on every single user, which probably isn't good practice. However, I would like to aggregate data in as few db calls and blocking operations as possible, which was another reason I was considering just one generic User schema for both user types.
I’d love to hear any pointers, or if anyone has built anything similar and how best to structure the schemas. It’ll be greatly appreciated.
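(One pattern that might fit what I'm describing, a sketch using Mongoose discriminators so both user types share one collection without latent fields on every document; all names are made up:)
```
import mongoose from 'mongoose';

await mongoose.connect('mongodb://localhost:27017/test'); // placeholder URI

// Base schema holds the shared fields; the discriminator key ("userType")
// is stored on every document automatically.
const options = { discriminatorKey: 'userType' };
const User = mongoose.model(
  'User',
  new mongoose.Schema({ email: { type: String, required: true } }, options)
);

// Creators carry asset references (e.g. S3 keys); consumers carry requests.
const Creator = User.discriminator(
  'Creator',
  new mongoose.Schema({ assets: [{ s3Key: String, title: String }] }, options)
);
const Consumer = User.discriminator(
  'Consumer',
  new mongoose.Schema({ requests: [{ description: String }] }, options)
);

// One query against one collection can still target a single type:
const creators = await Creator.find(); // filters on userType internally
```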
I know that Realm Device Sync is deprecated, but end of life is in September 2025. I'm developing an app with very tight first-release deadlines, and it can be redeveloped by the time end of life comes around. The main question: is it still working, at least?
Hello, I am building an offline-first app where the data should be present and changeable even when offline, and the user can do this from multiple devices. Our backend uses MongoDB and the app uses react-native. As I understand it, Realm is now deprecated. Are there any options you recommend?
The only hard constraints are using Mongo and react-native. I looked at a few options such as WatermelonDB, redux-offline, and PouchDB. All of them require a bunch of custom logic, and I'm afraid of messing up some synchronization logic like consistency and conflicts.
What is the most straightforward approach in your opinion?
Hi, I am doing a task where I need to send data from a Java application to a MongoDB database through a message broker (they advised RabbitMQ), and I am having lots of trouble making them work together.
I manage to create MongoDB databases and RabbitMQ queues separately, but I'm not managing to get the data from the queues inserted into the database.
I have tried loads with ChatGPT but can't get it to work. Does anyone have any tips or resources to help?
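(For reference, a minimal sketch of the flow I'm after, written in Node.js for brevity; the queue name, URIs, and database/collection names are made up. The same shape should apply in Java with the RabbitMQ client's DeliverCallback and the MongoDB Java driver's insertOne.)
```
// Bridge: consume JSON messages from a RabbitMQ queue and insert each
// one as a document into MongoDB. Run as an ES module.
import amqp from 'amqplib';
import { MongoClient } from 'mongodb';

const mongo = new MongoClient('mongodb://localhost:27017'); // placeholder
await mongo.connect();
const events = mongo.db('mydb').collection('events');

const conn = await amqp.connect('amqp://localhost'); // placeholder
const channel = await conn.createChannel();
await channel.assertQueue('events');

channel.consume('events', async (msg) => {
  if (msg === null) return;
  try {
    await events.insertOne(JSON.parse(msg.content.toString()));
    channel.ack(msg);               // only ack after a successful insert
  } catch (err) {
    channel.nack(msg, false, true); // requeue on failure
  }
});
```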
I can't create a database on a recently created cluster. The "Create database" button loads continuously, and when I hover over it, it shows the message "Data explorer is loading, you will be able to create a database once it has loaded."