MongoDB TTL index not expiring documents from collection

I have a TTL index on the collection fct_in_ussd as follows:

db.fct_in_ussd.createIndex(
    { "xdr_date": 1 },
    { "background": true, "expireAfterSeconds": 259200 }
)

{
    "v" : 2,
    "key" : {
        "xdr_date" : 1
    },
    "name" : "xdr_date_1",
    "ns" : "appdb.fct_in_ussd",
    "background" : true,
    "expireAfterSeconds" : 259200
}

with an expiry of 3 days. A sample document in the collection is as follows:

{
    "_id" : ObjectId("5f4808c9b32ewa2f8escb16b"),
    "edr_seq_num" : "2043019_10405",
    "served_imsi" : "",
    "ussd_action_code" : "1",
    "event_start_time" : ISODate("2020-08-27T19:06:51Z"),
    "event_start_time_slot_key" : ISODate("2020-08-27T18:30:00Z"),
    "basic_service_key" : "TopSim",
    "rate_event_type" : "",
    "event_type_key" : "22",
    "event_dir_key" : "-99",
    "srv_type_key" : "2",
    "population_time" : ISODate("2020-08-27T19:26:00Z"),
    "xdr_date" : ISODate("2020-08-27T19:06:51Z"),
    "event_date" : "20200827"
}

Problem statement: documents are not being removed from the collection. The collection still contains documents that are 15 days old.

MongoDB server version: 4.2.3

Block compression strategy is zstd

storage.wiredTiger.collectionConfig.blockCompressor: zstd

The field xdr_date is also part of another compound index.
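
If it helps with diagnosis, these are the standard checks I can run against the deployment (sketch only, no output pasted here): whether the TTL monitor is enabled and how much work it has done.

// Is the background TTL monitor enabled on this node?
db.adminCommand({ getParameter: 1, ttlMonitorEnabled: 1 })

// How many passes has the TTL monitor completed, and how many documents has it deleted?
db.serverStatus().metrics.ttl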

What’s the best way to encrypt and store text in a MongoDB database?

I have a "cloud service", which consists of 2 parts:

  • Web application, written in Next.js;
  • MongoDB database (uses MongoDB Atlas).

I allow users to sign in with GitHub and handle authentication using JWTs. Users can create and delete text files, which are saved in the database like so:

{
    "name": string,
    "content": string,
    "owner": number    <-- User ID
}

I would like to encrypt the content so that I can’t see it in the database. I was thinking about using the Web Crypto API, but I’m not sure how I’m going to store the encryption/decryption key securely.

What’s the best way to handle this case and which encryption algorithm should I use?
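
To make the question concrete, this is roughly what I had in mind with the Web Crypto API (a sketch using AES-GCM on the client; generating and holding the key like this is exactly the part I don’t know how to do securely):

// Sketch only: encrypt a file's content with AES-GCM before sending it to the API.
async function encryptContent(plainText, key) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh 96-bit nonce per file
  const cipherText = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plainText)
  );
  // Both iv and cipherText would have to be stored (e.g. base64-encoded) in the "content" field.
  return { iv, cipherText: new Uint8Array(cipherText) };
}

async function demo() {
  // Generating a key is easy; storing it somewhere safe is the open question.
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    true,
    ["encrypt", "decrypt"]
  );
  return encryptContent("hello world", key);
}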

Best way to loop through a huge MongoDB collection every minute?

I need to loop through a MongoDB collection EVERY MINUTE, and the collection will be huge in the future. I have to iterate over each document, apply some if/else conditions, and perform an HTTP POST request inside one of the if branches. I am planning to use https://www.npmjs.com/package/node-cron to run a task every minute. In that task I would fetch the collection, loop through all the documents with a forEach loop, and act on them based on the if/else conditions. Is this approach okay, or is there a better/more efficient way? A sketch of what I mean is below.
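
(Rough sketch only: the connection string, database/collection names, the filter, the someFlag condition and the postSomewhere function are all placeholders.)

const cron = require("node-cron");
const { MongoClient } = require("mongodb");

// Placeholder: perform the HTTP POST for a matching document.
async function postSomewhere(doc) { /* http request here */ }

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017", { useUnifiedTopology: true });
  const collection = client.db("mydb").collection("mycollection");

  // "* * * * *" = run every minute.
  cron.schedule("* * * * *", async () => {
    // Stream documents with a cursor instead of loading the whole collection into memory.
    const cursor = collection.find({ /* filter as much as possible server-side */ });
    while (await cursor.hasNext()) {
      const doc = await cursor.next();
      if (doc.someFlag) {
        await postSomewhere(doc);
      }
    }
  });
}

main().catch(console.error);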

Inserting many documents into a collection, but only unique elements, based on a specific field (MongoDB)

I cannot seem to find an answer to this anywhere. I need the following:

Given an array of objects with the structure:

{
    link: 'some-link',
    rating: 25,
    otherFields: '..',
    ...
}

I want to insert them into my collection. So I would just do insertMany... But I only want to insert those elements of the array that are unique, meaning that I do not want to insert objects whose "link" field is something that is already in my collection.

Meaning, if my collection has the following documents:

{
    _id: 'aldnsajsndasd',
    link: 'bob',
    rating: 34,
}
{
    _id: 'annn',
    link: 'ann',
    rating: 45
}

And I do the “update/insert” with the following array:

[{
    link: 'joe',
    rating: 10
}, {
    link: 'ann',
    rating: 45
}, {
    link: 'bob',
    rating: 34
}, {
    link: 'frank',
    rating: 100
}]

Only documents:

{
    link: 'frank',
    rating: 100
}
{
    link: 'joe',
    rating: 10
}

would be inserted into my collection.
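
A sketch of one possible direction, in case it helps frame an answer: a unique index on link plus an unordered insertMany that tolerates duplicate-key errors (mongo shell syntax; the collection name "items" and the newDocs variable are placeholders). I don't know whether this is the idiomatic way.

// Enforce uniqueness on "link".
db.items.createIndex({ link: 1 }, { unique: true })

// Unordered insert: duplicates fail individually, every other document is still inserted.
try {
    db.items.insertMany(newDocs, { ordered: false })
} catch (e) {
    // Duplicate-key errors (code 11000) are expected here and can be ignored;
    // the non-duplicate documents have already been written.
}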

MongoDB replica set

I am trying to set up a MongoDB replica set.

The idea is to have 4 instances in AWS:

  1. Node.js app, a simple webpage, configured to connect to a DB_HOST on port 27017
  2. MongoDB primary
  3. MongoDB replica
  4. MongoDB replica

My Node.js app works without any problem: if I access its IP in my browser I can see the homepage, but if I try to open :27017/posts, it keeps loading and then gives an error.

This is how I configured the project with Terraform and Bash scripts, as I need npm to run the app.

MongoDB conf:

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: <%= @port %>
  bindIp: <%= @bindIp %>

#processManagement:

#security:

#operationProfiling:

#replication:
replication:
  replSetName: rs0

#sharding:

## Enterprise-Only Options:

#auditLog:

#snmp:

Default variables:

default['mongodb']['port'] = 27017
default['mongodb']['bindIp'] = '0.0.0.0'

This is the script that configures the MongoDB primary and secondaries:

#!/bin/bash

sudo echo '10.0.1.100 rs0' >> /etc/hosts

sudo systemctl enable mongod
sudo systemctl start mongod

mongo mongodb://10.0.1.100 --eval "rs.initiate( { _id : 'rs0', members: [{ _id: 0, host: '10.0.1.100:27017' }]})"
mongo mongodb://10.0.1.100 --eval "rs.add( '10.0.2.100:27017' )"
mongo mongodb://10.0.1.100 --eval "rs.add( '10.0.3.100:27017' )"
mongo mongodb://10.0.1.100 --eval "db.isMaster().primary"
mongo mongodb://10.0.1.100 --eval "rs.slaveOk()"

sleep 60;
sudo systemctl restart metricbeat
sudo systemctl restart filebeat

sleep 180;
sudo filebeat setup -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['10.0.105.100:9200'] \
  -E setup.kibana.host=10.0.105.101:5601 && sudo metricbeat setup

As the Node.js app needs a DB_HOST variable to start seeding MongoDB, I had to set up this script:

#!/bin/bash

sleep 120
cd /home/ubuntu/AppFolder/app

export DB_HOST=mongodb://10.0.1.100:27017,10.0.2.100:27017,10.0.3.100:27017?replicaSet=rs0

node /home/ubuntu/AppFolder/app/seeds/seed.js

sudo npm install
sudo npm start &

sudo filebeat modules enable nginx

Once I spin up my instances with Terraform, log into the database primary (or any of them), and try a simple command such as rs.status(), nothing happens; the console just waits for me to input something else. In the MongoDB log I get this REPL message:

2020-05-10T01:56:27.135+0000 I REPL     [initandlisten] Did not find local voted for document at startup.
2020-05-10T01:56:27.135+0000 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset

I can see that my configuration has been recognized, and rs0 does show up in the log, but right after that I get the above message. This happens on all 3 MongoDB instances (the MongoDB service is up and running).

On the other hand, if I SSH into my app instance, I get this in the logs:

DeprecationWarning: `openSet()` is deprecated in mongoose >= 4.11.0, use `openUri()` instead, or set the `useMongoClient` option if using `connect()` or `createConnection()`. See http://mongoosejs.com/docs/4.x/docs/connections.html#use-mongo-client

/home/ubuntu/AppFolder/app/node_modules/mongodb/lib/replset.js:365
          process.nextTick(function() { throw err; })
                                        ^
MongoError: no primary found in replicaset or invalid replica set name
    at /home/ubuntu/AppFolder/app/node_modules/mongodb-core/lib/topologies/replset.js:560:28
    at Server.<anonymous> (/home/ubuntu/AppFolder/app/node_modules/mongodb-core/lib/topologies/replset.js:312:24)
    at Object.onceWrapper (events.js:315:30)
    at emitOne (events.js:116:13)
    at Server.emit (events.js:211:7)
    at /home/ubuntu/AppFolder/app/node_modules/mongodb-core/lib/topologies/server.js:300:14
    at /home/ubuntu/AppFolder/app/node_modules/mongodb-core/lib/connection/pool.js:469:18
    at _combinedTickCallback (internal/process/next_tick.js:132:7)
    at process._tickCallback (internal/process/next_tick.js:181:9)
sudo: unable to resolve host ip-10-0-103-40
npm WARN lifecycle solution-code@1.0.0~postinstall: cannot run in wd solution-code@1.0.0 node seeds/seed.js (wd=/home/ubuntu/AppFolder/app)
npm WARN solution-code@1.0.0 No repository field.

Sorry for the long post; I just wanted to be sure to share all the required info. Thank you in advance for your help.
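
In case it is relevant for diagnosing this: the seed script above only sleeps for a fixed 120 seconds, and a check like the following (mongo shell script, sketch only, using the primary's IP from the scripts above) is what I could use instead to wait until the replica set actually reports a PRIMARY.

// Run with: mongo mongodb://10.0.1.100 wait-for-primary.js  (file name is illustrative)
// Loops until rs.status() reports a PRIMARY member.
while (true) {
    let status;
    try {
        status = rs.status();
    } catch (e) {
        status = { ok: 0 };
    }
    if (status.ok === 1 && status.members.some(m => m.stateStr === "PRIMARY")) {
        print("primary elected");
        break;
    }
    sleep(2000); // mongo shell sleep() takes milliseconds
}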

Restoring from MongoDB Atlas (4.2.5) to DocumentDB (3.6) does not succeed due to error “Unsupported BSON : type Decimal128”

MongoDB Atlas (4.2.5) to DocumentDB (3.6)

Below is the mongodump command I ran (from C:\MongoDB\bin). I received a warning, but it completed successfully.

C:\MongoDB\bin>mongodump --host xxxxxx.mongodb.net:27017,xxxxxxx.mongodb.net:27017,xxxxxxmongodb.net:27017 --ssl --username Mdocdb --password xxxxx --authenticationDatabase admin --db sample_airbnb
2020-04-14T03:18:49.832+0530    WARNING: ignoring unsupported URI parameter 'replicaset'
2020-04-14T03:18:53.429+0530    writing sample_airbnb.listingsAndReviews to
2020-04-14T03:18:55.832+0530    [........................]  sample_airbnb.listingsAndReviews  0/5555  (0.0%)
2020-04-14T03:18:58.832+0530    [........................]  sample_airbnb.listingsAndReviews  0/5555  (0.0%)
.
.
2020-04-14T03:26:34.177+0530    [########################]  sample_airbnb.listingsAndReviews  5555/5555  (100.0%)
2020-04-14T03:26:34.204+0530    done dumping sample_airbnb.listingsAndReviews (5555 documents)

On an AWS EC2 instance, I issue the command below to restore the database, but it errors out.

[ec2-user@xxxxx sample_airbnb]$ ls -lrt
total 92156
-rw-rw-r-- 1 ec2-user ec2-user      738 Apr 13 21:48 listingsAndReviews.metadata.json
-rw-rw-r-- 1 ec2-user ec2-user 94362191 Apr 13 21:56 listingsAndReviews.bson

[ec2-xxx@xxxxxx ~]$ mongorestore --db airbnb sample_airbnb/ --ssl --host first-docdbxxxxxxxxus-east-2.xxx.amazonaws.com:27017 --sslCAFile rdsxxxxbundle.pem --username systemuser --password xxxxxx
2020-04-14T07:41:42.330+0000    checking for collection data in sample_airbnb/listingsAndReviews.bson
2020-04-14T07:41:42.333+0000    reading metadata for airbnb.listingsAndReviews from sample_airbnb/listingsAndReviews.metadata.json
2020-04-14T07:41:42.333+0000    restoring airbnb.listingsAndReviews from sample_airbnb/listingsAndReviews.bson
2020-04-14T07:41:42.771+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:41:43.100+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:41:43.195+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:41:43.257+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:41:43.261+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:41:43.355+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:41:43.356+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:41:43.426+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:41:43.467+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:41:43.469+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:41:43.469+0000    restoring indexes for collection airbnb.listingsAndReviews from metadata
2020-04-14T07:41:43.470+0000    Failed: airbnb.listingsAndReviews: error creating indexes for airbnb.listingsAndReviews: createIndex error: Field '2dsphereIndexVersion' is currently not supported

Since the index creation was failing, I skipped restoring indexes using the --noIndexRestore flag. The restore completed, although the earlier Decimal128 errors remained.

.
.
.
2020-04-14T07:42:01.284+0000    error: Unsupported BSON : type Decimal128
2020-04-14T07:42:01.284+0000    no indexes to restore
2020-04-14T07:42:01.284+0000    finished restoring airbnb.listingsAndReviews (5555 documents)
2020-04-14T07:42:01.284+0000    done

However, there was no data under the collection in the restored database. What could be wrong?

PS: I was able to import the dump successfully into MongoDB (4.2.5) on my local machine, so this has got to be something to do with version compatibility?

On my local DB:

C:\MongoDB\bin>mongorestore --db sample_airbnb dump\sample_airbnb\listingsAndReviews.bson
2020-04-14T13:08:42.003+0530    checking for collection data in dump\sample_airbnb\listingsAndReviews.bson
2020-04-14T13:08:42.006+0530    reading metadata for sample_airbnb.listingsAndReviews from dump\sample_airbnb\listingsAndReviews.metadata.json
2020-04-14T13:08:42.066+0530    restoring sample_airbnb.listingsAndReviews from dump\sample_airbnb\listingsAndReviews.bson
2020-04-14T13:08:43.057+0530    restoring indexes for collection sample_airbnb.listingsAndReviews from metadata
2020-04-14T13:08:43.451+0530    finished restoring sample_airbnb.listingsAndReviews (2874 documents, 0 failures)
2020-04-14T13:08:43.451+0530    2874 document(s) restored successfully. 0 document(s) failed to restore.
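
For completeness, a quick way to confirm that the source data really contains Decimal128 values (which DocumentDB 3.6 does not support) would be a $type query against my local restore; the field name price is an assumption about the sample_airbnb data set.

// Count documents whose "price" field is stored as BSON Decimal128.
db.listingsAndReviews.countDocuments({ price: { $type: "decimal" } })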

MongoDB automatic replica set creation with the JS driver

The MongoDB docs state that one can use the mongo client console to initialize replica sets by running rs.initiate([config]). However, I don’t want to have to manually use the command line for this. There is a driver method called Admin.command in the docs but I don’t know how that would work here. How do I automate configuring replica sets using the MongoDB JS driver?

I have a js script that initializes my database installation by creating the data folder and automatically setting up users for access control. I want it to also configure the replica set so the database is completely ready to use after running it. I’m using MongoDB 4.2.1 and driver version 3.3.
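
In case it clarifies what I am after, this is roughly the shape I imagine, using the replSetInitiate database command (which, as I understand it, is what the rs.initiate() shell helper wraps). The host and replica set config are placeholders, and I am not sure this is the recommended way:

const { MongoClient } = require("mongodb");

async function initiateReplicaSet() {
  // Connect to a single member directly; the replicaSet option can't be used before initiation.
  const client = await MongoClient.connect("mongodb://localhost:27017", {
    useUnifiedTopology: true,
  });
  try {
    // rs.initiate(config) in the shell is a wrapper around this admin command.
    const result = await client.db("admin").command({
      replSetInitiate: {
        _id: "rs0",
        members: [{ _id: 0, host: "localhost:27017" }],
      },
    });
    console.log(result);
  } finally {
    await client.close();
  }
}

initiateReplicaSet().catch(console.error);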

MongoDB conditional set


I’m using MongoDB + Mongoose + Node.js

App logic: we have a game similar to Minesweeper. The cells the user has selected in the game field are stored in fields: Array, and bomb locations are stored in bombs: Array.

When the user makes a request to select a cell in the game, the server must set fields[cell] = 1 (1 means the cell is selected). If there is a bomb in the cell (bombs[cell] === 1, where 1 means a bomb is in the cell), we need to make a different update: fields[cell] = 1; winning = 0. How can I do this in one query?

I tried to write a query, but it only changes fields[cell] = 1 and does not set winning = 0 when the selected cell contains a bomb.

Code:

const selection = await db.Minesweeper.findOneAndUpdate({
    uid,
    winning: null,
}, {
    $set: { [`fields.${cell}`]: 1, },
    // I need to set winning = 0, if bombs[cell] === 1
});

Game schema:

const MinesweeperSchema = new mongoose.Schema({
    uid: { type: Number, required: true },
    bet: { type: Number, required: true },
    winning: { type: Number, default: null },
    fields: { type: Array, default: new Array(5*5).fill(0) },
    bombs: { type: Array, required: true },
}, { timestamps: true });

To restate the logic: if the selected cell contains a bomb, set fields[cell_index] = 1 and winning = 0; if the cell does not contain a bomb, only set fields[cell_index] = 1.
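
For reference, the closest thing I can see to a single query is an update with an aggregation pipeline (available since MongoDB 4.2). A sketch of what I mean, where cell is the selected index; I am not sure whether every Mongoose version passes a pipeline update through unchanged (Model.collection.findOneAndUpdate would be the fallback to reach the native driver):

const selection = await db.Minesweeper.findOneAndUpdate(
    { uid, winning: null },
    [{
        $set: {
            // Rebuild the fields array with element `cell` set to 1
            // (in a pipeline, $set with "fields.<index>" does not address an array element).
            fields: {
                $map: {
                    input: { $range: [0, { $size: "$fields" }] },
                    as: "i",
                    in: {
                        $cond: [
                            { $eq: ["$$i", cell] },
                            1,
                            { $arrayElemAt: ["$fields", "$$i"] },
                        ],
                    },
                },
            },
            // If the selected cell hides a bomb, the game is lost.
            winning: {
                $cond: [
                    { $eq: [{ $arrayElemAt: ["$bombs", cell] }, 1] },
                    0,
                    "$winning",
                ],
            },
        },
    }]
);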