Extend MongoDB Memory Limit
This procedure requires downtime of the application stack.
All values shown in red are variables and may change depending on the requirement.
Symptoms
The default deployment is sized for a standard workload. As the dataset grows beyond the limits of the standard deployment profile, MongoDB's responses become noticeably slow. Typical symptoms include slow retrieval of historical data and sluggish ongoing operations.
Indications that MongoDB performance is suffering because of low memory can also be seen in the logs, for example:
serverStatus was very slow:
OR
Out of memory: Kill process 13130 (mongod)
server4 kernel: [1731430.441717] Killed process 13130 (mongod)
When these appear, it is time to increase the memory available to the MongoDB service.
MongoDB Memory Reservation Algorithm
MongoDB uses a rule of thumb for sizing its cache:
- 50% of (RAM - 1 GB)
- 256 MB
and uses whichever is larger. This applies to any environment, whether running inside a container or as a regular host service, unless customised.
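As an illustration, the sketch below works through the rule in plain shell arithmetic; the 7680 MB RAM figure is just an assumed example and not tied to any particular deployment.
----CODE BLOCK----
# Illustrative sketch only: compute the default cache size for a given amount of RAM
RAM_MB=7680                                   # assumed total RAM visible to mongod, in MB
HALF_MINUS_ONE_GB=$(( (RAM_MB - 1024) / 2 ))  # 50% of (RAM - 1 GB)
if [ "$HALF_MINUS_ONE_GB" -gt 256 ]; then
    echo "Default cache size: ${HALF_MINUS_ONE_GB} MB"
else
    echo "Default cache size: 256 MB"
fi
----CODE BLOCK----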
Procedure
Calculate MongoDB's current cache usage with the procedure below.
Perform this procedure on the HOST system:
# docker exec -it expertflow_mongo_1 sh
Copy and paste the code between the "----CODE BLOCK----" lines into the shell of the mongo container
----CODE BLOCK----
mongo --quiet --eval "printjson(db.serverStatus())" | \
awk '/"maximum bytes configured"/     { gsub(/[^0-9]/, "", $NF); MBC=$NF }   # configured WiredTiger cache size (bytes)
     /"bytes currently in the cache"/ { gsub(/[^0-9]/, "", $NF); BCITC=$NF } # bytes currently held in the cache
     END {
         if (BCITC+0 > MBC+0) {
             # convert bytes to MB for the recommendation
             printf "Memory Upgrade required: %d MB\n", BCITC/1024/1024
         } else {
             print "no upgrade needed"
         }
     }'
----CODE BLOCK----
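Depending on the cache usage, the script prints one of the following (the 980 MB figure is purely illustrative):
Memory Upgrade required: 980 MB
OR
no upgrade needed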
If the procedure above reports "Memory Upgrade required", note the recommended figure and configure MongoDB's memory limit as described below.
Procedure
Simplified Deployment
- Edit DEPLOYMENT_PATH/sds/docker-compose-core.yml for a single-node deployment, or DEPLOYMENT_PATH/sds/docker-compose-ha-mongo.yml in case of an HA deployment.
- Change the following parameters of the mongo service to a suitable value. This value should be a multiple of 256, such as 512, 768, 1024 and so on (see the example fragment after this list).
mem_limit: (X)m – replace X with the chosen value
memswap_limit: (X*2)m – replace X with the chosen value
- After changing the values appropriate to the environment, restart the mongoDB service container or simply bring it up again with up -d.
- Run the DEPLOYMENT_PATH/eftasker service, select the number corresponding to the mongo service, and then select "up".
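For reference, below is a minimal sketch of how the mongo service entry might look after the change, assuming a chosen limit of 1024 MB and a service simply named mongo (the service name, image and surrounding keys are assumptions and will differ per deployment):
----CODE BLOCK----
# Hypothetical excerpt of docker-compose-core.yml; only the two memory keys matter here
services:
  mongo:
    image: mongo              # actual image/tag depends on the deployment
    mem_limit: 1024m          # X, a multiple of 256
    memswap_limit: 2048m      # X*2, twice the memory limit
----CODE BLOCK----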
Manual Deployment
- Edit chat-solution/docker/docker-compose-core.yml and locate the mongo service.
- Change the following parameters to a suitable value. This value should be a multiple of 256, such as 512, 768, 1024 and so on (same keys as shown in the fragment above).
mem_limit: (X)m – replace X with the chosen value
memswap_limit: (X*2)m – replace X with the chosen value
- Restart the solution:
- docker-compose -f chat-solution/docker/docker-compose-core.yml down
- docker-compose -f chat-solution/docker/docker-compose-core.yml up -d
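Once the containers are back up, the new limit can optionally be verified from the host. A quick check, assuming the MongoDB container is named expertflow_mongo_1 as in the earlier step:
----CODE BLOCK----
# Show the memory limit applied to the running container, in bytes
docker inspect --format '{{.HostConfig.Memory}}' expertflow_mongo_1
# Or check current usage against the limit
docker stats --no-stream expertflow_mongo_1
----CODE BLOCK----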