Docker Out of Memory: How to Diagnose and Fix OOM Kills
Your container keeps dying and you don't know why. No error in the app. No crash message. Just — gone.
Nine times out of ten, the kernel killed it for using too much memory.
Here's how to know for sure and fix it.
# Check the container's last exit code
docker inspect <container_name> | grep -A 5 '"State"'
An OOM kill shows "OOMKilled": true:
"State": {
"Status": "exited",
"Running": false,
"OOMKilled": true,
"ExitCode": 137
}
Exit code 137 = 128 + 9, i.e. killed by SIGKILL. Combined with "OOMKilled": true, that means the kernel's OOM killer terminated the process.
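Exit codes above 128 encode the terminating signal as 128 + signal number, so you can decode any of them, not just 137. A minimal sketch:

```shell
# Decode a >128 exit code into the signal that killed the process.
code=137                # the ExitCode from docker inspect
sig=$((code - 128))     # shells report signal deaths as 128 + signal number
echo "killed by SIG$(kill -l "$sig")"   # prints: killed by SIGKILL
```

The same arithmetic tells 137 (SIGKILL, the OOM killer's signal) apart from, say, 143 (SIGTERM from a normal `docker stop`).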
Also check kernel logs:
sudo dmesg | grep -i "oom\|killed process" | tail -20
You'll see something like:
Out of memory: Kill process 12345 (node) score 847 or sacrifice child
Killed process 12345 (node) total-vm:2048000kB, anon-rss:1834000kB
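The anon-rss figure is the one that matters: resident memory at the moment of the kill. A sketch that pulls it out of a log line (the sample line here is the hypothetical one from above, not live dmesg output):

```shell
# Extract anon-rss (in kB) from an OOM-killer log line and convert to MB.
line='Killed process 12345 (node) total-vm:2048000kB, anon-rss:1834000kB'
rss_kb=$(printf '%s\n' "$line" | sed -n 's/.*anon-rss:\([0-9]*\)kB.*/\1/p')
echo "resident memory at kill time: $((rss_kb / 1024)) MB"   # prints: resident memory at kill time: 1791 MB
```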
# Live container stats
docker stats --no-stream
# Check if a memory limit is set
docker inspect <container_name> | grep -i memory
If "Memory" (under "HostConfig") is 0, there's no limit set and the container can eat all available RAM.
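You can also pull the limit directly with a Go template instead of grepping; the value is in bytes, and 0 means unlimited. A sketch (the hardcoded mem=0 stands in for real docker inspect output):

```shell
# In practice: mem=$(docker inspect --format '{{.HostConfig.Memory}}' <container_name>)
mem=0   # stand-in value for this sketch; the field is in bytes
if [ "$mem" -eq 0 ]; then
    echo "no memory limit set: container can use all host RAM"
else
    echo "memory limit: $((mem / 1024 / 1024)) MiB"
fi
```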
# Run with limit
docker run -m 512m your-image
# Or update a running container (setting --memory-swap equal to --memory disables swap use)
docker update --memory 512m --memory-swap 512m <container_name>
Or in docker-compose.yml:
services:
app:
image: your-image
deploy:
resources:
limits:
memory: 512M
# Get a shell in the container
docker exec -it <container_name> sh
# Check process memory inside the container
top
# or (note: /proc/meminfo reports host-wide totals, not the container's)
cat /proc/meminfo
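The container's own usage lives in the cgroup files rather than /proc/meminfo. A sketch that covers both the cgroup v2 and v1 layouts:

```shell
# Read current container memory usage from the cgroup files.
# cgroup v2 exposes memory.current; cgroup v1 uses memory/memory.usage_in_bytes.
if [ -r /sys/fs/cgroup/memory.current ]; then
    usage=$(cat /sys/fs/cgroup/memory.current)
elif [ -r /sys/fs/cgroup/memory/memory.usage_in_bytes ]; then
    usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
else
    usage="unavailable (no cgroup memory files found)"
fi
echo "cgroup memory usage: $usage"
```

Compare that number (in bytes) against your `-m` limit to see how close you are to a kill.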
For Node.js apps specifically:
# Cap the V8 heap (in MB) so Node garbage-collects before hitting the container limit
node --max-old-space-size=256 server.js
# Add memory monitoring
node -e "setInterval(() => console.log(process.memoryUsage()), 5000)" &
As a stopgap, auto-restart the container with a hard limit:
docker run --restart=unless-stopped -m 512m your-image
This doesn't fix the leak, but it keeps the service alive while you investigate.
Two habits prevent most OOM surprises: always set --memory limits in production, and watch docker stats in a loop: watch -n 5 docker stats --no-stream
I built ARIA to solve exactly this.
Try it free at step2dev.com (no credit card needed).