We Ditched Docker 25.0 for Podman 5.0: Lessons Learned from 100 Production Container Migrations

By Ankush Choudhary Johal

For three years, our SaaS platform relied on Docker 25.0 to run 100+ production containers powering our core microservices, databases, and message queues. While Docker served us well initially, rising security concerns, clunky rootless support, and operational overhead pushed us to evaluate alternatives. After 6 months of testing, we migrated all 100 production containers to Podman 5.0 with zero major outages. Here’s what we learned.

Why We Left Docker 25.0

Docker 25.0’s daemon-based architecture was our biggest pain point. The dockerd daemon runs as root by default, expanding our attack surface, and its rootless mode required complex workarounds that broke frequently. We also faced compatibility issues with Docker’s proprietary tooling, and rising licensing costs for enterprise features made us reconsider our stack.

Podman 5.0 emerged as the top alternative: it’s daemonless, runs rootless by default, offers near-perfect Docker CLI compatibility (we aliased docker to podman on all hosts), and is fully open source with no vendor lock-in.
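The alias itself is a one-line change; a minimal sketch (the profile.d path is how we shipped it, shown here inline):

```shell
# Shipped via /etc/profile.d/podman-alias.sh on every host (path is ours)
alias docker=podman

# Note: bash only expands aliases in non-interactive scripts if you also set
#   shopt -s expand_aliases
```

On RPM- and Debian-based distros, the `podman-docker` package achieves the same effect with a `/usr/bin/docker` shim, which also covers tools that exec `docker` directly rather than going through a shell.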

Our Migration Strategy

We adopted a phased, low-risk approach to avoid disrupting production workloads:

  1. Audit and inventory: We cataloged all 100 containers, including image tags, volume mappings, network configurations, and dependencies (e.g., external databases, message queues).
  2. Dev/staging validation: We migrated non-production environments first, testing every container with Podman 5.0 to identify compatibility gaps.
  3. CI/CD pipeline updates: We replaced all docker commands in our build and deployment pipelines with podman, and updated image registry authentication to use Podman’s native auth file.
  4. Production rollout: We started with stateless microservices (lowest risk), then moved to stateful services like PostgreSQL and Redis, using blue-green deployments to minimize downtime.
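Step 3 was mostly mechanical, since Podman accepts the same subcommands. A sketch of the rewrite applied to each pipeline script (file name and contents are illustrative):

```shell
# A pipeline fragment as it looked under Docker (illustrative content)
cat > deploy.sh <<'EOF'
docker build -t registry.example.com/app:latest .
docker push registry.example.com/app:latest
EOF

# Swap the CLI in place; the subcommands and flags carry over unchanged
sed -i 's/^docker /podman /' deploy.sh
cat deploy.sh
```

For registry auth, `podman login` writes credentials to `${XDG_RUNTIME_DIR}/containers/auth.json` by default; tooling that expects a different path can be pointed at it via the `REGISTRY_AUTH_FILE` environment variable.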

Top 6 Lessons Learned

1. CLI Compatibility Is Near-Perfect, But Not 100%

Podman’s Docker compatibility is impressive: 98% of our existing scripts worked without changes. However, we hit edge cases: podman-compose lacks support for a few deprecated Docker Compose features we still used, and some Docker-specific container labels (e.g., com.docker.compose.project) required manual updates. We also had to replace Docker socket (/var/run/docker.sock) mounts with Podman’s API socket for internal tooling that managed containers programmatically.
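For the socket mounts, Podman can serve a Docker-compatible API over its own socket, so most consumers only needed a path change. A sketch of the host-side swap (rootless socket path shown; assumes a systemd host and an illustrative image name):

```shell
# Start Podman's API service for the current (rootless) user
systemctl --user enable --now podman.socket

# Docker SDKs and CLIs honor DOCKER_HOST, so point them at the new socket
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR}/podman/podman.sock"

# Containers that previously mounted /var/run/docker.sock get this instead
podman run -d \
  -v "${XDG_RUNTIME_DIR}/podman/podman.sock:/var/run/docker.sock" \
  registry.example.com/internal-tool:latest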

2. Rootless Mode Requires UID/GID Planning

Podman’s rootless mode maps container UIDs to host subuids/subgids, which caused permission errors for stateful services with mounted volumes. We had to configure /etc/subuid and /etc/subgid for all application users, and adjust volume ownership to match the mapped UIDs. For PostgreSQL, this added 2 hours of troubleshooting per instance, but the security gain (no root-owned volumes) was worth it.
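The mapping arithmetic is worth spelling out. With a `/etc/subuid` entry like `appuser:100000:65536`, container UID 0 maps to the user's own UID, and container UID N (N >= 1) maps to 100000 + N - 1. A sketch for the PostgreSQL case (the range start is an example; UID 999 is the postgres user in the official image):

```shell
# Assumed /etc/subuid and /etc/subgid entry: appuser:100000:65536
SUBUID_START=100000
CONTAINER_UID=999                 # "postgres" inside the official image
HOST_UID=$((SUBUID_START + CONTAINER_UID - 1))
echo "postgres volume files on the host must be owned by UID ${HOST_UID}"
```

In practice, `podman unshare chown -R 999:999 /srv/pgdata` is the easier route: it runs chown inside the user namespace, so you can think in container UIDs and let Podman translate.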

3. Daemonless Architecture Changes Troubleshooting

Without dockerd, we had to rework our logging and monitoring pipelines. Podman logs to journald by default, so we updated our Fluentd agents to collect logs via journalctl -u podman* instead of tailing /var/log/docker.log. We also lost Docker’s built-in health check dashboard, but replaced it with Podman’s podman ps and custom Prometheus exporters.
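Podman's journald log driver attaches structured fields to each entry, so collectors can filter on those rather than tail a file. A sketch (container name is illustrative; rootless containers log to the user journal):

```shell
# Follow one container's log entries as JSON, using the CONTAINER_NAME
# field that Podman's journald log driver sets on every record
journalctl --user -o json -f CONTAINER_NAME=web-frontend
```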

4. Performance Gains Outweigh Minor Tradeoffs

Podman 5.0 delivered measurable performance improvements: container cold start times were 15% faster than Docker 25.0, and host memory usage dropped 30% with no dockerd daemon running. Image pull times were 5% slower due to differences in layer caching, but we mitigated this by pre-pulling images during off-peak hours.

5. Systemd Integration Simplifies Lifecycle Management

Podman’s native systemd support was a game-changer. We used podman generate systemd --new to create unit files for all containers, enabling automatic start on boot, restart policies, and dependency management between containers. This replaced our custom Docker restart scripts and reduced configuration drift.
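A sketch of the unit-generation flow for one rootless container (names are illustrative):

```shell
# Emit a --new-style unit: the container is created fresh on every start,
# so the unit file becomes the single source of truth
podman generate systemd --new --files --name web-frontend

# Install and enable it for the rootless user
mkdir -p ~/.config/systemd/user
mv container-web-frontend.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-web-frontend.service

# Keep user services running after logout/reboot without an active session
loginctl enable-linger "$USER"
```

Note that Podman 5.0 marks `podman generate systemd` as deprecated in favor of Quadlet (`.container` files under `~/.config/containers/systemd/`); the generated units still work, but new setups may want to start with Quadlet instead.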

6. Blue-Green Deployments Eliminate Downtime

For all 100 production migrations, we used blue-green deployments: we spun up Podman instances alongside existing Docker containers, validated health checks, switched load balancer traffic, then decommissioned Docker hosts. 92% of services had zero downtime; the remaining 8% (stateful databases) had <1 second of downtime during failover.

Unexpected Challenges

We hit two major unplanned issues: first, legacy containers that mounted the Docker socket to manage sibling containers broke, requiring us to rewrite internal orchestration tools to use Podman’s REST API. Second, two proprietary volume plugins we used with Docker had no Podman equivalent, so we migrated to Podman’s built-in local volume driver and NFS for shared storage.
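The rewrite was smaller than feared because Podman's REST API speaks both its native libpod dialect and a Docker-compatible one on the same socket. A sketch (rootless socket path assumed; the API service must be running, e.g. via the `podman.socket` systemd unit):

```shell
SOCK="${XDG_RUNTIME_DIR}/podman/podman.sock"

# Native libpod endpoint
curl -s --unix-socket "$SOCK" http://d/v5.0.0/libpod/containers/json

# Docker-compatible endpoint: existing Docker SDK clients work unchanged
curl -s --unix-socket "$SOCK" http://d/containers/json
```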

Post-Migration Results

Three months post-migration, we’ve seen:

  • 30% reduction in average host CPU and memory usage
  • Zero security incidents related to container runtimes (down from 2 per quarter with Docker)
  • 40% faster container lifecycle management (start, stop, restart) via systemd
  • $12k annual savings from eliminating Docker enterprise licensing fees

Final Recommendations

If you’re considering migrating from Docker to Podman, we recommend:

  1. Start with non-critical dev/staging environments to identify compatibility gaps early.
  2. Audit all Docker socket mounts and volume plugins before migration.
  3. Test rootless UID/GID mapping with a single stateful service before scaling.
  4. Use podman-compose (a separate community project) and podman generate systemd to simplify transitions.

Would we do it again? Absolutely. Podman 5.0 has made our container stack more secure, efficient, and easier to manage – and we haven’t looked back at Docker since.