Juan Diego Isaza A.
Cloudflare R2 vs S3 for VPS hosting: compare egress costs, latency, S3 compatibility, and real-world ops tradeoffs with a practical upload example.
If you’re deciding between Cloudflare R2 and S3, the “best” answer depends less on raw durability marketing and more on your VPS hosting pattern: egress-heavy content, multi-region needs, API compatibility, and how much operational complexity you can tolerate.
The headline most people know: Cloudflare R2 doesn’t charge egress fees (to the public internet). S3 does (unless you’re staying inside AWS or using specific optimizations). For VPS hosting, this is a big deal because your compute is typically not inside AWS.
Here’s the practical implication:
Opinionated take: for typical VPS workloads (DigitalOcean/Linode/Hetzner/Vultr boxes serving a public app), egress is the silent killer. R2’s pricing model aligns better with that reality.
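To make that concrete, here’s a quick back-of-envelope sketch. The per-GB rates are assumptions based on published list pricing at the time of writing, not quotes; check current pricing before budgeting:

```python
# Back-of-envelope egress comparison. Rates below are assumptions based
# on published list pricing -- verify against current provider pricing.
S3_EGRESS_PER_GB = 0.09  # USD, approximate S3-to-internet rate (first tiers)
R2_EGRESS_PER_GB = 0.0   # USD, Cloudflare R2 does not charge for egress

def monthly_egress_cost(gb_per_month: float, rate_per_gb: float) -> float:
    """Estimate the monthly egress bill for a given transfer volume."""
    return gb_per_month * rate_per_gb

traffic_gb = 2_000  # e.g., a VPS-hosted app serving ~2 TB of assets per month
print(f"S3 egress: ${monthly_egress_cost(traffic_gb, S3_EGRESS_PER_GB):,.2f}")
print(f"R2 egress: ${monthly_egress_cost(traffic_gb, R2_EGRESS_PER_GB):,.2f}")
```

At 2 TB/month, that’s roughly the cost of an extra mid-sized VPS, paid every month, just to move bytes out.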
S3 is battle-tested and globally available, but it’s still a regional service at its core. R2 is also region-backed, yet it’s designed to sit naturally behind Cloudflare’s edge network.
In VPS hosting, you usually care about two latency paths: user-to-edge (how quickly public assets reach visitors, often via a CDN cache) and VPS-to-storage (how quickly your backend can read and write objects).
However, don’t assume “edge” automatically fixes everything. If your app frequently does authenticated, uncached reads (private files, per-user objects), your VPS-to-storage latency matters more than user-to-edge latency.
Opinionated take: If you serve mostly public, cacheable assets, R2 + Cloudflare edge is hard to beat. If you do heavy backend reads/writes with strict consistency expectations and mature tooling, S3 still feels like the default.
S3’s biggest advantage isn’t just AWS; it’s the surrounding ecosystem: mature SDKs, third-party tooling, and years of documented operational practice.
Cloudflare R2 supports an S3-compatible API, which is excellent, but “compatible” isn’t always “identical.” Most common operations work fine, but you can still hit edge cases around less common features (for example, ACL behavior, storage classes, or the newest S3 APIs).
If you’re on VPS hosting and want portability, S3 compatibility is actually a point for R2: you can write against the S3 API and keep your code relatively portable.
Below is a minimal Python example using boto3 (works with S3 and often with R2 via S3-compatible endpoints). You’ll set your endpoint URL for R2; for AWS S3, you’d omit it.
```python
import boto3
from botocore.client import Config

# For Cloudflare R2: set endpoint_url to your R2 endpoint.
# For AWS S3: remove endpoint_url and use AWS credentials/region as usual.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<accountid>.r2.cloudflarestorage.com",
    aws_access_key_id="<access_key>",
    aws_secret_access_key="<secret_key>",
    config=Config(signature_version="s3v4"),
    region_name="auto",  # R2 expects "auto"; for AWS S3, use a real region
)

bucket = "my-bucket"
key = "uploads/report.pdf"
file_path = "/var/www/report.pdf"

s3.upload_file(file_path, bucket, key)
print("Uploaded", key)
```
This is the VPS-friendly approach: your app runs on a droplet/VM (say, DigitalOcean or Linode) and speaks S3-compatible APIs to external object storage.
When you’re hosting on a VPS, you’re already accepting some ops responsibility. The storage choice should reduce complexity, not add to it.
Choose Cloudflare R2 when:
- Your traffic is egress-heavy and mostly public, cacheable assets.
- Your compute lives outside AWS (DigitalOcean, Linode, Hetzner, Vultr).
- You already lean on Cloudflare at the edge and want storage behind it.
Choose AWS S3 when:
- You rely on the mature AWS ecosystem and tooling built around S3.
- You expect to move deeper into AWS-managed services later, or already live there.
- You do heavy backend reads/writes with strict consistency and feature expectations.
One more opinionated note: for many VPS-based teams, S3 ends up being “the expensive default” simply because it’s familiar. Familiarity is valuable, but in VPS land, cost model mismatches can hurt more than a small learning curve.
If you’re running a VPS-hosted app and expect substantial public traffic, I’d bias toward Cloudflare R2 first, especially when your stack already leans on Cloudflare at the edge. If you’re building a system that may later move deeper into AWS-managed services—or you already live there—S3 remains the safest long-term ecosystem bet.
For teams deploying on providers like Hetzner or DigitalOcean, it’s worth doing a one-week measurement: log object download volume, cache hit rates, and where requests originate. That traffic shape usually makes the decision obvious without a debate.
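As a starting point for that measurement, even a crude log tally gives you the two numbers that matter: bytes served and cache hit rate. A toy sketch assuming a simplified log format of `<bytes> <HIT|MISS> <path>` per line; adapt the regex to whatever your proxy or CDN actually emits:

```python
import re
from collections import Counter

# Hypothetical simplified log format: "<bytes_sent> <cache_status> <path>".
LOG_LINE = re.compile(r"^(\d+)\s+(HIT|MISS)\s+(\S+)$")

def summarize(lines):
    """Tally egress bytes and cache hit rate from simplified access logs."""
    total_bytes = 0
    status = Counter()
    for line in lines:
        m = LOG_LINE.match(line.strip())
        if not m:
            continue  # skip lines that don't match the expected format
        total_bytes += int(m.group(1))
        status[m.group(2)] += 1
    requests = sum(status.values())
    hit_rate = status["HIT"] / requests if requests else 0.0
    return total_bytes, hit_rate

sample = [
    "1048576 HIT /img/logo.png",
    "5242880 MISS /files/report.pdf",
    "1048576 HIT /img/logo.png",
]
total, rate = summarize(sample)
print(f"{total / 1e6:.1f} MB served, {rate:.0%} cache hits")
```

Multiply the uncached bytes by your provider’s egress rate and you have the monthly number the debate is actually about.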
Some links in this article are affiliate links. We may earn a commission at no extra cost to you if you make a purchase through them.