Three months ago I started logging every AI tool charge that hit my credit card. The number at the end surprised me — and not in a good way.
Total: $387 over 90 days. $1,548/year.
And that's out of my own pocket, not a company budget. It's just the tax I'm paying to stay productive as a developer in 2026.
Here's the full breakdown, what was worth it, and what I quietly cancelled.
At peak, I was paying for:
| Tool | Monthly Cost | What I Used It For |
|---|---|---|
| GitHub Copilot | $10 | Autocomplete, inline suggestions |
| Claude Max | $100 | Architecture, complex refactors, debugging |
| Cursor Pro | $20 | IDE integration, multi-file edits |
| Perplexity Pro | $20 | Research, docs lookup |
| v0.dev credits | $20 | UI prototyping |
| **Total** | **$170** | |
For context: I'm a solo developer working on a mix of client projects and a side product. Not a big team with a budget. Just me.
I signed up for everything around the same time because my feed was full of people saying they'd 10x'd their productivity. FOMO is a powerful force.
The first month felt justified. I was shipping faster. Copilot was finishing my boilerplate before I could type it. Claude was helping me architect a tricky event sourcing system I'd been putting off for weeks. Cursor's multi-file edits actually worked the way the demos showed.
I told myself: if I bill even 2 extra hours per month because of these tools, they pay for themselves.
That math worked on paper. Month 1 was a net positive.
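Quick sanity check on that math: at $170/month, two extra billable hours is the break-even point only if your rate is at least $85/hour, which is exactly what $170 over 2 hours implies. A throwaway sketch, with my numbers as the defaults:

```typescript
// Break-even: how many extra billable hours per month the AI stack
// has to generate before it pays for itself.
const monthlyCost = 170; // peak stack cost, USD/month
const hourlyRate = 85;   // implied by $170 / 2 hours; swap in your own rate

const breakEvenHours = monthlyCost / hourlyRate;
console.log(`Break-even: ${breakEvenHours.toFixed(1)} extra billable hours/month`);
// => Break-even: 2.0 extra billable hours/month
```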
By month 2, I noticed something uncomfortable: I wasn't getting faster on new problems. I was getting faster on problems I already knew how to solve.
Copilot is exceptional at boilerplate. Writing a new API endpoint, scaffolding a React component, generating TypeScript types from a schema — it's genuinely 3-4x faster with autocomplete than without.
But the hard problems — designing a clean abstraction, debugging a race condition, figuring out why a migration is mysteriously slow — those still take the same amount of time. Sometimes longer, because I'd ask Claude, get a confident-sounding wrong answer, implement it, and spend an hour debugging the AI's mistake instead of my own.
I also started noticing that Cursor and Copilot were overlapping. I was paying for two autocomplete tools. Cursor was better for multi-file edits; Copilot was better for single-file flow. But I didn't need both.
Cancelled Copilot at the end of month 2.
Month 3 I got honest with myself.
What I actually use every day: Claude Max for architecture, refactors, and debugging, and Cursor for multi-file edits.
What I use occasionally: Perplexity Pro, when I need research with sources rather than autocomplete confidence.
What I was barely using: v0.dev.
After the 90-day experiment, here's my honest assessment:
Tier 1 — Non-negotiable (worth it): Claude Max ($100) and Cursor Pro ($20). The two I kept.
Tier 2 — Situational: Perplexity Pro ($20). Genuinely useful for research, but the free tier covers most of what I actually do with it.
Tier 3 — Nice to have, easy to skip: v0.dev credits ($20) and GitHub Copilot ($10), which became redundant the day I committed to Cursor.
Here's what nobody talks about in the "AI will 10x your productivity" posts:
If AI tools cost you $1,500/year and make you 10% more productive on a $100k salary, that's $10,000 in productivity gained. Clear win.
But if they make you 10% more productive on work that was 60% boilerplate anyway... you're paying $1,500 to go slightly faster on the easy stuff. The hard problems still take the same amount of thinking.
The productivity gains from AI are real but uneven. They're biggest on:

- Boilerplate: new API endpoints, component scaffolding, generating types from a schema
- Code that follows patterns you already know, where you can verify the output at a glance

They're smallest on:

- Designing clean abstractions
- Debugging race conditions and other subtle failures
- Performance mysteries, like that migration that's inexplicably slow
Be honest with yourself about what your actual work looks like before assuming you'll get full productivity gains.
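A rough way to run that check is Amdahl's law applied to your workweek: the overall speedup is capped by the fraction of work the tools actually accelerate. The fractions here are illustrative, not measurements:

```typescript
// Overall speedup when AI only accelerates a fraction of your work
// (Amdahl's law, applied to a workweek instead of a CPU).
function overallSpeedup(aiFraction: number, aiSpeedup: number): number {
  const newTime = (1 - aiFraction) + aiFraction / aiSpeedup;
  return 1 / newTime;
}

// 60% boilerplate, AI makes that part 3.5x faster:
console.log(overallSpeedup(0.6, 3.5).toFixed(2)); // "1.75"
// Same tool, but only 20% of your work is boilerplate:
console.log(overallSpeedup(0.2, 3.5).toFixed(2)); // "1.17"
```

Same tool, same raw speedup, very different payoff depending on your work mix.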
After the 90-day experiment, I'm down to $120/month:

- Claude Max: $100
- Cursor Pro: $20
Everything else is free tier or cancelled.
The key insight: specialization beats breadth. Two tools you use deeply beat five tools you use shallowly. And every subscription you're not using is just noise that clutters your decision-making.
Audit your AI stack. You might be surprised what you're paying for.
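If you want to run the same audit, it fits in a few lines. Here's a sketch using my peak stack as the example data; the last-used numbers are placeholders, not real usage stats:

```typescript
// Tiny subscription audit: totals plus a nudge to cancel anything stale.
type Subscription = { name: string; monthly: number; lastUsedDaysAgo: number };

// Example data: my peak stack. The lastUsedDaysAgo values are made up.
const stack: Subscription[] = [
  { name: "Claude Max", monthly: 100, lastUsedDaysAgo: 0 },
  { name: "Cursor Pro", monthly: 20, lastUsedDaysAgo: 0 },
  { name: "Perplexity Pro", monthly: 20, lastUsedDaysAgo: 12 },
  { name: "v0.dev credits", monthly: 20, lastUsedDaysAgo: 45 },
  { name: "GitHub Copilot", monthly: 10, lastUsedDaysAgo: 60 },
];

const monthlyTotal = stack.reduce((sum, s) => sum + s.monthly, 0);
console.log(`Monthly: $${monthlyTotal} | Annual: $${monthlyTotal * 12}`);

for (const s of stack.filter((s) => s.lastUsedDaysAgo > 30)) {
  console.log(`Cancel candidate: ${s.name} (untouched for ${s.lastUsedDaysAgo} days)`);
}
```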
If you're building products as a developer and tracking your actual tool costs, I'd be curious what your stack looks like. Drop it in the comments.