I Tracked Every Dollar I Spent on AI Coding Tools for 90 Days. Here's What I Found.



Three months ago I started logging every AI tool charge that hit my credit card. The number at the end surprised me — and not in a good way.

Total: $387 over 90 days. $1,548/year.

That's not a line item on some company budget. It's the tax I'm paying out of pocket to stay productive as a developer in 2026.

Here's the full breakdown, what was worth it, and what I quietly cancelled.
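If you want to run the same audit, the logging itself is the easy part. Here's a minimal sketch in Python, assuming you've exported your card statement to a CSV; the filename, column names, and merchant strings below are placeholders for whatever your bank actually gives you:

```python
import csv
from collections import defaultdict

# Merchant strings to treat as "AI tooling". These are guesses at how the
# charges might appear on a statement -- match them to your own export.
AI_MERCHANTS = ("GITHUB", "ANTHROPIC", "CURSOR", "PERPLEXITY", "VERCEL")

totals = defaultdict(float)
with open("statement.csv", newline="") as f:
    # Assumes the export has at least `merchant` and `amount` columns.
    for row in csv.DictReader(f):
        description = row["merchant"].upper()
        for name in AI_MERCHANTS:
            if name in description:
                totals[name] += float(row["amount"])
                break

for name, spent in sorted(totals.items()):
    print(f"{name:<12} ${spent:8.2f}")
print(f"{'TOTAL':<12} ${sum(totals.values()):8.2f}")
```

Run it over 90 days of statements and the number stops being abstract.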


The Full Stack (What I Was Running)

At peak, I was paying for:

| Tool | Monthly Cost | What I Used It For |
| --- | --- | --- |
| GitHub Copilot | $10 | Autocomplete, inline suggestions |
| Claude Max | $100 | Architecture, complex refactors, debugging |
| Cursor Pro | $20 | IDE integration, multi-file edits |
| Perplexity Pro | $20 | Research, docs lookup |
| v0.dev credits | $20 | UI prototyping |
| **Total** | **$170/month** | |

For context: I'm a solo developer working on a mix of client projects and a side product. Not a big team with a budget. Just me.


Month 1: The Honeymoon Phase

I signed up for everything around the same time because my feed was full of people saying they'd 10x'd their productivity. FOMO is a powerful force.

The first month felt justified. I was shipping faster. Copilot was finishing my boilerplate before I could type it. Claude was helping me architect a tricky event sourcing system I'd been putting off for weeks. Cursor's multi-file edits actually worked the way the demos showed.

I told myself: if I bill even 2 extra hours per month because of these tools, they pay for themselves.

That math worked on paper. Month 1 was a net positive.
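For anyone checking the napkin: "two extra billable hours" only covers the stack above a certain rate. The arithmetic, spelled out (the hours are my estimate; the rate just falls out):

```python
monthly_cost = 170   # the full stack at its peak
extra_hours = 2      # extra billable hours/month I credited to the tools

# The stack pays for itself at any effective rate at or above this.
break_even_rate = monthly_cost / extra_hours
print(f"Break-even rate: ${break_even_rate:.0f}/hr")  # -> $85/hr
```

At typical freelance rates that bar is easy to clear, which is exactly why the math felt so safe in month 1.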


Month 2: The Cracks Start Showing

By month 2, I noticed something uncomfortable: I wasn't getting faster on new problems. I was getting faster on problems I already knew how to solve.

Copilot is exceptional at boilerplate. Writing a new API endpoint, scaffolding a React component, generating TypeScript types from a schema — it's genuinely 3-4x faster with autocomplete than without.

But the hard problems — designing a clean abstraction, debugging a race condition, figuring out why a migration is mysteriously slow — those still take the same amount of time. Sometimes longer, because I'd ask Claude, get a confident-sounding wrong answer, implement it, and spend an hour debugging the AI's mistake instead of my own.

I also started noticing that Cursor and Copilot were overlapping. I was paying for two autocomplete tools. Cursor was better for multi-file edits; Copilot was better for single-file flow. But I didn't need both.

Cancelled Copilot at the end of month 2.


Month 3: Rationalization

In month 3, I got honest with myself.

What I actually use every day:

  • Claude: Multiple times daily. Architecture, code review, explaining weird error messages, writing documentation I'd otherwise procrastinate on. Worth every dollar of the $100.
  • Cursor: 8+ hours a day. It's my IDE. The multi-file edit workflow for refactors is genuinely irreplaceable.

What I use occasionally:

  • Perplexity: Maybe 3-4 times a week for quick research. The difference between the $20 Pro tier and the free tier is... marginal. I can get 80% of the value free. Downgraded to free.

What I was barely using:

  • v0.dev: I prototyped 2 UIs in 3 months. Cool tool, but not worth $20/month for my workflow. Cancelled; I'll buy credits again if a project calls for it.

The Reckoning: What's Actually Worth It

After the 90-day experiment, here's my honest assessment:

Tier 1 — Non-negotiable (worth it):

  • Claude Max ($100): The reasoning quality at this tier is meaningfully better than what you get on the free tier. For complex problems, the jump from "correct 60% of the time" to "correct 85% of the time" is enormous. The time saved debugging AI hallucinations alone pays for it.
  • Cursor Pro ($20): If you're in it all day, $20 is nothing. The multi-file edits changed how I think about refactoring.

Tier 2 — Situational:

  • GitHub Copilot ($10): Good if you're not using Cursor. If you are, redundant.
  • Perplexity Pro ($20): Worth it if you do a lot of research. Not if you mostly code.

Tier 3 — Nice to have, easy to skip:

  • v0/similar UI tools: Use the free tier. Upgrade for specific projects, then cancel.

The Uncomfortable Math

Here's what nobody talks about in the "AI will 10x your productivity" posts:

If AI tools cost you $1,500/year and make you 10% more productive on a $100k salary, that's $10,000 in productivity gained. Clear win.

But if they make you 10% more productive on work that was 60% boilerplate anyway... you're paying $1,500 to go slightly faster on the easy stuff. The hard problems still take the same amount of thinking.
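One way to make that concrete: only the slice of your time that AI actually accelerates gets faster, and the rest caps your overall gain, Amdahl's-law style. A sketch with illustrative numbers (plug in your own split; none of these shares are measurements):

```python
def overall_speedup(share: float, speedup: float) -> float:
    """Amdahl's-law-style estimate: `share` of your time gets `speedup`x
    faster; the other (1 - share) takes exactly as long as before."""
    return 1 / ((1 - share) + share / speedup)

# Assume the boilerplate slice gets 3x faster (roughly what I saw with
# autocomplete) and vary how much of the week is actually boilerplate.
for share in (0.10, 0.25, 0.50):
    gain = overall_speedup(share, 3.0)
    ceiling = 1 / (1 - share)  # even if boilerplate became instantaneous
    print(f"{share:.0%} boilerplate -> {gain:.2f}x overall (ceiling {ceiling:.2f}x)")
```

If the hard problems own most of your week, no subscription gets you anywhere near the 10x headline.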

The productivity gains from AI are real but uneven. They're biggest on:

  • Boilerplate generation
  • Documentation
  • Translating between languages/formats
  • Explaining unfamiliar code
  • First drafts of anything

They're smallest on:

  • Novel problem solving
  • System design
  • Debugging subtle bugs
  • Any domain where you can't verify the output easily

Be honest with yourself about what your actual work looks like before assuming you'll get full productivity gains.


What I'm Running Now

After the 90-day experiment, I'm down to $120/month:

  • Claude Max: $100
  • Cursor Pro: $20

Everything else is free tier or cancelled.

The key insight: specialization beats breadth. Two tools you use deeply beat five tools you use shallowly. And every subscription you're not using is just noise that clutters your decision-making.

Audit your AI stack. You might be surprised what you're paying for.


If you're building products as a developer and tracking your actual tool costs, I'd be curious what your stack looks like. Drop it in the comments.