My Actual AI Workflow: What I Use Every Day (and What I Stopped Using)

#ai #productivity #webdev #tools
Matthew Hou

There are two types of "AI tools for developers" posts: the kind written by someone who tried tools for a week and made a YouTube video, and the kind written by someone who's been using them for 18 months and has actual opinions.

This is the second kind.

I'm a solo developer — mostly backend work, some full-stack, occasional infrastructure. Here's exactly how AI fits into my actual daily workflow, what it's genuinely replaced, and where it still fails me.


The Tools I Use Every Single Day

Claude (claude.ai)

Daily usage: 15-25 conversations

I'm not being paid to say this — Claude is the AI I reach for most. The main reasons:

Architecture conversations. I'll describe a system problem — "I need to sync state between three services with eventual consistency, here are the constraints" — and get a reasoned analysis of tradeoffs that's actually useful. Not a tutorial on what eventual consistency is. A nuanced discussion of my specific situation.

Code review. I paste a diff or a function and ask "what am I missing?" It reliably finds the edge cases I glossed over, suggests error handling I forgot, and sometimes flags something subtle I would have shipped.

Explaining unfamiliar code. I inherited a codebase with some particularly exotic use of Rust's borrow checker. Having Claude explain what each section does, and why, was faster than any docs or StackOverflow search.
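
The codebase isn't mine to share, so here's a toy sketch (invented, not from the real project) of the flavor I mean: a guard type whose lifetime is pinned to a mutable borrow of its owner. It's exactly the kind of thing where "explain what each section does, and why" pays off.

```rust
// Toy example: a cache that hands out a guard holding a mutable
// borrow, so the compiler statically prevents any other access to
// the cache while the guard is in use.
struct Cache {
    entries: Vec<(String, String)>,
}

struct EntryGuard<'a> {
    // Holding a &'a mut borrow of a cache slot keeps the whole
    // cache mutably borrowed for as long as this guard is in use.
    slot: &'a mut (String, String),
}

impl Cache {
    fn entry_mut(&mut self, key: &str) -> Option<EntryGuard<'_>> {
        // The guard's lifetime is tied to `&mut self`, which is the
        // part that reads as "exotic" the first time you hit it.
        self.entries
            .iter_mut()
            .find(|(k, _)| k == key)
            .map(|slot| EntryGuard { slot })
    }
}

impl EntryGuard<'_> {
    fn set(&mut self, value: &str) {
        self.slot.1 = value.to_string();
    }
}

fn main() {
    let mut cache = Cache {
        entries: vec![("host".into(), "localhost".into())],
    };

    if let Some(mut guard) = cache.entry_mut("host") {
        // `cache.entries.len()` here would not compile: `cache` is
        // mutably borrowed through `guard` until its last use below.
        guard.set("example.com");
    }

    println!("{:?}", cache.entries);
}
```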

The failure mode: confident wrongness on niche topics. If I ask about something obscure — a specific Kubernetes operator, a less-popular database behavior — Claude will sometimes generate plausible-sounding answers that are just wrong. I've learned to verify anything outside mainstream knowledge. This isn't unique to Claude, but it's the main way it wastes my time.


Cursor

Daily usage: 8+ hours (it's my IDE)

The hot take that turns out to be true: Cursor's value isn't the AI tab completion. It's the multi-file editing.

When I'm refactoring, I'll describe what I want to change in natural language — "rename this type everywhere, update the tests, update the documentation comments" — and watch it propagate changes across a dozen files. Reviewing the diff takes less time than making the changes manually would.

The part nobody talks about: the quality of your prompts matters as much as the tool. Cursor with vague instructions produces mediocre code. Cursor with specific, well-scoped instructions produces good code. "Clean this up" gets you a shrug; "extract the retry logic into a helper that takes max attempts and a backoff strategy, then update both call sites" gets you exactly that. The developers getting the most from it are the ones who know exactly what they want and can articulate it clearly. The tool amplifies precision.


GitHub Actions + AI-Written Workflow Files

Less glamorous than the other tools, but genuinely useful: I use Claude to write first drafts of CI/CD workflow files.

GitHub Actions syntax is one of those things I can never fully remember — specifically the trigger conditions, caching syntax, and multi-job dependencies. I describe what I need in plain English, get a workflow file, review and edit it. Saves 30-45 minutes every time I set up a new pipeline.
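
For a concrete sense of the output, here's the shape of workflow I'd ask for. Everything below is a made-up example for a hypothetical Rust project; the paths, branch names, and deploy script are placeholders:

```yaml
# Hypothetical example — adjust paths and steps to your project.
name: ci

on:
  push:
    branches: [main]
  pull_request:
    paths:
      - "src/**"
      - "Cargo.*"

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Cache the cargo registry and build artifacts between runs —
      # this is the caching syntax I can never remember.
      - uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            target
          key: cargo-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
      - run: cargo test --all

  deploy:
    # Multi-job dependency: deploy runs only if test succeeded,
    # and only on pushes to main.
    needs: test
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh  # placeholder deploy step
```

The `paths` filter, the `actions/cache` step, and the `needs:` plus `if:` gate are exactly the three spots I'd otherwise be re-looking-up.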


Tools I Use for Specific Tasks

Perplexity

Use case: research with sources

When I need to quickly understand something I don't know — a technology decision another team made, context on a library before adopting it, the details of a security CVE — Perplexity's cited answers are faster than a Google search through SEO noise.

The difference from Claude: Perplexity will tell me things have changed recently. Claude doesn't always flag that its knowledge might be outdated.

I don't use the Pro tier. The free tier handles my use case.


v0.dev / Lovable (situationally)

For rapidly prototyping a UI I don't care about deeply, these tools are useful. I describe what I want visually, get a React component, copy out the parts I need.

The caveat: the code quality is optimized for "looks good," not "is maintainable." I treat the output as a starting point, not something I'd ship directly. But for wireframes-that-work or quick internal tools, they cut hours off the process.


What I've Stopped Using

AI Git Commit Messages (daily AI summary tools)

I tried a few tools that auto-generate commit messages. They were consistently shallow — they describe what changed but not why, which is the only part that matters for a commit message. Turns out writing a good commit message requires understanding the intent behind a change, and that intent lives in my head. Dropped these after two weeks.
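
An invented example of the gap:

```
# What the auto-generated message said:
Update retry logic in api client

# What the commit actually needed to say:
Retry 429s with jittered backoff; after deploys, every client
re-authed at once and the upstream rate limiter locked us out
```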

AI-Generated Tests (as a primary workflow)

The tests AI generates tend to be thorough on the happy path and useless at finding real bugs. They test the code that was written, not the behavior that should exist. I still use AI to write test scaffolding (describing the fixture setup, generating mock data structures), but the actual test logic I write myself. Tests are how I think through edge cases — outsourcing that thinking defeats the purpose.
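
Concretely, the split looks something like this sketch (all names invented, runnable with `cargo test`). The builder is the boilerplate I'll let AI draft; the test at the bottom, checking that a full discount lands on exactly zero rather than underflowing, is the thinking I keep for myself.

```rust
struct Order {
    items: Vec<(String, u32)>, // (sku, quantity)
    discount_pct: u32,
}

// AI-drafted scaffolding: a fixture builder with sensible defaults.
struct OrderBuilder {
    items: Vec<(String, u32)>,
    discount_pct: u32,
}

impl OrderBuilder {
    fn new() -> Self {
        Self { items: vec![("SKU-1".into(), 1)], discount_pct: 0 }
    }
    fn item(mut self, sku: &str, qty: u32) -> Self {
        self.items.push((sku.into(), qty));
        self
    }
    fn discount(mut self, pct: u32) -> Self {
        self.discount_pct = pct;
        self
    }
    fn build(self) -> Order {
        Order { items: self.items, discount_pct: self.discount_pct }
    }
}

fn total_cents(order: &Order, price_cents: u32) -> u32 {
    let gross: u32 = order.items.iter().map(|(_, q)| q * price_cents).sum();
    gross - gross * order.discount_pct / 100
}

#[cfg(test)]
mod tests {
    use super::*;

    // Hand-written test logic: the edge case I actually care about.
    // A 100% discount must land on exactly zero, not underflow.
    #[test]
    fn full_discount_is_zero_not_underflow() {
        let order = OrderBuilder::new().item("SKU-2", 3).discount(100).build();
        assert_eq!(total_cents(&order, 250), 0);
    }
}
```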

Multiple LLM Subscriptions

I spent a month running Claude, ChatGPT Plus, and Gemini Pro in parallel, switching between them to "use the best tool for each task." In practice: I defaulted to Claude for almost everything and felt guilty about the subscriptions I wasn't using. Cancelled everything except Claude. Simpler, cheaper, same output.


The Honest Assessment

AI tools have changed what a productive day looks like for me. They've mostly compressed the time between "I know what I want" and "I have a first draft I can work from."

They have not:

  • Made me better at knowing what to build
  • Replaced the thinking required for hard technical decisions
  • Made debugging meaningfully faster when the bug is subtle
  • Helped me ship anything I didn't already understand how to build

The 10x productivity claim is real in a narrow sense: if you're writing boilerplate, converting between formats, writing documentation, or working in a well-understood problem space, AI tools genuinely accelerate that work.

If you're doing genuinely novel technical work — designing something that hasn't been designed before, debugging something subtle, making architectural decisions with real tradeoffs — the speedup is much smaller. The thinking is still yours.

Use them accordingly.


What does your AI workflow look like day-to-day? Curious if there are tools I haven't tried that I'm missing.