Your AI agent is lying to you — and your CLAUDE.md is why

Tags: #claudecode #cursor #ai #productivity

TL;DR — Your AI coding agent's quality is capped by the quality of its context. Most devs have stale, generic, or missing CLAUDE.md / .cursorrules files. Caliber fixes this in one command — scans your repo, generates tailored configs, recommends MCPs, and gives you a 0-100 setup score.


The problem nobody talks about

Everyone's debating which AI coding tool is best — Claude Code vs Cursor vs Codex. Meanwhile, the real reason most developers aren't getting great results is upstream of all of them:

Their project context is wrong.

Your CLAUDE.md was written in 20 minutes when you first set up the project. Your .cursorrules was copy-pasted from a Reddit thread. Neither has been touched since.

And your AI agent is making decisions based on that stale, inaccurate information every single session.

What bad context actually looks like

Here are the most common failure modes I've seen:

🚨 Stale architecture — Your CLAUDE.md says you use REST APIs but you migrated to GraphQL 3 months ago. The agent keeps generating REST patterns.

🚨 Contradictory rules — Old rules say "use CommonJS", newer ones say "use ESM". The agent picks one arbitrarily.

🚨 No MCP coverage — You're running PostgreSQL and there's a great Postgres MCP that would let your agent query your schema directly. You've never heard of it.

🚨 Config drift — You refactor every week. Your AI config was updated once, on day one.

🚨 Team inconsistency — One dev has MCPs set up, another doesn't. Rules differ across machines. There's no source of truth.
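For a concrete taste of the "contradictory rules" failure mode, here's the kind of thing that accumulates in a copy-pasted .cursorrules over time (an invented example, not from a real project):

```text
# .cursorrules (hypothetical, accreted across months)
- Use CommonJS: module.exports and require()      <- from the original template
- All modules use ESM import/export syntax         <- pasted in later from a blog
- Write tests with Jest
- Write tests with Vitest; do not use Jest         <- added after a migration
```

The agent can't know which rule is current, so each session it effectively flips a coin — and you get inconsistent output that looks like the model's fault.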

Meet Caliber

Caliber is a CLI tool that solves this with one command. It scans your project and:

  • ✅ Generates a tailored CLAUDE.md with your actual stack, architecture, and commands
  • ✅ Creates .cursorrules / .cursor/rules/*.mdc matching your dependencies
  • ✅ Recommends MCPs you should install based on what you're running
  • ✅ Deletes stale rules that contradict your current code
  • ✅ Scores your setup 0–100 across 6 dimensions

And it can run continuously — so configs stay fresh as your code evolves.

See it in action

Here's what caliber init looks like on a real Next.js + TypeScript project:

$ caliber init

Scanning project structure...
Detected: TypeScript, React, Next.js, Tailwind CSS
Detected: 847 files, 12 dependencies with AI relevance

Config files:
+ create  CLAUDE.md              project context
+ create  .cursorrules           cursor rules  
~ modify  .cursor/rules/testing.mdc    outdated patterns
- delete  .claude/rules/old-api.md     stale, contradicts code

Skills:
+ create  .claude/skills/deploy.md    deploy flow
+ create  .cursor/skills/review.md    code review

MCP Recommendations:
  • @modelcontextprotocol/server-postgres  (detected: pg in package.json)
  • @modelcontextprotocol/server-github    (detected: .git, gh-cli)
  • @upstash/context7-mcp                 (detected: React 18+)

Notice it's not just adding files — it also deletes stale ones and modifies outdated patterns.
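If you act on the Postgres recommendation above, wiring it up in Claude Code is a small entry in a .mcp.json at the repo root. A minimal sketch (the connection string is a placeholder — use your own):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/your_db"
      ]
    }
  }
}
```

Committed to git, this file gives every teammate the same MCP setup.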

The setup score: caliber score

This is my favorite feature. Run caliber score on any project and get a deterministic 0–100 grade — no LLM needed, works offline.

$ caliber score

Score:  87/100   Grade: A

Existence    23/25  • Config files present, cross-platform parity
Quality      22/25  • Commands documented, no bloat, no vague rules
Coverage     18/20  • Dependencies & services reflected in configs
Accuracy     13/15  • Documented commands & paths actually exist
Freshness     8/10  • Config recency, no leaked secrets
Bonus         3/5   • Hooks, AGENTS.md, OpenSkills format

This gives you an objective baseline before onboarding a new dev, switching AI tools, or auditing your setup.
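Because the score is deterministic and offline, it also works as a CI gate. A sketch of what that could look like — it assumes caliber score keeps printing a `Score:  NN/100` line like the one above, and the threshold of 80 is arbitrary:

```shell
# Hypothetical CI gate: fail the build when the setup score drops below 80.
# A captured sample line stands in here; in CI you'd use: output=$(caliber score)
output="Score:  87/100   Grade: A"
# Pull the first number out of the Score line (the score itself, not the /100).
score=$(printf '%s\n' "$output" | grep -oE '[0-9]+' | head -n1)
if [ "$score" -ge 80 ]; then
  echo "setup score $score: ok"        # → setup score 87: ok
else
  echo "setup score $score is below threshold 80" >&2
  exit 1
fi
```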

The full CLI — 4 commands, that's it

Command             What it does
------------------  -------------------------------------------
caliber init        Scan repo, generate/update all config files
caliber score       Rate your setup 0–100 (offline, no LLM)
caliber recommend   Discover MCPs and skills for your stack
caliber config      Set provider, API key, model

Works with: Claude Code, Cursor, OpenAI Codex
No API key needed with Claude Code or Cursor — uses your existing subscription.

Why this matters for teams

This isn't just a solo-dev tool. The consistency problem is worse at the team level:

"One dev has MCPs configured, another doesn't. Cursor rules differ across machines. Nobody knows which CLAUDE.md is the canonical one."

With Caliber, you commit your configs to git like any other file. Every developer who runs caliber init gets the same baseline — and the same AI agent experience. New team members are set up in 30 seconds.

Get started in 30 seconds

npm install -g @rely-ai/caliber
caliber init

That's it. No API key needed if you're on Claude Code or Cursor.

🔗 GitHub: https://github.com/rely-ai-org/caliber
📦 npm: https://www.npmjs.com/package/@rely-ai/caliber
💬 Discord: https://discord.gg/XUNaJEsw

MIT licensed. Open source. Your code never leaves your machine.


Built at Rely AI. If this saved you time, a ⭐ on GitHub goes a long way — and PRs are very welcome.