
Avi Fenesh
AI coding tools don't validate their own configs. Skills fail silently, hooks do nothing, and cross-tool conflicts go undetected. I built a linter for that.
I've spent the last year wiring up AI coding tools: Claude Code, Cursor, Copilot, Cline, Codex CLI, and friends. Skills, hooks, memory files, MCP servers, agent defs, plugin manifests — the whole stack.
Here's the annoying truth: a huge chunk of these configs are silently broken. And the tools mostly don't tell you.
If you misconfigure ESLint, it screams. If you misconfigure a SKILL.md, nothing happens. The skill just… never triggers. No error. No warning. It's like the file doesn't exist.
Vercel even measured this: skills invoke at 0% without correct syntax. Not "less often". Zero. One wrong frontmatter field and your carefully written skill is invisible to the agent.
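For a concrete picture, here's the shape of a SKILL.md header with exactly that kind of bug (name and description are the frontmatter fields Claude Code skills use; the contents are made up):

---
name: Review-Code
description: Reviews pull requests for style issues and missing tests
---

The name is the problem: skill names have to be lowercase letters and hyphens, so until it's renamed to review-code, the agent never picks the skill up.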
And then you get the classic "almost right" output. Stack Overflow's 2025 developer survey has 66% of devs calling "almost right" their biggest AI frustration — and honestly, misconfigured agents produce exactly that. You think you gave the model the right rules/tools, but you didn't (or you did, just in the wrong format).
It gets worse if you use multiple tools. Cursor for editing, Claude Code for terminal work, maybe Copilot for inline completions. Now you're maintaining parallel configs in different formats that are supposed to stay consistent. A rule that works in .cursor/rules/testing.mdc might contradict what you put in CLAUDE.md. Nobody catches it.
After digging through official specs, research, and a lot of "why is this not working?" debugging, the failure modes repeat:
Skills that never trigger:
- name in PascalCase instead of kebab-case

Skills that trigger when they shouldn't:
- no disable-model-invocation: true, so Claude can auto-trigger the skill without you asking

Hooks that quietly fail (sketched right after this list):
- wrong event name (PreToolExecution instead of PreToolUse)
- command missing, so the hook is basically a comment
- rm -rf in a hook with no guard (… yeah)

Memory files that make the agent worse:
- generic instructions like "Be helpful and accurate" (Claude already knows this)

Cross-tool conflicts:
- CLAUDE.md says npm test but your AGENTS.md says pnpm test — one agent runs tests correctly, the other doesn't
- one tool's config allows Bash but your CLAUDE.md disallows it — the agent's permissions depend on which tool you're using

MCP servers with protocol violations.
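To make the hook failure concrete, here's roughly what a broken hook looks like in .claude/settings.json (the matcher and script path are placeholders; the structure is the hooks block Claude Code reads):

{
  "hooks": {
    "PreToolExecution": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./scripts/guard.sh" }
        ]
      }
    ]
  }
}

The correct event name is PreToolUse. With the typo, nothing errors and nothing runs; the guard script is dead weight.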
I couldn't find a tool that just tells you, plainly, across tools: "this config won't work".
So I made one.
agnix is a linter for AI agent configurations. It validates skills, hooks, memory files, plugins, MCP configs, and agent definitions across Claude Code, Cursor, GitHub Copilot, Cline, Codex CLI, OpenCode, and Gemini CLI.
It's currently 156 rules. Every rule links to its source — an official spec, vendor docs, or research paper — with RFC 2119 severity levels (MUST vs SHOULD vs BEST_PRACTICE) so you know what's a hard requirement vs a recommendation. It's not "trust me bro".
What it looks like:
$ npx agnix .
CLAUDE.md:15:1 warning: Generic instruction 'Be helpful and accurate' [fixable]
help: Remove generic instructions. Claude already knows this.
.claude/skills/review/SKILL.md:3:1 error: Invalid name 'Review-Code' [fixable]
help: Use lowercase letters and hyphens only (e.g., 'code-review')
.claude/settings.json:12:5 error: Script file not found: './scripts/lint.sh'
help: Create the file or fix the path
.cursor/rules/testing.mdc:1:1 error: Missing required frontmatter
help: Add YAML frontmatter with description, globs, and alwaysApply fields
Found 3 errors, 1 warning
2 issues are automatically fixable
hint: Run with --fix to apply fixes
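For that last error, the fix is a few lines of YAML frontmatter at the top of the .mdc file (the values here are illustrative):

---
description: Testing conventions for this repo
globs: "**/*.test.ts"
alwaysApply: false
---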
It also auto-fixes.
- agnix --fix . rewrites configs to comply with specs
- agnix --fix-safe . applies only high-certainty fixes (things like normalizing a skill name to kebab-case — not changes that might alter semantic meaning)

Coverage by tool:

- Claude Code: CLAUDE.md, hooks, agents, plugins, skills
- Cursor: .cursor/rules/*.mdc and .cursorrules, including alwaysApply frontmatter
- GitHub Copilot: .github/copilot-instructions.md and .github/instructions/*.instructions.md, including applyTo frontmatter in scoped instructions, invalid glob patterns, and file length limits
- MCP: *.mcp.json (example below)
- AGENTS.md: AGENTS.md and AGENTS.local.md
- Cline: .clinerules and .clinerules/*.md
- Codex CLI: .codex/config.toml
- OpenCode: opencode.json
- Gemini CLI: GEMINI.md

There are also cross-platform rules (XP-*) that detect conflicts between tools, and prompt-engineering rules (PE-*) that catch patterns that consistently degrade behavior.
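The MCP entry above refers to files like this one; the server definition is a placeholder, but the mcpServers shape is what project-scoped configs use:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}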
It's written in Rust.
It respects .gitignore and detects config types from file names (SKILL.md, .mdc, settings.json, etc.). Typical performance: single file under 10ms, 100-file project ~200ms, 1000-file project ~2s.
The rules stay current — a CI workflow monitors upstream specs weekly and flags when vendor documentation drifts from what agnix expects.
agnix also runs as an LSP, so you get real-time validation in your editor.
There's also an MCP server (agents can lint their own configs), plus a GitHub Action for CI:
- name: Validate agent configs
  uses: avifenesh/agnix@v0
  with:
    target: 'claude-code'
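If you want the full file, a minimal workflow around that step could look like this (the workflow name, triggers, and checkout step are standard boilerplate, not from agnix's docs; the agnix step is the one above):

name: Lint agent configs
on: [push, pull_request]
jobs:
  agnix:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate agent configs
        uses: avifenesh/agnix@v0
        with:
          target: 'claude-code'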
Locally, it's just:
npx agnix .
That's it. Run it on your repo. In my experience, almost every repo with agent configs has a few issues — and they're usually the silent ones that have been dragging quality down for weeks.
If you want a "real" install:
npm install -g agnix # npm
brew tap avifenesh/agnix && brew install agnix # Homebrew
cargo install agnix-cli # Cargo
MIT/Apache-2.0, open source: github.com/avifenesh/agnix
The ecosystem is fragmenting fast. Every tool invented its own format, its own conventions, and its own special ways to fail quietly.
We have linters for code. For configs. For IaC. But the layer that tells AI agents how to behave — the stuff sitting between you and basically every AI interaction — had nothing.
agnix doesn't tell you what to write. It tells you when what you wrote won't work.