Anton Abyzov
There's a growing problem in AI tooling that nobody's talking about.
Anthropic's Claude Code shipped Skills — markdown files that define AI behavior. Instructions, conditionals, logic flow, all written in English. A SKILL.md is literally a program in human language.
This was a genuine paradigm shift. But there's a catch.
Skills are getting copied everywhere. People take a SKILL.md from one repository, paste it into another, change two lines, and call it their own. Skill marketplaces are filling up with near-identical copies. There's no extension mechanism. No way to build on someone else's work without forking it.
Sound familiar? This is exactly what happened before object-oriented programming solved inheritance. Developers copied entire functions, changed a few lines, maintained separate copies that diverged over time.
We solved this in OOP decades ago. It's time to solve it for AI skills.
I'm proposing a standard for how AI skills should be extended. The core idea comes from the SOLID Open/Closed Principle: "Open for extension, closed for modification."
SKILL.md is the base class. Stable. Tested. Version-controlled by the original author. You don't copy it. You don't fork it. You don't maintain your own version.
skill-memories/*.md are YOUR extensions. Like subclass overrides. Your team's conventions, your project's patterns, your rules. They sit in .claude/skill-memories/ and Claude reads them at runtime alongside the original skill.
Your rules override defaults. The original skill keeps getting updates from its author. You never have merge conflicts. This is inheritance for human-language programs.
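Concretely, the layout looks something like this. Only `.claude/skill-memories/` is defined by the pattern; the skills path shown is illustrative:

```
my-project/
└── .claude/
    ├── skills/
    │   └── frontend/
    │       └── SKILL.md        # base class: the author's skill, untouched
    └── skill-memories/
        └── frontend.md         # subclass: your overrides and additions
```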
Here's where it gets powerful. Skill memories aren't just preference files. They support three levels of progressive disclosure:
Level 1 — Simple rules (text)

```markdown
### Form Handling
- Use React Hook Form for all forms
- Combine with Zod validation schemas
- Never use plain useState for form state
```
Level 2 — Conditional logic (branching)

```markdown
### Component Generation
When generating React components:
1. Check design system first (src/components/ui/)
2. If component exists → import it, don't recreate
3. If logic exceeds 50 lines → extract to custom hooks

When user mentions "admin" or "dashboard":
- Add role-based access control checks
- Include audit logging
```
Level 3 — Reference external scripts and tools

```markdown
### Validation Pipeline
Before completing any component:
1. Run `scripts/validate-design-tokens.sh` to check token compliance
2. Execute `npx chromatic --exit-zero-on-changes` for visual regression
3. If validation fails, fix and re-run before proceeding
```
Level 3 is huge. Your skill memory can reference scripts, tools, and pipelines. The AI executes them as part of its workflow. This is progressive disclosure applied to AI behavior — from simple text rules to full automation.
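To make Level 3 concrete, here is a minimal sketch of the kind of check a script like `validate-design-tokens.sh` might perform, written in Python for illustration. The file layout, token-file name, and the rule itself (no raw hex colors outside the token definitions) are assumptions, not SpecWeave's actual implementation:

```python
import re
from pathlib import Path

# Matches hex color literals like #fff or #ff0000.
HEX_COLOR = re.compile(r"#[0-9a-fA-F]{3,6}\b")

def find_token_violations(src_dir: str, tokens_file: str = "tokens.css") -> list[str]:
    """Report raw hex colors anywhere except the token definitions file.

    Hypothetical check; a real design-token validator would also cover
    spacing, typography, and component-level tokens.
    """
    violations = []
    for path in Path(src_dir).rglob("*.css"):
        if path.name == tokens_file:
            continue  # the token definitions are allowed to contain hex values
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if HEX_COLOR.search(line):
                violations.append(f"{path}:{lineno}: {line.strip()}")
    return sorted(violations)
```

A skill memory that references such a script turns a written rule into an enforced one: the AI runs the check, sees the failing lines, and fixes them before declaring the task done.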
Day 1. You ask Claude to generate a login form. It produces useState with inline styles. Wrong for your codebase.
You correct it: "Use React Hook Form with Zod. Never use inline styles. Tailwind only."
SpecWeave's Reflect system captures that correction automatically and saves it to .claude/skill-memories/frontend.md.
Day 8. Different session. "Generate a signup form." Claude reads the skill memory. React Hook Form. Zod. Tailwind. No reminder needed. You programmed the AI in English. Once. Permanently.
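The captured file might look something like this (illustrative; the exact format SpecWeave's Reflect system writes is an assumption):

```markdown
### Forms
- Use React Hook Form with Zod validation schemas
- Never use inline styles; Tailwind only
```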
As AI skill marketplaces grow, consumers need to know: can I extend this skill, or am I locked into copying it?
I propose a simple standard: label every published skill as either extensible (it supports extension points such as skill-memories, so consumers can build on it without forking) or non-extensible (usable only by copying). Every skill marketplace should adopt this distinction. Extensible skills are composable, maintainable, and respectful of the original author's work. Non-extensible skills create duplication debt.
For developers who think in OOP, the mapping is direct:
| OOP Concept | Extensible Skills |
|---|---|
| Base class | SKILL.md (core program) |
| Subclass override | skill-memories/*.md (your extensions) |
| Open/Closed Principle | SKILL.md stays closed for modification; skill-memories keep it open for extension (convention-based, not enforced) |
| Runtime polymorphism | Claude reads both and merges at runtime |
| Don't copy the class | Don't fork the skill |
| Extend the class | Write skill-memories |
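The "runtime polymorphism" row can be sketched in a few lines. This is an illustration of the merge semantics only, not Claude Code's actual loader; the paths and comment markers are assumptions:

```python
from pathlib import Path

def load_skill(skill_path: str, memories_dir: str = ".claude/skill-memories") -> str:
    """Merge a base skill with user extensions, inheritance-style:
    the base SKILL.md comes first, then every skill memory is appended,
    so later (user) rules take precedence over the author's defaults."""
    parts = [Path(skill_path).read_text()]
    memories = Path(memories_dir)
    if memories.is_dir():
        for memory in sorted(memories.glob("*.md")):
            parts.append(f"\n<!-- extension: {memory.name} -->\n{memory.read_text()}")
    return "\n".join(parts)
```

Note that precedence here works by instruction order, not key replacement: the model sees both the base rules and your overrides, and the overrides win because they are more specific and appear later.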
This isn't a loose metaphor. It's the same architectural pattern applied to programs written in human language, enforced by convention rather than by a compiler.
Where every major AI coding tool stands today:
Claude Code — open programs. Readable. Extensible via skill-memories. Version-controlled in Git. This is a platform.
ChatGPT — custom instructions per conversation. Useful, but no version control, no composability, no extension mechanism. A preference, not a program.
GitHub Copilot — black box. No visibility into reasoning. Limited customization of behavior.
Cursor — proprietary with good UX. Behavior logic is locked. You use what they ship.
Claude Code and ChatGPT lead the market. But only one supports true extensibility today. The rest require copying.
I encourage every skill author, every marketplace, and every AI tool builder to adopt extensible skills.
Anthropic built the foundation with Claude Code Skills. SpecWeave proposes the extension standard. I hope Anthropic, OpenAI, and the broader community will adopt this pattern — or improve on it. The standard is open.
SpecWeave is open source under the MIT license. 68+ extensible skills included.
Repository: github.com/anton-abyzov/specweave
Documentation: spec-weave.com
The Extensible Skills standard: spec-weave.com/docs/guides/extensible-skills
Discord: discord.gg/UYg4BGJ65V
Stop copying skills. Start extending them.