Who Owns the Code Claude Wrote? The Legal Mess No One's Talking About

#ai #llm #programming #discuss
Andrew Kew

Anthropic accidentally published 512,000 lines of Claude Code's source in a routine update. By sunrise, it was on GitHub mirrors. By breakfast, someone had rewritten the whole thing in Python. Then came 8,000 DMCA takedowns — for code that Anthropic's own lead engineer said was predominantly written by Claude itself.

Can you issue a DMCA takedown for code that copyright law may not protect? Nobody had a clean answer. That's the problem.

"If Claude Code was, by Anthropic's own lead engineer's admission, predominantly written by Claude itself, does Anthropic even own it?"

The same question applies to your codebase.

Three risks in your codebase

Three separate legal risks are colliding right now, and most engineers are only dimly aware of any of them:

  • Copyrightability — the US Copyright Office and DC Circuit (upheld after the Supreme Court declined to hear Thaler in March 2026) are clear: AI-generated work without meaningful human authorship is not copyrightable. Code you accepted verbatim from Claude may sit in the public domain in everything but name.
  • Work-for-hire — your employment contract almost certainly already assigned anything you build at work to your employer. AI-assisted or not, that doctrine applies. Worse: if your employer licenses Claude Code and you use it for a side project, a broad IP clause may reach that too.
  • GPL contamination — AI tools are trained on mountains of copyleft-licensed code. If the model reproduced a substantial verbatim chunk of GPL code in its output and you shipped it commercially, you may have a copyleft violation you can't see. "I didn't know" is not a defense.

Why it matters now

The legal edges are actively moving. Allen v. Perlmutter (600 detailed prompts + Photoshop edits — still unresolved) will be the closest thing yet to a ruling on how much human involvement is "enough." Doe v. GitHub in the Ninth Circuit is asking whether Copilot reproduces licensed code without attribution — it's already changed industry behavior: Copilot added duplicate detection filters; M&A due diligence now routinely includes an AI codebase license scan.

The place where unsettled law becomes concrete today isn't court — it's acquisition due diligence and fundraising, where investors are already asking these questions as a condition of closing.

What to do

1. Run a license scan. FOSSA, Snyk Open Source, or Black Duck. It takes an afternoon and costs less than the first hour of a copyright dispute. If you're shipping a commercial product and haven't done this, you're operating on assumptions.
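You don't need to wait for procurement to get a first signal. As a rough pre-check — not a substitute for FOSSA, Snyk, or Black Duck, which match against real license databases and full license texts — a naive sketch that flags source files mentioning common copyleft markers might look like this (the marker list and file extensions are illustrative assumptions, not a complete set):

```python
import os
import re

# Common copyleft markers -- an illustrative, incomplete list,
# NOT a real license database. Real scanners match full texts and hashes.
COPYLEFT_MARKERS = re.compile(
    r"GNU General Public License|GPL-[23]\.0|GNU Affero|AGPL-3\.0",
    re.IGNORECASE,
)

def flag_copyleft(root: str) -> list[str]:
    """Return paths of source files that mention a copyleft marker."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Extension filter is an assumption; adjust for your stack.
            if not name.endswith((".py", ".js", ".ts", ".go", ".c", ".h")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    if COPYLEFT_MARKERS.search(f.read()):
                        flagged.append(path)
            except OSError:
                continue  # unreadable file: skip; a real scanner would log it
    return flagged
```

A hit here only means "look closer." Absence of a hit means nothing — verbatim GPL code reproduced by a model usually arrives without its license header, which is exactly why the real scanners exist.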

2. Document human creative decisions as you go. "Restructured Claude's module architecture, rejected initial state management approach, rewrote error handling from scratch" is legal evidence. "Add rate limiting module" is not. Export your prompt logs from agentic sessions where you made architectural calls.

3. Read your IP clause before you build anything on the side. Search your employment contract for "intellectual property," "IP assignment," or "work product." The phrase to watch: "any software created with the assistance of company-licensed tools." If your employer licenses Claude Code, that clause may reach your weekend project.
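The search itself is mechanical. Given a plain-text copy of your contract, a minimal sketch for pulling out the sentences worth reading in context might look like this (the phrase list mirrors the terms above and is an example, not an exhaustive legal checklist):

```python
import re

# Phrases worth reading in context -- illustrative examples, not legal advice.
IP_PHRASES = [
    "intellectual property",
    "ip assignment",
    "work product",
    "company-licensed tools",
]

def find_ip_clauses(contract_text: str) -> list[str]:
    """Return sentences from the contract containing any watched phrase."""
    # Crude sentence split on '.' or ';' -- enough for a skim,
    # not for parsing nested legalese.
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    return [
        s for s in sentences
        if any(p in s.lower() for p in IP_PHRASES)
    ]
```

This only surfaces the clauses; what they actually reach is a question for a lawyer, not a regex.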

4. Check your Anthropic plan. Consumer/Pro plans have narrower IP indemnification than API/Enterprise. If you're shipping commercially on the free tier, the gap is real.


Source: Legal Layer — Who Owns the Code Claude Wrote?

✏️ Drafted with KewBot (AI), edited and approved by Drew.