I Built a Starter Repo That Turns AI Coding Tools Into Senior Engineers

Every new project starts the same way: create a repo, write some code, realize you have no CI, no linting, no test enforcement, no issue templates. Three weeks in, you're debugging a production incident with no runbook, your AI coding assistant is generating code that doesn't follow your conventions, and half the team is committing directly to main.

I've set up development practices across multiple production projects — from a multi-cluster AI evaluation platform on GCP to Python evaluation pipelines. The same patterns kept working: Ruff for linting with auto-fix in CI, pre-commit hooks that catch problems before they reach the remote, test coverage enforcement that posts PR comments telling you exactly which test files to create, and incident runbooks that turn 2-hour debugging sessions into 15-minute triage flows.

So I extracted all of it into a single, clone-and-go starter repo. But I added something the original projects didn't have: instruction files for every major AI coding tool — Cursor, Claude Code, Codex, and GitHub Copilot — configured to follow the same practices the CI enforces.

The repo: github.com/humzakt/dev-starter-kit


AI Tools Without Context Are Junior Developers

AI coding tools are powerful, but out of the box they don't know your project's conventions. They'll use single quotes when your linter expects double quotes. They'll skip tests. They'll commit to main. They'll generate 200-character lines when your config says 120. They'll catch generic exceptions when your style guide says to be specific.

The fix isn't to stop using AI tools — it's to give them the same onboarding document you'd give a new team member. That's what AGENTS.md, CLAUDE.md, and .cursor/rules/ files are for. They turn your AI assistant from a context-free code generator into something that understands your project's architecture, follows your conventions, and runs the right commands after every change.


What's in the Starter Kit

CI/CD Workflows

Three GitHub Actions workflows that work together:

PR Checks is the main quality gate. It runs Ruff with --fix, commits any auto-fixes back to your branch, then verifies the code is clean. After linting passes, it runs pytest. The interesting part is the test coverage enforcement: it detects which Python source files you changed, checks whether you also added or updated the corresponding test files, and if you didn't, it posts a PR comment with the exact test file names, class names, and method signatures you need to write.

```text
# The CI posts comments like this on your PR:
#
# Test Coverage Required
#
# Source: src/services/auth.py
# Test file to create/update: tests/test_auth.py
# Test class: TestAuth
# Required test methods:
#     def test_validate_token(self):  ...
#     def test_refresh_session(self):  ...
```

This isn't a coverage percentage check — it's a structural check that ensures you're at least thinking about tests for the code you changed. You can bypass it with a skip-test-check label for config-only PRs.

Lint & Format runs on both pushes and PRs, but only checks files that actually changed. It validates YAML files too (excluding workflow files, which have their own validation).
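Conceptually, that changed-files filtering looks something like the sketch below. The grouping and the workflow-file exclusion follow the article's description; the exact paths and job names are assumptions, not the repo's real implementation:

```python
from pathlib import PurePosixPath

# Workflow YAML gets its own validation, so exclude it here
# (an assumption matching the description above, not the repo's exact logic).
WORKFLOW_DIR = ".github/workflows"


def files_to_lint(changed: list[str]) -> dict[str, list[str]]:
    """Split changed files into the groups each lint job cares about."""
    groups: dict[str, list[str]] = {"python": [], "yaml": []}
    for f in changed:
        path = PurePosixPath(f)
        if path.suffix == ".py":
            groups["python"].append(f)
        elif path.suffix in {".yml", ".yaml"} and not f.startswith(WORKFLOW_DIR):
            groups["yaml"].append(f)
    return groups


print(files_to_lint([
    "src/app.py",
    ".github/workflows/pr-checks.yml",
    "configs/eval.yaml",
    "README.md",
]))
# {'python': ['src/app.py'], 'yaml': ['configs/eval.yaml']}
```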

Merge Readiness is the final gate. It runs strict lint (no auto-fix), Python syntax compilation, YAML validation, and tests. All checks must pass before the merge button lights up. Skipped checks (because no relevant files changed) count as passing.

Pre-commit Hooks

The .pre-commit-config.yaml catches problems before they even reach CI:

  • Ruff lint and format on every commit
  • Syntax checks — Python AST verification, YAML, JSON, TOML validation
  • Hygiene — trailing whitespace, line endings, end-of-file fixers
  • Security — private key detection, large file prevention
  • Branch protection — blocks direct commits to main
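The branch-protection bullet is worth demystifying: a pre-commit hook is just a script whose nonzero exit code aborts the commit. The standard pre-commit-hooks project ships this as `no-commit-to-branch`; a hand-rolled Python equivalent (the protected branch names here are an assumption) would look roughly like:

```python
import subprocess
import sys

PROTECTED = {"main", "master"}  # assumption: which branches to protect


def current_branch() -> str:
    """Return the name of the currently checked-out git branch."""
    out = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


def check_branch(branch: str) -> int:
    """Exit code 1 blocks the commit, per pre-commit's hook contract."""
    if branch in PROTECTED:
        print(f"Direct commits to '{branch}' are blocked; open a PR instead.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(check_branch(current_branch()))
```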

Issue Templates

Three structured templates that guide reporters through providing the right information:

  • Bug Report — component dropdown, priority, reproduction steps, error output, environment info
  • Incident Report — SEV-1 through SEV-4 severity, affected components checklist, triage checklist, timeline table, root cause, action items
  • Infrastructure Issue — CI/CD failures, dependency issues, environment config, investigation checklist

Incident Runbook

The docs/INCIDENT_RUNBOOK.md is a structured playbook for when things go wrong:

  • Severity classification table with response time targets
  • 3-phase triage checklist (Identify Scope, Gather Evidence, Communicate)
  • Component-specific diagnostic commands you can copy-paste
  • CI/CD troubleshooting section
  • 4 common failure scenarios with diagnosis and fixes
  • Post-incident review template

The AI Tool Configuration Layer

This is the part that makes the starter kit different from other project templates. Every major AI coding tool reads a different file format for project instructions. The starter kit includes all of them:

| File | Tool | Purpose |
| --- | --- | --- |
| AGENTS.md | Codex, Cursor, Claude Code, Windsurf | Universal agent instructions |
| CLAUDE.md | Claude Code | Session-persistent instructions |
| .cursor/rules/*.mdc | Cursor IDE | Modular, glob-scoped rules |
| .claude/rules/*.md | Claude Code | Scoped rules with patterns |
| .github/copilot-instructions.md | GitHub Copilot | Code generation guidelines |

AGENTS.md: The Universal Standard

AGENTS.md is the broadest-reaching file: it's read by Codex, Cursor, Claude Code, and a growing list of other agents, and because it's plain Markdown with no tool-specific syntax, any agent that supports project instructions can consume it. It includes build commands, code style conventions, testing rules, the Git workflow, project architecture, and the development loop AI agents should follow.

CLAUDE.md: Session-Persistent Instructions

Claude Code reads CLAUDE.md at the start of every session. It's kept intentionally concise — under 100 lines — because Claude Code already has a large system prompt, and every line competes for attention. It focuses on the commands to run and the rules to follow.

Cursor Rules: Modular and Scoped

Cursor uses .cursor/rules/*.mdc files with YAML frontmatter that controls when each rule applies:

  • general.mdc (alwaysApply: true) — project conventions that apply everywhere
  • python.mdc (glob: **/*.py) — Python-specific rules
  • testing.mdc (glob: tests/**/*.py) — test structure and patterns

This modular approach means the AI only gets the relevant rules for the file it's editing.
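To make the scoping concrete, here is a small sketch of how glob-scoped rule selection behaves, using the three rule files above. The matcher is illustrative, not Cursor's implementation (note that Python's stdlib `fnmatch` treats `*` and `**` identically, so a tiny `**`-aware matcher is written by hand):

```python
import re

# Illustrative rule set mirroring the .cursor/rules/ layout above.
RULES = {
    "general.mdc": None,            # alwaysApply: true
    "python.mdc": "**/*.py",        # glob-scoped to Python files
    "testing.mdc": "tests/**/*.py",  # glob-scoped to test files
}


def glob_match(path: str, pattern: str) -> bool:
    """Minimal **-aware glob matcher; not Cursor's real implementation."""
    regex = re.escape(pattern)
    regex = regex.replace(r"\*\*/", "(?:.*/)?")  # ** spans directories, may be empty
    regex = regex.replace(r"\*\*", ".*")
    regex = regex.replace(r"\*", "[^/]*")        # plain * stays within one segment
    return re.fullmatch(regex, path) is not None


def rules_for(path: str) -> list[str]:
    """Which rule files apply to the file being edited."""
    return [name for name, glob in RULES.items()
            if glob is None or glob_match(path, glob)]


print(rules_for("src/app.py"))          # ['general.mdc', 'python.mdc']
print(rules_for("tests/test_auth.py"))  # ['general.mdc', 'python.mdc', 'testing.mdc']
```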

What the Instructions Actually Teach

All instruction files converge on the same core behaviors:

  1. Read before editing — understand the existing code and its tests before making changes
  2. Run lint after every edit: ruff check --fix . && ruff format .
  3. Run tests after every edit: pytest tests/ -v
  4. Fix failures before moving on — don't leave broken tests or lint errors
  5. Write tests for new code — the CI will enforce this anyway
  6. Use conventional commits: feat:, fix:, chore:, etc.
  7. Never commit secrets — use environment variables

The result: your AI coding tool follows the same development loop a senior engineer would. It doesn't just generate code — it generates code that passes your CI.


How to Use It

Clone, delete the git history, and start fresh:

```bash
git clone https://github.com/humzakt/dev-starter-kit.git my-project
cd my-project
rm -rf .git
git init && git checkout -b main

python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
pre-commit install
```

Customize by updating pyproject.toml (project name, lint rules), issue templates (component dropdowns), and AI tool files (architecture section, project-specific commands).


Design Decisions

Why multiple AI tool files instead of just AGENTS.md? AGENTS.md is the most broadly compatible, but each tool has its own file format with unique capabilities. Cursor's .mdc files support glob-based scoping. Claude Code's hierarchical system supports user-level, project-level, and file-level overrides. By including all formats, the starter kit works regardless of which tool your team uses.

Why structural test coverage instead of coverage percentage? Coverage percentage creates perverse incentives. The structural check is simpler: if you changed source files, did you also change test files? It doesn't check that the tests are good — that's what code review is for. It checks that you at least thought about testing.

Why auto-fix in CI? The alternative is rejecting PRs for formatting issues, which wastes everyone's time. CI fixes formatting and commits it back. Squash merging eliminates the extra commits.

Why pre-commit AND CI? Pre-commit catches issues locally. CI catches issues when pre-commit is bypassed. Belt and suspenders.


What I Learned Building This Across Multiple Projects

After implementing these practices in a multi-service GCP platform and a Python evaluation pipeline:

  • Auto-fix eliminated 90% of "fix formatting" follow-up commits
  • PR comment guidance is more effective than just a red X on a check
  • Incident runbooks pay for themselves the first time someone uses the triage checklist
  • AI tool instructions compound — every session where the AI follows your conventions saves correction time

The repo is MIT licensed. Clone it, customize it, ship it.

github.com/humzakt/dev-starter-kit


Humza Tareen builds production AI systems at scale. More articles at humzakt.github.io.