
Integrate automated performance and accessibility checks (Lighthouse CI, axe-core, bundle analysis) into your frontend pipeline to prevent regressions
Automated checks for performance and accessibility belong in your CI as non‑negotiable quality gates — they catch regressions while fixes are cheap instead of after customers notice them. Treat Lighthouse CI, axe-core, and bundle analyzers as a layered safety net that stops bad commits from ever reaching production.
The team symptom looks familiar: a small change lands, conversions drop, engineers scramble, and legal/audit work surfaces an accessibility defect that slipped through. The root causes are predictable — no performance budget, only ad‑hoc manual accessibility checks, and no automated bundle limits — but the remediation cost grows by orders of magnitude the longer it remains in production.
Track the metrics that map to real user perceptions: Largest Contentful Paint (LCP), Interaction to Next Paint (INP) (the replacement for FID), and Cumulative Layout Shift (CLS) — these are the Core Web Vitals most strongly correlated with user satisfaction. Measure them in the field at the 75th percentile and use lab proxies for early validation.
| Metric | What it measures | Lab or field | Good threshold (75th pct) | Why it predicts UX |
|---|---|---|---|---|
| LCP | Time until main content paints | Field & lab | ≤ 2.5 s | Perceived load speed; slow LCP loses users. |
| INP | Responsiveness across interactions | Field; use TBT as lab proxy | ≤ 200 ms | Interaction latency across the session; replaces FID. |
| CLS | Visual stability (unexpected shifts) | Field & lab | < 0.1 | Jank/shift frustrates users and breaks flows. |
| FCP / TTFB | Early paint and server response | Lab & field | FCP ≤ 1.8 s, TTFB ≤ 800 ms (guide) | Useful diagnostics and prioritization. |
| Bundle size & third‑party requests | Bytes and requests shipped to the client | Build-time & lab | Team-defined budgets (use size-limit) | Large bundles increase parse/execute time and TBT. |
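The 75th-percentile aggregation behind these thresholds can be sketched in a few lines; the sample values below are hypothetical field LCP readings in milliseconds:

```javascript
// p75.js — 75th percentile via the nearest-rank method, as used for Core Web Vitals field data
function percentile(values, p) {
  if (values.length === 0) return NaN;
  const sorted = [...values].sort((a, b) => a - b);
  // nearest-rank index for the p-th percentile
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[idx];
}

// Hypothetical LCP field samples (ms)
const lcpSamples = [1800, 1900, 2100, 2200, 2300, 2500, 2600, 3400];
console.log(percentile(lcpSamples, 75)); // 2500 — right at the "good" LCP threshold
```

Aggregating at p75 (rather than the mean) keeps one fast or slow outlier from hiding what most users experience.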
Think of the pipeline as layered checks that get progressively heavier and more expensive to run:
- Unit/component layer: jest-axe accessibility assertions for components, plus quick size-limit checks against a baseline bundle size. These run in milliseconds to minutes and fail fast.
- Page/E2E layer: @axe-core/playwright or axe-playwright to scan rendered pages and attach HTML reports; run size-limit --why or webpack-bundle-analyzer for a treemap when the size changes.
- Full-audit layer: Lighthouse CI (lhci autorun or a GitHub Action) with performance budgets and LHCI assertions; upload artifacts to an LHCI server or temporary storage for trend tracking. Run Lighthouse multiple times to avoid flakiness.
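Wired into package.json, the three layers might look like the following sketch (script names are illustrative, not prescribed by any of these tools):

```json
{
  "scripts": {
    "test:a11y:unit": "jest --testPathPattern a11y",
    "test:a11y:e2e": "playwright test tests/a11y.spec.js",
    "audit:lighthouse": "lhci autorun"
  }
}
```

Keeping each layer as a separate script lets CI run the cheap ones on every PR and reserve the full audit for preview deploys.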
Concrete roles (short):
Lighthouse CI owns the performance budgets (budget.json) and assertions that can fail CI. Use lhci autorun for automated collect → assert → upload flows.
Example: lighthouserc.json with assertions (use in LHCI or via the Action). Replace numbers with values your product can meet:
```json
{
  "ci": {
    "collect": {
      "staticDistDir": "./dist",
      "numberOfRuns": 3,
      "settings": { "formFactor": "mobile" }
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.85 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }]
      }
    },
    "upload": { "target": "temporary-public-storage" }
  }
}
```
Reference: lhci supports collect, assert, and upload blocks and autorun which composes them. Use numberOfRuns to reduce flakiness.
Run component accessibility checks with jest-axe:
```jsx
// example.test.jsx
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import MyComponent from './MyComponent';

expect.extend(toHaveNoViolations);

test('MyComponent has no automated a11y violations', async () => {
  const { container } = render(<MyComponent />);
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```
For page-level E2E, use Playwright + Axe:
```js
// a11y.spec.js
import { test } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('Landing page accessibility scan', async ({ page }) => {
  await page.goto('https://staging.example.com/');
  const results = await new AxeBuilder({ page }).analyze();
  if (results.violations.length) {
    console.error('axe violations:', results.violations);
    // Fail the test so CI flags the PR
    throw new Error(`${results.violations.length} accessibility violations found`);
  }
});
```
Sources for these integrations and packages are in the references.
A practical gating strategy that balances speed and safety:
- Fast pre‑merge checks (PR): run unit + jest-axe component tests, run size-limit against a baseline, and run static ESLint a11y rules. These should fail the PR immediately on regressions; the goal is immediate feedback inside the PR discussion.
- Preview/staging checks (on a preview URL or ephemeral environment): run Playwright + Axe scans and Lighthouse CI (lhci or treosh/lighthouse-ci-action) with runs: 3. Post reports/artifacts into the PR for engineers to inspect.
- Merge gating: enforce that LHCI assertions or performance budgets on canonical pages pass on the staging environment (or the main-branch deploy). For thresholds that are too brittle, set them to warn on PRs and error on merges to main. Use lhci's assert configuration to declare these rules.
- Post-merge monitoring: rely on RUM (web‑vitals + analytics or a RUM provider) for field regressions, and set alerts on 75th-percentile deviations for core pages. Field monitoring catches issues that lab runs cannot.
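For the static ESLint a11y rules in the pre-merge step, a common choice for React codebases is eslint-plugin-jsx-a11y (the plugin name is an assumption here; the document does not name one). A minimal ESLint config fragment:

```json
{
  "plugins": ["jsx-a11y"],
  "extends": ["plugin:jsx-a11y/recommended"]
}
```

Static rules catch issues like missing alt text or invalid ARIA attributes at lint time, before any test renders a component.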
Example GitHub Actions composition (skeleton):
```yaml
name: PR checks
on: [pull_request]
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm test -- --ci
  size:
    runs-on: ubuntu-latest
    needs: unit
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run build
      - run: npx size-limit
  lighthouse:
    runs-on: ubuntu-latest
    needs: [unit, size]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run build
      - name: Run Lighthouse CI (quick)
        uses: treosh/lighthouse-ci-action@v12
        with:
          urls: ${{ steps.preview.outputs.url || 'https://staging.example.com' }}
          runs: 3
          configPath: ./.lighthouserc.json
          uploadArtifacts: true
```
Key operational points:
- Run size-limit in PRs to detect dependency additions quickly; it can comment on PRs and block merges.
- Use runs: 3 for Lighthouse to reduce flakiness and average results; persist .lighthouseci artifacts for debugging.
Gating is only effective with clear signals and a workflow for acting on them: make every automated failure produce an actionable item.
Important: automated scanners catch many problems but not everything. axe-core finds only a portion of WCAG violations programmatically — use its output to prioritize real human validation and manual audits of complex interactions.
Suggested triage matrix (example):
| Severity | Trigger | Example action |
|---|---|---|
| Blocker | Production LCP > 4 s on landing OR axe critical failures on checkout | Stop deploy, roll back + urgent fix sprint |
| High | LCP regression > 25% on important pages OR new a11y violations on CTAs | Sprint priority; assign to FE owner |
| Medium | size-limit exceeded by > 15% or > 2 additional third‑party requests | Schedule refactor; analyze treemap |
| Low | Minor contrast / lab-only Lighthouse warnings | Queue for next sprint |
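The triage matrix can also live in code, so the same rules drive automated labeling of failures. A sketch with thresholds mirroring the table (all field names are hypothetical):

```javascript
// triage.js — map a detected regression to a severity bucket (thresholds illustrative)
function triage({
  lcpMs = 0,             // current production LCP in ms
  lcpRegressionPct = 0,  // LCP change vs. baseline, percent
  axeCritical = false,   // any axe critical violations on key flows
  newA11yOnCta = false,  // new a11y violations on CTAs
  sizeOveragePct = 0,    // size-limit overage, percent
  newThirdParty = 0,     // newly added third-party requests
}) {
  if (lcpMs > 4000 || axeCritical) return 'blocker';
  if (lcpRegressionPct > 25 || newA11yOnCta) return 'high';
  if (sizeOveragePct > 15 || newThirdParty > 2) return 'medium';
  return 'low';
}

console.log(triage({ lcpMs: 4200 }));        // 'blocker'
console.log(triage({ sizeOveragePct: 20 })); // 'medium'
```

Encoding the matrix once keeps CI comments, dashboards, and ticket labels consistent with the documented policy.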
Use RUM and dashboards for continuous monitoring:
Instrument web-vitals in production and push metrics to your analytics or a BigQuery / Looker Studio pipeline; alert on deviations of the 75th percentile on key pages.
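A minimal alerting predicate for that p75 deviation check, with the tolerance as a tunable parameter (the 10% default is an assumption, not a recommendation from any tool):

```javascript
// Flag when the current p75 drifts past the baseline by more than a tolerance
function p75Regressed(baselineP75, currentP75, tolerancePct = 10) {
  return currentP75 > baselineP75 * (1 + tolerancePct / 100);
}

console.log(p75Regressed(2000, 2300)); // true  — 15% worse, over the 10% tolerance
console.log(p75Regressed(2000, 2100)); // false — 5% worse, within tolerance
```

Comparing p75-to-p75 with a tolerance band keeps alerts quiet through normal day-to-day variance while still catching genuine regressions.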
Follow this checklist to move from ad‑hoc to automated, in order:
1. Add component unit a11y checks: install jest-axe and include expect.extend(toHaveNoViolations) in setupTests.
2. Add bundle size gating: install size-limit, create a size-limit section in package.json, and add npm run size to your test or CI scripts; add the size job to your PR workflow and (optionally) the size-limit GitHub Action to comment on new PRs.
3. Add page-level accessibility E2E: add @axe-core/playwright to Playwright tests and attach JSON/HTML reports to CI.
4. Add Lighthouse CI for staging: create .lighthouserc.json with collect.numberOfRuns and assert blocks, and add treosh/lighthouse-ci-action to run against a staging/preview URL. Use budget.json to enforce resource budgets.
5. Instrument RUM: use web-vitals and send onLCP, onINP, onCLS to your analytics endpoint; set alerts on 75th-percentile deltas on key pages.

Copy‑paste examples (quick):
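A budget.json sketch for the resource budgets mentioned above (paths and values are illustrative; in Lighthouse budgets, resourceSizes are in KB and timings in ms):

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 200 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

Pass it to the Action via budgetPath or reference it from your LHCI configuration.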
.lighthouserc.json
```json
{
  "ci": {
    "collect": { "staticDistDir": "./dist", "numberOfRuns": 3 },
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }]
      }
    },
    "upload": { "target": "temporary-public-storage" }
  }
}
```
package.json excerpt for size-limit
```json
{
  "scripts": {
    "build": "next build",
    "size": "npm run build && size-limit"
  },
  "size-limit": [
    { "path": "build/static/js/*.js", "limit": "200 kB" }
  ]
}
```
Lighthouse CI Action (PR job snippet)
```yaml
- name: Audit URLs using Lighthouse
  uses: treosh/lighthouse-ci-action@v12
  with:
    urls: |
      ${{ steps.preview.outputs.url }}
    configPath: ./.lighthouserc.json
    runs: 3
    uploadArtifacts: true
```
Playwright + Axe job (snippet)
```yaml
- name: Run Playwright accessibility tests
  run: npx playwright test --project=chromium tests/a11y.spec.js
```
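web-vitals instrumentation (browser-side; the /analytics endpoint is a placeholder you would replace with your own collector):

```javascript
// rum.js — report Core Web Vitals from real users
import { onLCP, onINP, onCLS } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // 'LCP' | 'INP' | 'CLS'
    value: metric.value,
    id: metric.id,
    page: location.pathname,
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { body, method: 'POST', keepalive: true });
  }
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
```

Aggregate these reports server-side at the 75th percentile per page to feed the alerts described earlier.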
Use these building blocks to make regressions visible where they matter, fast.
Sources:
- Web Vitals — web.dev: definitions and recommended thresholds for Core Web Vitals (LCP, INP, CLS) and advice on lab vs. field measurement.
- Lighthouse CI configuration: lighthouserc structure, lhci autorun, collect/assert/upload blocks and flags.
- treosh/lighthouse-ci-action (GitHub): GitHub Action to run Lighthouse CI, with examples for budgetPath, runs, and configPath.
- dequelabs/axe-core (GitHub): axe-core overview, practical detection capabilities, and recommended usage in tests.
- dequelabs/axe-core-npm: @axe-core/playwright (GitHub): Playwright integration package for axe-core (AxeBuilder API).
- ai/size-limit (GitHub): size-limit docs and patterns for enforcing bundle size/time budgets in CI.
- webpack-bundle-analyzer (npm): treemap visualization and CLI/plugin usage for inspecting bundle contents.
- Core Web Vitals workflows with Google tools — web.dev: guidance on using CrUX, PageSpeed Insights, Lighthouse CI, and RUM for monitoring and trends.
- Total Blocking Time (TBT) — web.dev: TBT explained and its relation to INP as a lab proxy.
- web-vitals (npm): RUM library (onLCP, onINP, onCLS) with example instrumentation for production.
- jest-axe (GitHub): Jest matcher and examples for component-level accessibility assertions using axe.