Power SEO Content Analysis Review for Developers

Alamin Sarker

#automation #node #tooling #webdev

Your React app looks beautiful. Your Lighthouse score is 98. But Google still can't rank your content and you have no idea why.

Most developers treat SEO like a mysterious black box: you write good content, add some meta tags, and hope for the best. The real problem is we're not measuring the right things at the right time. In this article, I'll show you how to build an automated SEO content analysis pipeline in your existing Node.js workflow — auditing keyword density, readability scores, and structured metadata — and where @power-seo/content-analysis fits into that picture without replacing your own judgment.

The Problem: What Crawlers See vs. What You See

Here's the counterintuitive thing: Google doesn't see your beautifully rendered React app. It sees the raw HTML returned by the server — or whatever the client-side renderer manages to output before the crawl budget runs out.

Run this and prepare to be humbled:

curl -A "Googlebot/2.1 (+http://www.google.com/bot.html)" https://your-site.com/your-page

Compare that output to what your browser renders. If you're using client-side rendering with no SSR or prerendering, you'll likely see a near-empty <body>. All that beautiful copy your content team wrote? Invisible to the crawler.

The fix is straightforward — but you need to measure the problem first before you can confirm the solution:

// check-ssr.mjs
import fetch from 'node-fetch';

const GOOGLEBOT_UA = 'Googlebot/2.1 (+http://www.google.com/bot.html)';

async function checkCrawlability(url) {
  const res = await fetch(url, {
    headers: { 'User-Agent': GOOGLEBOT_UA }
  });
  const html = await res.text();

  const bodyTextLength = html
    .replace(/<script[\s\S]*?<\/script>/gi, '')  // inline JS isn't visible text
    .replace(/<style[\s\S]*?<\/style>/gi, '')
    .replace(/<[^>]+>/g, '')
    .replace(/\s+/g, ' ')
    .trim().length;

  console.log(`URL: ${url}`);
  console.log(`Status: ${res.status}`);
  console.log(`Visible text characters: ${bodyTextLength}`);
  console.log(bodyTextLength < 500 ? '⚠️  Low content — possible CSR issue' : '✅ Content looks crawlable');
}

checkCrawlability('https://your-site.com/your-page');

Result: If bodyTextLength comes back under 500 characters and your page has 1,200+ words, you have a rendering problem — not an SEO problem. Fix the rendering first. Everything else is noise until then.
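Once a single page passes, it is worth sweeping the whole site. Here is a hypothetical extension of the same check that walks every URL in your sitemap (it assumes Node 18+ for the global fetch, and parses the sitemap with a naive regex rather than a proper XML parser):

```javascript
// check-sitemap.mjs — hypothetical batch version of the crawlability check.
// Assumes Node 18+ (global fetch) and a standard <loc>-based sitemap.xml.

const GOOGLEBOT_UA = 'Googlebot/2.1 (+http://www.google.com/bot.html)';

// Visible-text length of an HTML string, ignoring script/style contents.
function visibleTextLength(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '')
    .replace(/<[^>]+>/g, '')
    .replace(/\s+/g, ' ')
    .trim().length;
}

// Pull <loc> entries out of a sitemap body (naive regex, not a real XML parse).
function sitemapUrls(xml) {
  return [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map(m => m[1]);
}

async function auditSitemap(sitemapUrl) {
  const xml = await (await fetch(sitemapUrl)).text();
  for (const url of sitemapUrls(xml)) {
    const res = await fetch(url, { headers: { 'User-Agent': GOOGLEBOT_UA } });
    const len = visibleTextLength(await res.text());
    console.log(`${len < 500 ? '⚠️ ' : '✅'} ${url} — ${len} visible chars`);
  }
}

// auditSitemap('https://your-site.com/sitemap.xml');
```

Run it nightly or against a staging deploy and you catch pages that silently regress to client-side-only rendering.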

Automating Keyword Density and Readability Audits

Once your content is actually crawlable, the next layer is content quality signals. Google's ranking algorithm weighs keyword relevance, heading structure, content depth, and readability — and none of these are visible in your dev tools unless you instrument for them.

Here's a minimal audit script you can drop into any project:

// seo-audit.mjs
import { JSDOM } from 'jsdom';
import fetch from 'node-fetch';

function countWords(text) {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

function keywordDensity(text, keyword) {
  // Match the whole phrase (case-insensitive) so multi-word keywords count;
  // a word-by-word includes() check would never match "seo content analysis".
  const totalWords = countWords(text);
  const escaped = keyword
    .trim()
    .replace(/[.*+?^${}()|[\]\\]/g, '\\$&')  // escape regex metacharacters
    .replace(/\s+/g, '\\s+');
  const matches = (text.match(new RegExp(escaped, 'gi')) || []).length;
  return totalWords ? ((matches / totalWords) * 100).toFixed(2) : '0.00';
}

function fleschReadingEase(text) {
  // Guard with || 1 so empty input can't divide by zero.
  const sentences = text.split(/[.!?]+/).filter(s => s.trim()).length || 1;
  const words = countWords(text) || 1;
  const syllables = text
    .toLowerCase()
    .replace(/[^a-z\s]/g, '')
    .split(/\s+/)
    .reduce((acc, word) => acc + (word.match(/[aeiouy]+/g) || []).length, 0);

  return (
    206.835 -
    1.015 * (words / sentences) -
    84.6 * (syllables / words)
  ).toFixed(1);
}

async function auditPage(url, targetKeyword) {
  const res = await fetch(url);
  const html = await res.text();
  const dom = new JSDOM(html);
  const doc = dom.window.document;

  const title = doc.querySelector('title')?.textContent ?? 'MISSING';
  const metaDesc = doc.querySelector('meta[name="description"]')?.getAttribute('content') ?? 'MISSING';
  const h1s = [...doc.querySelectorAll('h1')].map(h => h.textContent.trim());
  const bodyText = doc.body?.textContent ?? '';

  const wordCount = countWords(bodyText);
  const density = keywordDensity(bodyText, targetKeyword);
  const readability = fleschReadingEase(bodyText);

  return {
    url,
    title,
    titleLength: title.length,
    metaDesc,
    metaDescLength: metaDesc.length,
    h1Count: h1s.length,
    h1s,
    wordCount,
    keywordDensity: `${density}%`,
    readabilityScore: readability,
    readabilityLabel: readability > 60 ? 'Easy' : readability > 30 ? 'Moderate' : 'Difficult'
  };
}

// Run it
const report = await auditPage(
  'https://your-site.com/your-blog-post',
  'seo content analysis'
);

console.log(JSON.stringify(report, null, 2));

Sample output:

{
  "url": "https://your-site.com/your-blog-post",
  "title": "Power SEO Content Analysis for Developers",
  "titleLength": 42,
  "metaDesc": "Learn how to audit your content...",
  "metaDescLength": 68,
  "h1Count": 1,
  "wordCount": 1247,
  "keywordDensity": "1.42%",
  "readabilityScore": "58.3",
  "readabilityLabel": "Easy"
}

Sweet spots to aim for: title 50–60 chars, meta description 120–160 chars, one H1, keyword density 1–2%, Flesch score above 50 for general audiences.
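Those sweet spots are easy to turn into a reusable gate over the report object that auditPage returns. A minimal sketch (the thresholds are the ranges above; treat them as guidance, not hard rules):

```javascript
// Flag report fields that fall outside the target ranges discussed above.
// Expects the report shape produced by auditPage (titleLength, metaDescLength,
// h1Count, keywordDensity as a "1.42%" string, readabilityScore as a string).
function checkSweetSpots(report) {
  const issues = [];
  if (report.titleLength < 50 || report.titleLength > 60)
    issues.push(`Title is ${report.titleLength} chars (aim for 50–60)`);
  if (report.metaDescLength < 120 || report.metaDescLength > 160)
    issues.push(`Meta description is ${report.metaDescLength} chars (aim for 120–160)`);
  if (report.h1Count !== 1)
    issues.push(`Found ${report.h1Count} H1s (aim for exactly one)`);
  const density = parseFloat(report.keywordDensity);  // "1.42%" -> 1.42
  if (density < 1 || density > 2)
    issues.push(`Keyword density ${density}% (aim for 1–2%)`);
  if (parseFloat(report.readabilityScore) < 50)
    issues.push(`Flesch score ${report.readabilityScore} (aim for 50+)`);
  return issues;
}

const issues = checkSweetSpots({
  titleLength: 41, metaDescLength: 68, h1Count: 1,
  keywordDensity: '1.42%', readabilityScore: '58.3'
});
console.log(issues.length ? issues.join('\n') : 'All sweet spots hit ✅');
```

An empty array means every target range is hit; anything else is a human-readable to-do list you can print in CI or a pre-publish hook.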

Where a Dedicated Tool Fits: @power-seo/content-analysis vs seo-analyzer

Writing the script above is instructive — you understand exactly what's being measured. But once you're auditing dozens of pages, or you want Yoast-style scoring with React components baked in, hand-rolling stops scaling and you'll want more depth out of the box.

This is where the @power-seo/content-analysis vs. seo-analyzer comparison becomes relevant for real workflows. I tested @power-seo/content-analysis — part of the 17-package @power-seo ecosystem from ccbd.dev — against both a hand-rolled approach and the older seo-analyzer package on a content-heavy Next.js project.

npm install @power-seo/content-analysis

The package is a Yoast-style content analysis engine: it runs a battery of checks (keyword in title, meta description length, heading hierarchy, readability score, content depth) and returns a structured score with per-check results. Critically, it also ships React components, so you can surface the same analysis live in an editor UI without wiring up a separate API.

// content-analysis-check.mjs
import { analyzeContent } from '@power-seo/content-analysis';

const result = analyzeContent({
  html: '<h1>Power SEO Content Analysis</h1><p>Your article body...</p>',
  keyword: 'seo content analysis',
  meta: {
    title: 'Power SEO Content Analysis for Developers',
    description: 'A practical guide to automating SEO content analysis in Node.js.'
  }
});

console.log(result.score);        // e.g. 78
console.log(result.checks);       // array of { id, label, status, advice }
console.log(result.readability);  // Flesch-Kincaid score + label

/*
  Sample output:
  score: 78
  checks: [
    { id: 'keyword-in-title',   status: 'good',    advice: 'Keyword found in title.' },
    { id: 'meta-desc-length',   status: 'good',    advice: 'Meta description is 71 chars — within range.' },
    { id: 'keyword-density',    status: 'ok',      advice: 'Density 1.3% — within 1–2% target.' },
    { id: 'content-length',     status: 'improve', advice: 'Content is 620 words. Aim for 900+.' },
  ]
  readability: { score: 58.3, label: 'Easy' }
*/

Where it earns its place over a hand-rolled script: the structured checks array integrates directly into CI assertions (fail the build if any check is 'bad'), and the React components mean a content editor can get live feedback in the CMS without you building a separate scoring UI. Compared to seo-analyzer, which is DOM-rule-based and focused on HTML structure defects, @power-seo/content-analysis skews toward content quality signals — keyword coverage, readability, and depth — which maps more closely to what on-page SEO practitioners actually want to tune.
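The CI assertion idea is worth spelling out. Assuming the checks array has the { id, label, status, advice } shape shown in the sample output above, a small hypothetical gate script could fail the build on any 'bad' check:

```javascript
// ci-gate.mjs — hypothetical CI gate over the checks array shape shown above.
// Assumes each check is { id, status, ... } with status in
// 'good' | 'ok' | 'improve' | 'bad'.

function failingChecks(checks, failOn = ['bad']) {
  return checks.filter(c => failOn.includes(c.status));
}

// In CI this would come from analyzeContent(...); inlined here for illustration.
const checks = [
  { id: 'keyword-in-title', status: 'good' },
  { id: 'meta-desc-length', status: 'good' },
  { id: 'content-length',   status: 'improve' },
];

const failures = failingChecks(checks);
if (failures.length > 0) {
  console.error(`❌ ${failures.length} failing SEO check(s): ${failures.map(c => c.id).join(', ')}`);
  process.exitCode = 1;  // non-zero exit fails the CI job
} else {
  console.log('✅ All SEO checks passed');
}
```

Tightening failOn to ['bad', 'improve'] later is a one-line change, which is a nice way to ratchet content standards up over time.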

The full breakdown is documented in the @power-seo docs — worth reading if you're deciding between the two for a larger audit pipeline. The package itself is published on npm as @power-seo/content-analysis.

Plugging Audits Into Your CI Pipeline

The real unlock is making this part of your build — not a manual monthly task.

# .github/workflows/seo-audit.yml
name: SEO Content Audit

on:
  pull_request:
    paths:
      - 'content/**'
      - 'pages/**'

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '20'
      - run: npm ci
      - name: Run SEO audit
        run: node scripts/seo-audit.mjs > seo-report.json
        env:
          AUDIT_URL: ${{ secrets.STAGING_URL }}
          TARGET_KEYWORD: ${{ vars.PRIMARY_KEYWORD }}
      - name: Fail if readability < 30
        run: |
          SCORE=$(node -e "const r = require('./seo-report.json'); console.log(r.readabilityScore)")
          if (( $(echo "$SCORE < 30" | bc -l) )); then
            echo "❌ Readability score too low: $SCORE"
            exit 1
          fi

Now a readability regression or a missing meta description blocks the merge. No one writes <title>Untitled</title> when the CI is watching.

What I Learned

  • Rendering is a prerequisite, not an optional extra. No amount of content optimization matters if Googlebot can't read your page. Check crawlability before anything else.
  • Keyword density is a signal, not a target. Chasing exactly 1.5% will ruin your prose. Aim for the range and let natural writing get you there.
  • Automate the measurable, human-review the rest. Scripts catch missing meta tags and thin content reliably. Judging whether your intro is actually compelling? That's still on you.
  • Competitor TF-IDF diffs are underused. Comparing your semantic coverage against ranking pages is more actionable than most on-page checklists. It tells you what to add, not just what's broken.
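That last point deserves a sketch. A real TF-IDF diff needs a corpus for the IDF weighting, but even a plain term-frequency comparison against a ranking competitor's page surfaces missing topics. A simplified illustration (frequency only, with a hypothetical stopword list, not true TF-IDF):

```javascript
// Simplified competitor coverage diff: which terms does the competitor's
// page repeat that your page never uses? Plain term frequency only —
// a full TF-IDF would also down-weight corpus-common terms via IDF.

const STOPWORDS = new Set(['the', 'and', 'for', 'your', 'with', 'that', 'this']);

function termFrequencies(text) {
  const freq = new Map();
  for (const word of text.toLowerCase().match(/[a-z]{3,}/g) ?? []) {
    if (!STOPWORDS.has(word)) freq.set(word, (freq.get(word) ?? 0) + 1);
  }
  return freq;
}

// Terms the competitor uses minCount+ times that your page lacks entirely.
function coverageGaps(yourText, competitorText, minCount = 2) {
  const yours = termFrequencies(yourText);
  const theirs = termFrequencies(competitorText);
  return [...theirs.entries()]
    .filter(([term, count]) => count >= minCount && !yours.has(term))
    .sort((a, b) => b[1] - a[1])
    .map(([term, count]) => ({ term, count }));
}

console.log(coverageGaps(
  'seo audit keyword density readability',
  'seo audit schema markup schema markup structured data structured data'
));
```

Feed it the visible text of your page and a competitor's, and the returned terms are candidate subtopics to cover, sorted by how often the competitor repeats them.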

Let's Talk About It

Here's something worth discussing: why is SEO analysis for JavaScript-heavy websites fundamentally different from traditional SEO tools designed for static HTML?

Most SEO analyzers were built assuming the HTML you get from curl is the HTML Google sees. That's simply not true anymore for SPAs and hybrid-rendered apps. How have you handled this gap in your own projects — prerendering, SSR, dynamic rendering with a headless browser? Drop your approach in the comments. Genuinely curious what's working in production.