
I spent an entire afternoon wondering why my freshly deployed Next.js app wasn't showing up in Google Search — not even for its own brand name. Turns out, I was server-rendering the shell but client-rendering all the meaningful content. Google saw a blank <div id="__next"></div> and moved on. The fix was restructuring a single page.tsx and adding 6 lines of metadata config. This guide covers everything I wish I'd known: metadata, dynamic OG tags, structured data, and automated auditing — all with copy-paste-ready code.
Traditional SEO wisdom assumes the server delivers complete HTML. Next.js gives you four rendering strategies — SSG, SSR, ISR, and CSR — and only some of them are Google-friendly by default.
Google's crawler can execute JavaScript, but it processes pages in a two-wave system. The first wave indexes raw HTML. The second wave (days later) renders JavaScript. If your <title>, meta description, and canonical tags live inside a useEffect, you're invisible until wave two — and sometimes wave two never comes for low-priority pages.
The baseline rule: anything Google needs to understand your page must be in the initial HTML response.
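For contrast, here's the anti-pattern that kept my own site invisible. This is a purely illustrative client component (the file name and props are made up): because useEffect only runs in the browser after hydration, the title it sets never appears in the HTML that wave one indexes.
// components/BadTitle.tsx (anti-pattern, for illustration only)
'use client';

import { useEffect } from 'react';

export function BadTitle({ title }: { title: string }) {
  useEffect(() => {
    // Runs only in the browser, after hydration.
    // The server response Google indexes on wave one never contains this title.
    document.title = title;
  }, [title]);

  return null;
}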
Next.js 14+ makes this straightforward with the Metadata API. Here's the correct pattern:
// app/blog/[slug]/page.tsx
import { Metadata } from 'next';

type Props = {
  params: { slug: string };
};

// This runs on the SERVER — output lands in the initial HTML
export async function generateMetadata({ params }: Props): Promise<Metadata> {
  const post = await fetchPost(params.slug); // your data fetcher

  return {
    title: post.title,
    description: post.excerpt,
    alternates: {
      canonical: `https://your-site.com/blog/${params.slug}`,
    },
    openGraph: {
      title: post.title,
      description: post.excerpt,
      url: `https://your-site.com/blog/${params.slug}`,
      type: 'article',
      publishedTime: post.publishedAt,
      images: [{ url: post.ogImage, width: 1200, height: 630 }],
    },
  };
}

export default async function BlogPost({ params }: Props) {
  const post = await fetchPost(params.slug);
  return <article dangerouslySetInnerHTML={{ __html: post.content }} />;
}
Result: Every blog post gets a unique <title>, description, canonical URL, and OG image — all in the initial HTML, all crawlable on wave one.
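For routes that don't depend on params (a pricing page, an about page), you can skip generateMetadata entirely and export a static metadata object. A minimal sketch with placeholder copy:
// app/pricing/page.tsx
import type { Metadata } from 'next';

export const metadata: Metadata = {
  title: 'Pricing | Your Site',
  description: 'Simple, transparent pricing for every plan.',
  alternates: {
    canonical: 'https://your-site.com/pricing',
  },
};

export default function PricingPage() {
  return <h1>Pricing</h1>;
}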
Rich results (star ratings, FAQs, breadcrumbs in SERPs) come from structured data. Most Next.js tutorials stop at meta tags and never touch this. That's leaving SERP real estate on the table.
Add a JSON-LD component — don't use a third-party library for this; it's barely a dozen lines of code:
// components/JsonLd.tsx
type JsonLdProps = {
  data: Record<string, unknown>;
};

export function JsonLd({ data }: JsonLdProps) {
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(data) }}
    />
  );
}
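If any schema field can contain user-supplied text, it's worth escaping < so a stray </script> can't close the tag early. A small optional hardening variant (not needed for trusted CMS data):
// components/JsonLd.tsx (hardened variant: escape "<" in the serialized JSON)
export function JsonLd({ data }: JsonLdProps) {
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{
        __html: JSON.stringify(data).replace(/</g, '\\u003c'),
      }}
    />
  );
}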
Then use it in your page:
// app/blog/[slug]/page.tsx
import { JsonLd } from '@/components/JsonLd';

export default async function BlogPost({ params }: Props) {
  const post = await fetchPost(params.slug);

  const articleSchema = {
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline: post.title,
    description: post.excerpt,
    author: {
      '@type': 'Person',
      name: post.authorName,
      url: `https://your-site.com/authors/${post.authorSlug}`,
    },
    datePublished: post.publishedAt,
    dateModified: post.updatedAt,
    image: post.ogImage,
    publisher: {
      '@type': 'Organization',
      name: 'Your Site',
      logo: { '@type': 'ImageObject', url: 'https://your-site.com/logo.png' },
    },
  };

  return (
    <>
      <JsonLd data={articleSchema} />
      <article dangerouslySetInnerHTML={{ __html: post.content }} />
    </>
  );
}
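The same JsonLd component takes any schema type. Since breadcrumbs came up earlier, here's a sketch of a BreadcrumbList for the same post (the trail and URLs are placeholders you'd adapt):
// In the same BlogPost component, next to articleSchema
const breadcrumbSchema = {
  '@context': 'https://schema.org',
  '@type': 'BreadcrumbList',
  itemListElement: [
    { '@type': 'ListItem', position: 1, name: 'Home', item: 'https://your-site.com' },
    { '@type': 'ListItem', position: 2, name: 'Blog', item: 'https://your-site.com/blog' },
    { '@type': 'ListItem', position: 3, name: post.title },
  ],
};

// Rendered alongside the Article schema:
// <JsonLd data={breadcrumbSchema} />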
Test it with Google's Rich Results Test. If your schema is valid, you'll see the page flagged as eligible for enhanced SERP features within a few index cycles.
A static sitemap.xml file is a maintenance nightmare. Every time you add a page, you forget to update it. Next.js has built-in support for dynamic sitemaps — use it:
// app/sitemap.ts
import { MetadataRoute } from 'next';

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const posts = await fetchAllPosts(); // your CMS/DB call

  const blogUrls = posts.map(post => ({
    url: `https://your-site.com/blog/${post.slug}`,
    lastModified: new Date(post.updatedAt),
    changeFrequency: 'weekly' as const,
    priority: 0.8,
  }));

  return [
    {
      url: 'https://your-site.com',
      lastModified: new Date(),
      changeFrequency: 'daily',
      priority: 1,
    },
    {
      url: 'https://your-site.com/blog',
      lastModified: new Date(),
      changeFrequency: 'daily',
      priority: 0.9,
    },
    ...blogUrls,
  ];
}
A robots.ts in the same directory works the same way:
// app/robots.ts
import { MetadataRoute } from 'next';

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        userAgent: '*',
        allow: '/',
        disallow: ['/api/', '/admin/', '/_next/'],
      },
    ],
    sitemap: 'https://your-site.com/sitemap.xml',
  };
}
Next.js serves these at /sitemap.xml and /robots.txt automatically — no extra configuration needed.
The three sections above fix known problems. But what about the ones you don't know about yet — a page where someone accidentally removed the canonical, or a dynamic route where the OG image URL is malformed?
This is where automated auditing earns its keep. I wired @power-seo/analytics into my pre-deploy script to catch regressions before they hit production:
npm install @power-seo/analytics --save-dev
// scripts/seo-audit.mjs
import { auditPage } from '@power-seo/analytics';

const criticalPages = [
  'https://staging.your-site.com/',
  'https://staging.your-site.com/blog',
  'https://staging.your-site.com/blog/your-latest-post',
  'https://staging.your-site.com/pricing',
];

const results = await Promise.all(
  criticalPages.map(url =>
    auditPage(url, {
      checkCanonical: true,
      checkStructuredData: true,
      checkOpenGraph: true,
    })
  )
);

const failures = results.filter(r => r.score < 85);

if (failures.length > 0) {
  console.error('\n❌ SEO audit failed:\n');
  failures.forEach(f => {
    console.error(`  ${f.url} — Score: ${f.score}/100`);
    f.issues.forEach(issue => console.error(`    ✗ ${issue.message}`));
  });
  process.exit(1);
}

console.log('All pages passed SEO audit');
Add it to your package.json:
{
  "scripts": {
    "build": "next build",
    "seo:audit": "node scripts/seo-audit.mjs",
    "predeploy": "npm run build && npm run seo:audit"
  }
}
Now a broken canonical or missing OG tag fails the deploy before it ever reaches production. I've written up the full configuration options and CI integration at Next.js SEO if you want to go deeper.
One caveat worth repeating: <head> tags managed by useEffect or by third-party libraries inside client components will not consistently appear in the initial HTML.
Why is SEO analysis for JavaScript websites different from traditional SEO? Is it the two-wave crawling, the hydration timing, the client-side routing — or something else that's bitten you specifically?
Drop your war story in the comments. I'm especially curious whether anyone has hit the wave-two delay in production and how long it actually took Google to render their JS-heavy pages.