Why I'm betting on AI-curated directories when Google AI Overviews answer the same queries

The core counterargument to my three directory sites is Google's own AI Overviews. Here's the falsifiable 6-month bet I'm making and what would prove me wrong.

The obvious counterargument to everything I'm building is this: Google already does it. You type "best AI tools for video editing" into Google and an AI Overview surfaces a curated list, synthesized from the same kind of data I maintain, without requiring a click. My three directory sites — Top AI Tools, Find Games Like, and Open Alternative To — are competing with a feature baked into the world's dominant search engine.

I launched these sites on April 23, 2026, built on an architecture that runs at about $25/month. Traffic is essentially zero: the sites have been indexed for three weeks, and organic crawling takes time. The question I keep returning to isn't whether Google will eventually index my pages. It's whether anyone will prefer clicking through to my site over reading the AI Overview box that already answered the same question.

Here's my honest, falsifiable position.

The bet, stated plainly

By October 2026 — six months post-launch — at least one of the three sites will show organic click trends in Google Search Console indicating real query traffic to specific comparison or filtered-browse pages. I define that as: at least 200 non-homepage organic clicks per month, sustained for two consecutive months, from queries I didn't directly drive through social or newsletter posts.

If that doesn't happen, I'll publish the Search Console screenshots and write a post explaining what I got wrong. I'm committing to that here.

The counterargument I take seriously

AI Overviews have gotten genuinely good at list-and-compare synthesis. If you search "open source alternative to Notion" today, Google often returns a four-item structured list with one-sentence descriptions directly in the Overview box. My Open Alternative To site covers that territory. The AI Overview absorbs the zero-click version of that query.

The optimistic response is: "my site appears as a citation source." The pessimistic response is: "Google consumes your signal and stops sending clicks." The pessimistic version has supporting evidence — industry-wide CTR on informational queries dropped measurably as AI Overviews expanded through 2025, and the trend hasn't reversed.

I don't think the pessimistic version is the whole story, but I'm not dismissing it. The most dangerous move is to assume the counterargument is wrong without designing around it.

Where AI Overviews have structural blind spots

AI Overviews are strong at synthesizing "what exists." They're weaker at three things I've deliberately built for.

Attribute-based filtering. If someone wants "open source Notion alternatives that work offline and have a mobile app," AI Overviews give hedged prose answers because they're synthesizing text, not querying structured fields. My Turso DB has works_offline, has_mobile_app, and last_commit_date as typed columns. Faceted filtering on those fields is something a browseable directory does better than a language model writing a paragraph about the general landscape.
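
To make that concrete, here's a minimal sketch of what faceted filtering looks like against Turso's libSQL client. The column names come from the paragraph above; the tools table, the alternative_to column, and the overall schema are my assumptions, not the actual production code.

```typescript
// Sketch: faceted filtering over typed columns in Turso (libSQL).
// works_offline, has_mobile_app, and last_commit_date are the columns named
// above; the `tools` table and `alternative_to` column are assumed.
import { createClient } from "@libsql/client";

const db = createClient({
  url: process.env.TURSO_DATABASE_URL!,
  authToken: process.env.TURSO_AUTH_TOKEN,
});

// "Open source Notion alternatives that work offline and have a mobile app"
// becomes a structured query rather than prose synthesis.
export async function filterAlternatives(alternativeTo: string) {
  const result = await db.execute({
    sql: `SELECT name, repo_url, last_commit_date
            FROM tools
           WHERE alternative_to = ?
             AND works_offline = 1
             AND has_mobile_app = 1
           ORDER BY last_commit_date DESC`,
    args: [alternativeTo],
  });
  return result.rows;
}
```

A language model can approximate this answer in prose; a database answers it exactly.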

Editorial negative space. My game recommender includes "avoid if" caveats: structured fields answering "who should skip this?", generated by a Claude Haiku prompt that specifically forces a critical answer. AI Overviews don't have a mechanism to surface structured negatives. They default to positive framing, which means someone with a specific disqualifying requirement gets an unhelpful answer.
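
For illustration, here's roughly what a forcing prompt of that kind could look like. The idea of a mandatory critical field is the real design described above; the exact wording, field names, and JSON contract are invented for this sketch.

```typescript
// Sketch of an "avoid if" forcing prompt. The mandatory-negative idea is the
// real design; the wording and field names here are illustrative assumptions.
const AVOID_IF_PROMPT = `You are writing directory metadata for a game.
Return JSON with exactly these keys:
  "avoid_if": one sentence naming who should SKIP this game and why.
  "audience_fit": one sentence naming who it suits best.
"avoid_if" must be a genuine criticism. Answers like "this game is for
everyone" are invalid output.`;

// The kind of structured negative the directory would store, e.g.:
// { "avoid_if": "Skip it if you play for story; the campaign is a tutorial.",
//   "audience_fit": "Best for players who enjoy optimizing build orders." }
```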

Freshness on maintenance status. The ETL that populates the AI tools directory pulls GitHub commit activity weekly. A tool that hasn't been touched in 14 months is marked as low activity. AI Overviews don't distinguish between a tool actively maintained in 2026 and one that peaked in 2024 — they rely on the recency of web mentions, which can lag by months after a project goes dormant.
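
A hedged sketch of that staleness check, assuming the public GitHub REST API and a simple cutoff. The 14-month threshold is from the pipeline described above; the flag name and implementation details are mine.

```typescript
// Sketch: weekly maintenance-status check against the GitHub REST API.
// The 14-month threshold is from the post; everything else is assumed.
const STALE_AFTER_MS = 14 * 30 * 24 * 60 * 60 * 1000; // ~14 months in ms

export async function maintenanceStatus(owner: string, repo: string) {
  // Fetch only the most recent commit on the default branch.
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/commits?per_page=1`,
    { headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` } },
  );
  if (!res.ok) throw new Error(`GitHub API ${res.status} for ${owner}/${repo}`);

  const [latest] = await res.json();
  const lastCommit = new Date(latest.commit.committer.date);
  const isLowActivity = Date.now() - lastCommit.getTime() > STALE_AFTER_MS;

  return {
    last_commit_date: lastCommit.toISOString(),
    activity: isLowActivity ? "low" : "active",
  };
}
```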

None of these defenses are permanent. Google could build structured attribute filtering into AI Overviews. But they require deliberate pipeline design, not just synthesis, and the gap exists now.

The downstream click thesis

Even if my sites lose the zero-click battle on broad discovery terms, there's a second query type I'm explicitly targeting: the downstream comparison query.

The sequence: someone types "Notion alternatives" into Google, gets an AI Overview naming four tools, then types "Appflowy vs Anytype performance" to compare the two they're considering. That second query is post-AI-Overview research. It has commercial intent. It wants a verdict, not another list.

For that query, a page with structured attribute comparison, a clear verdict, and fast load time competes directly with another AI-style answer — and structured data beats generative prose for "which one wins on attribute X." This is partly why I chose static SSG over dynamic AI rendering for these sites: a fast, indexable page with typed comparison fields is what a second-stage research click needs.
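
As one plausible shape of that choice: the post commits to static SSG on Vercel but not to a specific framework, so here's a sketch in Next.js App Router conventions, with data inlined where the real site would read from Turso.

```typescript
// Sketch: pre-rendering "X vs Y" comparison pages at build time.
// Next.js App Router conventions assumed; the framework, types, and sample
// data are illustrative, not the actual site code.

// Typed comparison fields, mirroring the structured columns described above.
type ComparisonRow = { attribute: string; a: string; b: string };
type Comparison = { slug: string; rows: ComparisonRow[]; verdict: string };

// Stand-in for the ETL output; in practice this would query Turso.
const comparisons: Comparison[] = [
  {
    slug: "appflowy-vs-anytype",
    rows: [{ attribute: "Works offline", a: "Yes", b: "Yes" }],
    verdict: "AppFlowy, if offline-first sync matters most.",
  },
];

// Every pair becomes a static page at build time: fast, indexable, and with
// no runtime model call in the request path.
export function generateStaticParams() {
  return comparisons.map(({ slug }) => ({ slug }));
}
```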

| Query type | AI Overview strength | Directory strength |
| --- | --- | --- |
| Discovery ("best tools for X") | High: often answers directly | Low for zero-click intent |
| Comparison ("X vs Y, which wins") | Medium: hedges, rarely commits | High: structured attributes + verdict |
| Filtered browse ("offline + mobile app") | Low: prose, no filters | High: faceted structured data |
| Freshness ("is X still maintained?") | Inconsistent: lags commits | High: weekly ETL refresh |

The comparison and filtered-browse rows carry the actual weight of this bet.

Why the cost structure matters for intellectual honesty

At $25/month, I can run this experiment for a year without needing revenue to justify continuing. I'm not under pressure to interpret ambiguous signals optimistically.

Compare that to a project burning $200/month on infrastructure: you'd rationalize flat Search Console data as "still in the sandbox phase" past the point where the data actually says something. The full cost breakdown is genuinely minimal — Vercel Pro at $20, Turso starter at $0, Claude Haiku API in single-digit dollars for monthly ETL runs, GitHub Actions on free minutes.

I won't claim AdSense is approved or revenue is flowing until it is. So far, AdSense has rejected the *.vercel.app version of the sites. I've moved to custom domains and verified them in Search Console, and I'm waiting for real crawl data before making any claims about what's working.

What would change my mind

Three outcomes would tell me the bet is wrong:

Impressions but near-zero clicks at 90 days. If Search Console shows my pages appearing as AI Overview citation sources but click rates stay near zero on comparison pages specifically, Google is extracting my signal without distributing traffic. That's the worst-case scenario — I'd need to rethink the format entirely.

AdSense keeps rejecting after genuine depth improvements. The original rejection was partly a *.vercel.app domain issue, but if Google's classifier still rates the pages as thin after I've rebuilt with real structured content and specific editorial attributes, my model of what "quality" means to the classifier is wrong.

Comparison queries migrate fully to LLM chat. If people stop typing "X vs Y" into Google and start asking ChatGPT directly, the downstream click I'm betting on disappears. I don't see evidence of this happening at scale for research involving specific attribute constraints — but I'm monitoring query volume patterns month-over-month.

The first outcome is the one I'd want to see early. Impressions with near-zero clicks on comparison pages by month 3 would tell me to pivot the format immediately rather than wait six months for a conclusion I could have reached sooner.
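
For that early check, here's a hedged sketch of how the pattern could be monitored with the Search Console API. The API doesn't label AI Overview citations, so CTR on comparison pages is the proxy; the "-vs-" URL convention and the 1% alarm threshold are my assumptions.

```typescript
// Sketch: flag the "impressions but near-zero clicks" failure mode on
// comparison pages, using the Search Console API via the googleapis client.
import { google } from "googleapis";

export async function comparisonPageCtr(siteUrl: string) {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/webmasters.readonly"],
  });
  const searchconsole = google.searchconsole({ version: "v1", auth });

  const res = await searchconsole.searchanalytics.query({
    siteUrl,
    requestBody: {
      startDate: "2026-05-01",
      endDate: "2026-07-31",
      dimensions: ["page"],
      // Restrict to comparison pages; "-vs-" in the path is assumed here.
      dimensionFilterGroups: [
        { filters: [{ dimension: "page", operator: "contains", expression: "-vs-" }] },
      ],
    },
  });

  for (const row of res.data.rows ?? []) {
    const { keys, impressions = 0, clicks = 0 } = row;
    // High impressions with ~zero clicks is the signal to pivot early.
    if (impressions > 100 && clicks / impressions < 0.01) {
      console.warn(`Citation-without-click pattern: ${keys?.[0]}`);
    }
  }
}
```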


FAQ

Why three sites instead of one authority site?

Three narrow sites let me test three different intent types simultaneously. Games-like, AI tools, and OSS alternatives attract different queries and different audiences. One site would take longer to produce the same signal volume about which format works. The original architecture post covers the reasoning.

How does Claude Haiku generate the structured editorial fields?

Each ETL run sends entries through a shared Claude Haiku client that uses system-prompt caching to amortize the cost across batch runs. The prompts are tuned to force specific attribute outputs — avoid-if caveats, audience fit, freshness status — not open-ended descriptions.
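
A minimal sketch of what that shared client could look like with the Anthropic SDK. Prompt caching via cache_control is a real API feature; the model alias, the one-line prompt, and the surrounding shape are assumptions, and a real cached system prompt would need to be long enough to clear the caching minimum.

```typescript
// Sketch: shared Haiku client with a cached system prompt, so batch ETL runs
// pay mostly for the short per-entry user message.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Placeholder; the real prompt is long enough to qualify for caching.
const SYSTEM_PROMPT =
  "Emit only the structured editorial fields: avoid_if, audience_fit, freshness_status.";

export async function annotateEntry(entryJson: string) {
  const msg = await client.messages.create({
    model: "claude-3-5-haiku-latest",
    max_tokens: 512,
    // cache_control marks the shared system prompt for reuse across the batch.
    system: [
      { type: "text", text: SYSTEM_PROMPT, cache_control: { type: "ephemeral" } },
    ],
    messages: [{ role: "user", content: entryJson }],
  });
  return msg.content;
}
```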

What if one site works and two don't?

That's a useful outcome, not a failure. The format that works tells me something specific about the intent type. I'll invest in what works and document what didn't.

Where will you publish the October 2026 verdict?

On this blog, with raw Search Console screenshots. I'll publish regardless of whether the numbers are favorable.


Part of an ongoing 6-month experiment running three AI-curated directory sites. The technical claims here are real; this article was AI-assisted.