Turning Weekly GitHub Activity Into Blog Posts on Notion + DEV.to


Tags: #devchallenge #notionchallenge #mcp #ai
Yash Kumar Saini


This is a submission for the Notion MCP Challenge

What I Built

Every Monday standup, someone asks: "What did you work on last week?" And every Monday, I stare at my screen trying to remember. Did I merge that PR on Wednesday or Thursday? Was that refactor in the auth module or the pipeline? How many repos did I even touch?

I got tired of that blank moment. So I built DevNotion — a 3-agent pipeline powered by Mastra that harvests my entire week of GitHub activity, narrates it into a first-person blog post using Gemini, and publishes it to Notion (as a planner-style page with structured tables) and DEV.to (as a draft article). Every Sunday, automatically, via GitHub Actions.

No more Monday amnesia. The blog writes itself.

What it actually does

  1. Harvests my GitHub activity via GraphQL — commits, PRs, issues, code reviews, discussions, language stats, contribution streak
  2. Narrates the raw data into a casual, first-person blog post using Gemini (with a deterministic fallback if the LLM is unavailable)
  3. Publishes to two platforms simultaneously:
    • Notion — a planner-style page with stats tables, repo breakdowns, PR/issue/review tables, language breakdown, and the full blog post
    • DEV.to — a draft article ready for review

Key features

  • 3 specialized agents — each does one thing well (harvest, narrate, publish)
  • LLM only where it adds value — harvest and publish are deterministic, zero token overhead
  • 4 blog tone profiles — casual (default), professional, technical, storytelling
  • Planner-style Notion pages — not just a wall of text, but structured tables with stats, repos, PRs, issues, reviews, and languages
  • Notion MCP integration — full Notion API surface via Model Context Protocol
  • Notion Markdown Content API — write rich markdown directly to pages (the real game changer)
  • DEV.to draft publishing — articles created as drafts, ready to review and publish
  • GitHub Actions CI — weekly cron (Sundays 08:00 UTC) + manual dispatch
  • Blog log in README — CI auto-commits a metrics table after each run
  • Fallback chain — always produces a blog post, even if Gemini is down
  • Rate limiting everywhere: p-queue + p-retry for both the Notion and DEV.to APIs

Architecture

(Diagram: DevNotion pipeline architecture)

| Step | Agent | What it does |
| --- | --- | --- |
| Harvest | github-harvest-agent | Fetches weekly GitHub data via GraphQL (deterministic) |
| Narrate | narrator-agent | Writes a first-person blog post from the data |
| Publish | publisher-agent | Creates the Notion planner page + DEV.to draft via direct APIs |

The pipeline only uses an LLM where it genuinely adds value — narration. Harvest and publish are pure function calls. No token overhead, no hallucination risk, faster execution.

Architecture Deep-Dive

Why 3 agents, not 1?

I could've built one mega-agent that does everything. But that's a recipe for:

  • Burning tokens on deterministic work (fetching GitHub data doesn't need an LLM)
  • Hallucinating URLs and stats (the publish step should never make things up)
  • Debugging nightmares (which part of the monolith failed?)

Instead, each agent is a specialist. The workflow chains them together:

export const weeklyDispatchWorkflow = createWorkflow({
  id: 'weekly-dispatch',
  inputSchema: z.object({ weekStart: z.string() }),
  outputSchema: PublishOutputSchema,
})
  .then(harvestStep)
  .then(narrateStep)
  .then(publishStep)
  .commit();

Three steps, chained with .then(), committed as a single workflow. Mastra handles the data handoff between steps automatically.
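To make the handoff concrete, here's a dependency-free sketch of the chaining idea — this is not Mastra's implementation, just an illustration of how each step's output type becomes the next step's input type, with toy stand-ins for the three agents:

```typescript
// A step transforms one typed payload into the next.
type Step<I, O> = (input: I) => Promise<O>;

// Chain three steps; the compiler enforces that the types line up.
function chain<A, B, C, D>(
  s1: Step<A, B>,
  s2: Step<B, C>,
  s3: Step<C, D>,
): Step<A, D> {
  return async (input) => s3(await s2(await s1(input)));
}

// Toy stand-ins for harvest -> narrate -> publish
const harvest: Step<{ weekStart: string }, { commits: number }> =
  async () => ({ commits: 12 });
const narrate: Step<{ commits: number }, { blog: string }> =
  async ({ commits }) => ({ blog: `This week I made ${commits} commits.` });
const publish: Step<{ blog: string }, { url: string }> =
  async () => ({ url: 'https://example.com/post' });

const pipeline = chain(harvest, narrate, publish);
```

The real workflow gets the same guarantee from the Zod schemas on each step: a mismatch between one step's output and the next step's input fails validation instead of silently propagating bad data.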

Harvest: deterministic data, zero LLM

The harvest step calls GitHub's GraphQL API directly — no agent reasoning needed:

const harvestStep = createStep({
  id: 'harvest-github',
  inputSchema: z.object({ weekStart: z.string() }),
  outputSchema: WeeklyDataSchema,
  execute: async ({ inputData }) => {
    const data = await fetchWeeklyContributions(inputData.weekStart);
    return WeeklyDataSchema.parse(data);
  },
});

One GraphQL query pulls commits, PRs, issues, reviews, discussions, language stats, and contribution streak for the week. The response gets validated through a Zod schema. No LLM in the loop — this is pure data fetching.
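The post doesn't show the actual query, but GitHub's GraphQL API exposes this data through `contributionsCollection` on the viewer. A sketch of the kind of query the harvest step would run (field selection is illustrative, not the project's exact query):

```typescript
// Weekly contribution query against GitHub's GraphQL API.
// The from/to variables bound the query to one week.
const WEEKLY_QUERY = `
  query ($from: DateTime!, $to: DateTime!) {
    viewer {
      contributionsCollection(from: $from, to: $to) {
        totalCommitContributions
        totalPullRequestContributions
        totalIssueContributions
        totalPullRequestReviewContributions
        commitContributionsByRepository {
          repository { name primaryLanguage { name } }
          contributions { totalCount }
        }
      }
    }
  }
`;
```

One POST to `https://api.github.com/graphql` with this query and the week's date range returns everything in a single round trip, which is why no agent reasoning is needed here.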

Narrate: LLM with a fallback chain

This is where the LLM earns its keep. The narrator agent takes raw JSON and writes a blog post that sounds like I wrote it myself. The system prompt has a full personality profile — it writes in first person, knows my tech stack (Python, Rust, TypeScript), references my OSS work, and matches one of four tone profiles.

But LLMs can be flaky. So the narrate step has a fallback chain:

const narrateStep = createStep({
  id: 'narrate',
  execute: async ({ inputData, mastra }) => {
    const agent = mastra!.getAgent('narrator-agent');
    let blog;

    try {
      const result = await agent.generate(prompt);
      const parsed = parseFrontmatter(result.text);
      if (parsed.success) {
        blog = parsed.data.blog;
      } else {
        blog = buildFallbackNarration(inputData).blog;
      }
    } catch (err) {
      blog = buildFallbackNarration(inputData).blog;
    }

    return { blog, weeklyData: inputData };
  },
});
  1. Gemini generates a markdown blog with YAML frontmatter
  2. If parsing fails → deterministic fallback builds a basic post from raw data

A blog post is always produced, even if Gemini is completely down.
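The `parseFrontmatter` helper isn't shown in the post; a minimal version of the idea just splits on the `---` delimiters and treats any malformed output as a parse failure, which then routes to the fallback:

```typescript
// Minimal frontmatter parser sketch (illustrative, not the project's
// actual parseFrontmatter). Expects: ---\nkey: value\n---\nbody
interface ParsedBlog {
  success: boolean;
  headline?: string;
  body?: string;
}

function parseSimpleFrontmatter(text: string): ParsedBlog {
  const match = text.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return { success: false };
  const titleLine = match[1]
    .split('\n')
    .find((line) => line.startsWith('title:'));
  if (!titleLine) return { success: false };
  return {
    success: true,
    headline: titleLine.slice('title:'.length).trim(),
    body: match[2].trim(),
  };
}
```

The key design point: parsing returns a `success` flag instead of throwing, so the step can branch to the deterministic fallback without a second try/catch.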

Publish: Notion planner + DEV.to draft

The publish step is where things get interesting. It doesn't just dump text into Notion — it builds a planner-style page with structured tables:

const publishStep = createStep({
  id: 'publish',
  execute: async ({ inputData }) => {
    const { blog, weeklyData } = inputData;

    // 1. Create Notion page (returns the new page's id + URL)
    const createResult = await createNotionPage(blog.headline);

    // 2. Create DEV.to draft (so the link goes into the Notion planner)
    const devtoResult = await createDevtoArticle({
      title: blog.headline,
      body_markdown: buildDevtoMarkdown(blog),
      tags: blog.tags,
      published: false,
    });

    // 3. Write planner markdown to Notion (includes DEV.to link)
    //    (result property names are illustrative)
    const links = { notionUrl: createResult.url, devtoUrl: devtoResult.url };
    const plannerMd = buildPlannerMarkdown(weeklyData, blog, links);
    await writeNotionMarkdown(createResult.pageId, plannerMd);
  },
});

The order matters: DEV.to draft gets created before writing the Notion page content, so the Notion planner can include a link to the DEV.to draft. Cross-platform linking, done right.

Each Notion page includes:

  • Published Links table — Notion page URL + DEV.to draft edit link
  • Week at a Glance — commits, PRs, issues, reviews, lines added/removed, streak
  • Active Repositories — repo name, commits, language, line changes
  • Pull Requests / Issues / Reviews / Discussions — structured tables
  • Languages — top languages by commit count
  • Full blog post — the narrated content below a separator
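Since the whole page is written as markdown (see the Markdown Content API below), each of those sections is just string building. A sketch of how the "Week at a Glance" table could be assembled (the project's `buildPlannerMarkdown` is not shown in the post; this illustrates the approach):

```typescript
// Build a markdown stats table for the planner page.
interface WeekStats {
  commits: number;
  prs: number;
  issues: number;
  reviews: number;
}

function weekAtAGlance(stats: WeekStats): string {
  return [
    '## Week at a Glance',
    '',
    '| Metric | Count |',
    '| --- | --- |',
    `| Commits | ${stats.commits} |`,
    `| Pull Requests | ${stats.prs} |`,
    `| Issues | ${stats.issues} |`,
    `| Reviews | ${stats.reviews} |`,
  ].join('\n');
}
```

Notion renders the markdown table as a native table block, so the planner layout costs nothing beyond assembling strings.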




How I Used Notion MCP

This is the part I'm most excited about. DevNotion uses the Notion MCP Server in two complementary ways:

1. Notion MCP Server via @mastra/mcp

The publisher agent integrates with the official @notionhq/notion-mcp-server through Mastra's MCP client. This gives the agent access to the full Notion API surface via Model Context Protocol:

import { MCPClient } from '@mastra/mcp';

export const notionMcp = new MCPClient({
  servers: {
    notion: {
      command: 'npx',
      args: ['-y', '@notionhq/notion-mcp-server'],
      env: {
        OPENAPI_MCP_HEADERS: JSON.stringify({
          Authorization: `Bearer ${env.NOTION_TOKEN}`,
          'Notion-Version': '2022-06-28',
        }),
      },
    },
  },
  timeout: 30000,
});

The MCP tools are loaded lazily with a graceful fallback — if the MCP server fails to start, the direct tools still work independently:

export async function getNotionMcpTools(): Promise<Record<string, any>> {
  try {
    return await notionMcp.listTools();
  } catch (err) {
    console.warn('MCP: Notion MCP server unavailable, using direct tools only');
    return {};
  }
}

2. Direct tools + MCP tools merged

The publisher agent merges both tool sets — MCP tools for the full Notion API surface, and direct tools for capabilities MCP doesn't cover:

// Direct tools (Markdown Content API + DEV.to — not available via MCP)
const directTools = {
  createNotionPage: createNotionPageTool,
  writeMarkdown: writeMarkdownTool,
  searchNotion: searchNotionTool,
  updateNotionPage: updateNotionPageTool,
};

// Merge: Notion MCP tools + direct tools
const mcpTools = await getNotionMcpTools();
const tools = { ...mcpTools, ...directTools };

This dual approach means the publisher agent gets the best of both worlds — MCP's broad API surface for interactive use in the Mastra playground, plus direct tools for the automated workflow.
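One detail worth noting: because `directTools` is spread last, a direct tool wins whenever both sets define the same name. A toy illustration of that spread precedence (the string values here are placeholders, not real tools):

```typescript
// Object spread: later spreads override earlier keys with the same name.
const mcpTools = { searchNotion: 'mcp-version', createPage: 'mcp-version' };
const directTools = { searchNotion: 'direct-version' };

const tools = { ...mcpTools, ...directTools };
// tools.searchNotion is 'direct-version'; tools.createPage is 'mcp-version'
```

That ordering is the behavior you want when your own implementations should take precedence over the generic MCP surface.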

3. The Markdown Content API (the game changer)

This is the Notion feature that made the planner-style pages possible. Instead of constructing Notion blocks one by one (which is painful and rate-limit-heavy), I write the entire page as markdown in one API call:

const response = await fetch(
  `https://api.notion.com/v1/pages/${pageId}/markdown`,
  {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${env.NOTION_TOKEN}`,
      'Content-Type': 'application/json',
      'Notion-Version': '2026-03-11',
    },
    body: JSON.stringify({
      type: 'replace_content',
      replace_content: { new_str: markdown },
    }),
  },
);

One PATCH request replaces the entire page content with rich markdown — including tables, headings, blockquotes, links, code blocks, everything. This is what powers the planner-style layout with structured stats tables + the full blog post, all in a single API call.

4. Rate limiting

Notion's API allows roughly 3 requests per second. Every Notion call (MCP and direct) goes through a shared rate limiter:

const queue = new PQueue({ concurrency: 1, interval: 334, intervalCap: 1 });

async function rateLimited<T>(fn: () => Promise<T>): Promise<T> {
  return queue.add(() => pRetry(fn, { retries: 3 })) as Promise<T>;
}

p-queue throttles concurrency, p-retry handles transient failures. I learned this the hard way — without rate limiting, the Notion API will 429 you into oblivion when you're creating a page, writing markdown, and updating the icon in quick succession.

Lessons Learned

Rate limits are the real boss

Notion (3 req/s), DEV.to (30 req/30s), GitHub GraphQL (5,000 points/hr) — every API has its own throttle. I ended up with p-queue + p-retry wrappers around everything. The rate limiter code is almost identical across all three services, and honestly, it should probably be a shared utility. But three near-identical copies is better than a premature abstraction.
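If you did extract that shared utility, it could be a small factory that returns a rate-limited wrapper, one instance per service. Here's a dependency-free sketch of the idea (the project itself uses p-queue + p-retry; this illustration only enforces spacing between call starts, without p-queue's serialization or p-retry's retries):

```typescript
// Factory: returns a wrapper that spaces out the start of each call
// by at least `intervalMs` milliseconds.
function makeRateLimiter(intervalMs: number) {
  let last = 0;
  let tail: Promise<void> = Promise.resolve();
  return function rateLimited<T>(fn: () => Promise<T>): Promise<T> {
    const run = tail.then(async () => {
      const wait = last + intervalMs - Date.now();
      if (wait > 0) await new Promise<void>((r) => setTimeout(r, wait));
      last = Date.now();
    });
    tail = run;
    return run.then(fn);
  };
}

// One limiter per service, tuned to each API's documented throttle
const notionLimited = makeRateLimiter(334);  // ~3 req/s
const devtoLimited = makeRateLimiter(1000);  // well under 30 req/30s
```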

Structured output is slower than you'd think

I originally used Gemini's native JSON schema for structured output (agent.generate(prompt, { structuredOutput: { schema } })). It worked, but added 20-40 seconds per call. Switching to plain text generation with YAML frontmatter parsing was 3-4x faster and just as reliable. The deterministic fallback catches the rare parsing failure.

Gemini model musical chairs

I've been through three Gemini models on this project: gemini-2.5-flash-preview-04-17 (retired), gemini-2.5-flash (stable but slow for structured output), and now gemini-3-flash-preview (current). The lesson: always make the model configurable via env vars. Hardcoding model IDs is a recipe for broken deploys.
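In practice that means one line of indirection instead of a hardcoded ID — read the model from the environment with a default fallback (the variable name `GEMINI_MODEL` here is illustrative):

```typescript
// Resolve the model ID from the environment, with a fallback default,
// so a model retirement is a config change rather than a code change.
function resolveModelId(env: Record<string, string | undefined>): string {
  return env.GEMINI_MODEL ?? 'gemini-3-flash-preview';
}

// In the app: const modelId = resolveModelId(process.env);
```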

The Zod conflict that broke everything

Mastra and my code both depend on Zod, but on different versions. Having two Zod instances means z.string() from one isn't recognized by the other — schema validation just silently fails. The fix: a pnpm override in package.json:

{
  "pnpm": {
    "overrides": {
      "zod": "$zod"
    }
  }
}

Forces pnpm to deduplicate to one Zod version. Took me way too long to figure that out.

Direct API calls beat agent reasoning for deterministic work

The harvest and publish steps started as full agent calls. But an LLM doesn't add anything when the task is "call this GraphQL endpoint and return the result." Switching to direct function calls made the pipeline faster, cheaper, and more predictable. Only use an LLM where you need creativity or reasoning — everywhere else, just write a function.


Built with Mastra, Gemini, Notion API, and a lot of coffee. If you've ever forgotten what you worked on last week, give DevNotion a try.