
Yash Kumar Saini
This is a submission for the Notion MCP Challenge
Every Monday standup, someone asks: "What did you work on last week?" And every Monday, I stare at my screen trying to remember. Did I merge that PR on Wednesday or Thursday? Was that refactor in the auth module or the pipeline? How many repos did I even touch?
I got tired of that blank moment. So I built DevNotion — a 3-agent pipeline powered by Mastra that harvests my entire week of GitHub activity, narrates it into a first-person blog post using Gemini, and publishes it to Notion (as a planner-style page with structured tables) and DEV.to (as a draft article). Every Sunday, automatically, via GitHub Actions.
No more Monday amnesia. The blog writes itself.
| Step | Agent | What it does |
|---|---|---|
| Harvest | `github-harvest-agent` | Fetches weekly GitHub data via GraphQL (deterministic) |
| Narrate | `narrator-agent` | Writes a first-person blog post from the data |
| Publish | `publisher-agent` | Creates Notion planner page + DEV.to draft via direct APIs |
The pipeline only uses an LLM where it genuinely adds value — narration. Harvest and publish are pure function calls. No token overhead, no hallucination risk, faster execution.
I could've built one mega-agent that does everything. But that's a recipe for a bloated prompt, tangled tool permissions, and failures you can't trace back to a single step. Instead, each agent is a specialist. The workflow chains them together:
```typescript
export const weeklyDispatchWorkflow = createWorkflow({
  id: 'weekly-dispatch',
  inputSchema: z.object({ weekStart: z.string() }),
  outputSchema: PublishOutputSchema,
})
  .then(harvestStep)
  .then(narrateStep)
  .then(publishStep)
  .commit();
```
Three steps, chained with .then(), committed as a single workflow. Mastra handles the data handoff between steps automatically.
The harvest step calls GitHub's GraphQL API directly — no agent reasoning needed:
```typescript
const harvestStep = createStep({
  id: 'harvest-github',
  inputSchema: z.object({ weekStart: z.string() }),
  outputSchema: WeeklyDataSchema,
  execute: async ({ inputData }) => {
    const data = await fetchWeeklyContributions(inputData.weekStart);
    return WeeklyDataSchema.parse(data);
  },
});
```
One GraphQL query pulls commits, PRs, issues, reviews, discussions, language stats, and contribution streak for the week. The response gets validated through a Zod schema. No LLM in the loop — this is pure data fetching.
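For a sense of what that query looks like, here is a minimal sketch. The field names follow GitHub's `contributionsCollection` schema, but the actual query in DevNotion pulls far more (reviews, discussions, language stats, streak), and `fetchWeeklyContributions` here is a simplified stand-in:

```typescript
// Sketch of the weekly harvest query (illustrative subset of the fields).
const WEEKLY_QUERY = `
  query ($from: DateTime!, $to: DateTime!) {
    viewer {
      contributionsCollection(from: $from, to: $to) {
        totalCommitContributions
        totalPullRequestContributions
        totalIssueContributions
        totalPullRequestReviewContributions
      }
    }
  }
`;

// Simplified fetch wrapper: one POST to the GraphQL endpoint per week.
async function fetchWeeklyContributions(weekStart: string) {
  const from = new Date(weekStart);
  const to = new Date(from.getTime() + 7 * 24 * 60 * 60 * 1000);
  const res = await fetch('https://api.github.com/graphql', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      query: WEEKLY_QUERY,
      variables: { from: from.toISOString(), to: to.toISOString() },
    }),
  });
  const { data } = await res.json();
  return data.viewer.contributionsCollection;
}
```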
This is where the LLM earns its keep. The narrator agent takes raw JSON and writes a blog post that sounds like I wrote it myself. The system prompt has a full personality profile — it writes in first person, knows my tech stack (Python, Rust, TypeScript), references my OSS work, and matches one of four tone profiles.
But LLMs can be flaky. So the narrate step has a fallback chain:
```typescript
const narrateStep = createStep({
  id: 'narrate',
  execute: async ({ inputData, mastra }) => {
    const agent = mastra!.getAgent('narrator-agent');
    let blog;
    try {
      // prompt is assembled from inputData (construction omitted here)
      const result = await agent.generate(prompt);
      const parsed = parseFrontmatter(result.text);
      if (parsed.success) {
        blog = parsed.data.blog;
      } else {
        // Model replied, but the output didn't parse: deterministic fallback
        blog = buildFallbackNarration(inputData).blog;
      }
    } catch (err) {
      // Model unavailable or request failed: same deterministic fallback
      blog = buildFallbackNarration(inputData).blog;
    }
    return { blog, weeklyData: inputData };
  },
});
```
A blog post is always produced, even if Gemini is completely down.
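The fallback itself is just string templating over the harvested numbers. A minimal sketch of what `buildFallbackNarration` might look like (the `WeeklyData` shape here is illustrative, not DevNotion's actual schema):

```typescript
// Hypothetical deterministic fallback: templates the harvested stats
// into a plain first-person post. No LLM involved, so it cannot fail.
interface WeeklyData {
  weekStart: string;
  commits: number;
  pullRequests: number;
  repos: string[];
}

function buildFallbackNarration(data: WeeklyData) {
  const repoList = data.repos.join(', ') || 'no repositories';
  return {
    blog: {
      headline: `Week of ${data.weekStart}: ${data.commits} commits across ${data.repos.length} repos`,
      body:
        `This week I pushed ${data.commits} commits and opened ` +
        `${data.pullRequests} pull requests, mostly in ${repoList}. ` +
        `(Auto-generated fallback: the narrator model was unavailable.)`,
      tags: ['github', 'weekly'],
    },
  };
}
```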
The publish step is where things get interesting. It doesn't just dump text into Notion — it builds a planner-style page with structured tables:
```typescript
const publishStep = createStep({
  id: 'publish',
  execute: async ({ inputData }) => {
    const { blog, weeklyData } = inputData;

    // 1. Create the Notion page (title derived from the blog headline)
    const createResult = await createNotionPage(title);

    // 2. Create the DEV.to draft, so the link goes into the Notion planner
    const devtoResult = await createDevtoArticle({
      title: blog.headline,
      body_markdown: buildDevtoMarkdown(blog),
      tags: blog.tags,
      published: false,
    });

    // 3. Write the planner markdown to Notion (includes the DEV.to link)
    const plannerMd = buildPlannerMarkdown(weeklyData, blog, links);
    await writeNotionMarkdown(notionPageId, plannerMd);
  },
});
```
The order matters: DEV.to draft gets created before writing the Notion page content, so the Notion planner can include a link to the DEV.to draft. Cross-platform linking, done right.
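A rough sketch of what `buildPlannerMarkdown` could look like (the parameter shapes are illustrative; DevNotion's real planner has more sections and stats):

```typescript
// Hypothetical planner builder: stats table plus the DEV.to draft link,
// emitted as one markdown string for the single Notion write.
function buildPlannerMarkdown(
  weekly: { commits: number; pullRequests: number; issues: number },
  blog: { headline: string; body: string },
  links: { devtoDraftUrl: string },
): string {
  return [
    `# ${blog.headline}`,
    '',
    '| Metric | Count |',
    '|---|---|',
    `| Commits | ${weekly.commits} |`,
    `| Pull requests | ${weekly.pullRequests} |`,
    `| Issues | ${weekly.issues} |`,
    '',
    `> Draft on DEV.to: ${links.devtoDraftUrl}`,
    '',
    blog.body,
  ].join('\n');
}
```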
Each Notion page includes a structured weekly stats table, the full blog post, and a link to the DEV.to draft.
This is the part I'm most excited about. DevNotion uses the Notion MCP Server in two complementary ways:
The publisher agent integrates with the official @notionhq/notion-mcp-server through Mastra's MCP client (`@mastra/mcp`). This gives the agent access to the full Notion API surface via the Model Context Protocol:
```typescript
import { MCPClient } from '@mastra/mcp';

export const notionMcp = new MCPClient({
  servers: {
    notion: {
      command: 'npx',
      args: ['-y', '@notionhq/notion-mcp-server'],
      env: {
        OPENAPI_MCP_HEADERS: JSON.stringify({
          Authorization: `Bearer ${env.NOTION_TOKEN}`,
          'Notion-Version': '2022-06-28',
        }),
      },
    },
  },
  timeout: 30000,
});
```
The MCP tools are loaded lazily with a graceful fallback — if the MCP server fails to start, the direct tools still work independently:
```typescript
export async function getNotionMcpTools(): Promise<Record<string, any>> {
  try {
    return await notionMcp.listTools();
  } catch (err) {
    console.warn('MCP: Notion MCP server unavailable, using direct tools only');
    return {};
  }
}
```
The publisher agent merges both tool sets — MCP tools for the full Notion API surface, and direct tools for capabilities MCP doesn't cover:
```typescript
// Direct tools (Markdown Content API + DEV.to — not available via MCP)
const directTools = {
  createNotionPage: createNotionPageTool,
  writeMarkdown: writeMarkdownTool,
  searchNotion: searchNotionTool,
  updateNotionPage: updateNotionPageTool,
};

// Merge: Notion MCP tools + direct tools (direct tools win on name clashes)
const mcpTools = await getNotionMcpTools();
const tools = { ...mcpTools, ...directTools };
```
This dual approach means the publisher agent gets the best of both worlds — MCP's broad API surface for interactive use in the Mastra playground, plus direct tools for the automated workflow.
This is the Notion feature that made the planner-style pages possible. Instead of constructing Notion blocks one by one (which is painful and rate-limit-heavy), I write the entire page as markdown in one API call:
```typescript
const response = await fetch(
  `https://api.notion.com/v1/pages/${pageId}/markdown`,
  {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${env.NOTION_TOKEN}`,
      'Content-Type': 'application/json',
      'Notion-Version': '2026-03-11',
    },
    body: JSON.stringify({
      type: 'replace_content',
      replace_content: { new_str: markdown },
    }),
  },
);
```
One PATCH request replaces the entire page content with rich markdown — including tables, headings, blockquotes, links, code blocks, everything. This is what powers the planner-style layout with structured stats tables + the full blog post, all in a single API call.
Notion's API allows roughly 3 requests per second. Every Notion call (MCP and direct) goes through a shared rate limiter:
```typescript
const queue = new PQueue({ concurrency: 1, interval: 334, intervalCap: 1 });

async function rateLimited<T>(fn: () => Promise<T>): Promise<T> {
  return queue.add(() => pRetry(fn, { retries: 3 })) as Promise<T>;
}
```
p-queue throttles concurrency, p-retry handles transient failures. I learned this the hard way — without rate limiting, the Notion API will 429 you into oblivion when you're creating a page, writing markdown, and updating the icon in quick succession.
Notion (3 req/s), DEV.to (30 req/30s), GitHub GraphQL (5000 points/hr) — every API has its own throttle. I ended up with p-queue + p-retry wrappers around everything. The rate limiter code is almost identical across all three services, and honestly, it should probably be a shared utility. But three small copies of similar code beat a premature abstraction.
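If that shared utility ever materializes, it could be as small as a factory. A dependency-free sketch of the idea (the real code uses p-queue and p-retry, which also add retries and burst handling):

```typescript
// Minimal serialized throttle: each call waits intervalMs after the
// previous one finishes. One factory, one instance per service.
function makeRateLimiter(intervalMs: number) {
  let chain: Promise<unknown> = Promise.resolve();
  return function rateLimited<T>(fn: () => Promise<T>): Promise<T> {
    const run = chain
      .then(() => new Promise((resolve) => setTimeout(resolve, intervalMs)))
      .then(fn);
    chain = run.catch(() => undefined); // keep the chain alive on failure
    return run;
  };
}

const notionLimited = makeRateLimiter(334); // ~3 req/s
const devtoLimited = makeRateLimiter(1000); // 30 req / 30 s
```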
I originally used Gemini's native JSON schema for structured output (`agent.generate(prompt, { structuredOutput: { schema } })`). It worked, but added 20-40 seconds per call. Switching to plain text generation with YAML frontmatter parsing was 3-4x faster and just as reliable. The deterministic fallback catches the rare parsing failure.
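Frontmatter parsing doesn't need much machinery, either. A minimal sketch of a `parseFrontmatter`-style function (the real one presumably validates the parsed fields with Zod; the field names here are illustrative):

```typescript
// Split "---\n<yaml>\n---\n<body>" and pull out the fields we need.
// Returns a success/failure discriminated union, like a Zod safeParse.
function parseFrontmatter(text: string):
  | { success: true; data: { blog: { headline: string; body: string } } }
  | { success: false } {
  const match = text.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return { success: false };
  const headline = /headline:\s*(.+)/.exec(match[1])?.[1]?.trim();
  if (!headline) return { success: false };
  return {
    success: true,
    data: { blog: { headline, body: match[2].trim() } },
  };
}
```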
I've been through three Gemini models on this project: `gemini-2.5-flash-preview-04-17` (retired), `gemini-2.5-flash` (stable but slow for structured output), and now `gemini-3-flash-preview` (current). The lesson: always make the model configurable via env vars. Hardcoding model IDs is a recipe for broken deploys.
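The fix is one line. `GEMINI_MODEL` is an assumed variable name here, with a stable model as the default:

```typescript
// Env-driven model selection: a retired model ID becomes a one-line
// env change instead of a redeploy.
const MODEL_ID = process.env.GEMINI_MODEL ?? 'gemini-2.5-flash';
```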
Mastra and my code both depend on Zod, but different versions. Having two Zod instances means `z.string()` from one isn't recognized by the other — schema validation just silently fails. The fix: a single line in `package.json`:
```json
{
  "pnpm": {
    "overrides": {
      "zod": "$zod"
    }
  }
}
```
Forces pnpm to deduplicate to one Zod version. Took me way too long to figure that out.
The harvest and publish steps started as full agent calls. But an LLM doesn't add anything when the task is "call this GraphQL endpoint and return the result." Switching to direct function calls made the pipeline faster, cheaper, and more predictable. Only use an LLM where you need creativity or reasoning — everywhere else, just write a function.
Built with Mastra, Gemini, Notion API, and a lot of coffee. If you've ever forgotten what you worked on last week, give DevNotion a try.