Michal Kraus
For the past few months, I've been running a personal AI assistant on a $5 VPS. Not a chatbot — an actual assistant that manages my calendar, triages my email, controls my Spotify, sends me proactive reminders, and remembers my preferences over time.
Today I'm open-sourcing it. It's called Rook.
GitHub: github.com/barman1985/Rook
Every AI assistant I tried fell into one of two extremes, and I wanted something in between: an AI that lives in Telegram (zero onboarding, no new app to install), actually executes tasks via tool use, and runs on a single VPS I already had lying around.
Rook has 5 layers with strict dependency direction — each layer only depends on the layer below it:
┌─────────────────────────────────┐
│ Transport layer │ Telegram, MCP, CLI
├─────────────────────────────────┤
│ Router / Orchestrator │ Intent → model → agentic loop
├─────────────────────────────────┤
│ Skill layer (pluggable) │ Calendar, Email, Spotify, ...
│ ┌──────┐ ┌──────┐ ┌─────────┐ │
│ │built │ │built │ │community│ │ Drop a .py, done.
│ │ -in  │ │ -in  │ │ plugin  │ │
│ └──────┘ └──────┘ └─────────┘ │
├─────────────────────────────────┤
│ Event bus │ on "calendar.reminder"→notify
├─────────────────────────────────┤
│ Core services │ Config, DB, Memory, LLM client
└─────────────────────────────────┘
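To make the event-bus layer concrete, here is a minimal publish/subscribe sketch. The `EventBus` class and its `on`/`emit` methods are my own illustration of the pattern, not Rook's actual API:

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal pub/sub: skills publish named events, services subscribe."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[..., None]]] = defaultdict(list)

    def on(self, event: str, handler: Callable[..., None]) -> None:
        """Register a handler for a named event."""
        self._handlers[event].append(handler)

    def emit(self, event: str, **payload: Any) -> None:
        """Deliver the payload to every handler subscribed to this event."""
        for handler in self._handlers[event]:
            handler(**payload)

bus = EventBus()
received = []
bus.on("calendar.reminder", lambda **p: received.append(p))
bus.emit("calendar.reminder", title="Dentist", when="15:00")
```

The point of the indirection is that the emitter never imports the subscriber, which is what keeps the layers below the skill layer free of upward dependencies.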
**Why this matters:**

- A skill emits `calendar.reminder` without knowing or caring who's listening. The notification service picks it up and sends a Telegram message.
- No module reads `.env` directly. No module opens its own SQLite connection. Everything flows through Core.

This is what I think makes Rook actually useful for others. Adding a new integration is one Python file:
```python
# rook/skills/community/weather.py
import httpx

from rook.skills.base import Skill, tool

class WeatherSkill(Skill):
    name = "weather"
    description = "Get weather forecasts"

    @tool("get_weather", "Get current weather for a city")
    def get_weather(self, city: str) -> str:
        return httpx.get(f"https://wttr.in/{city}?format=3").text

skill = WeatherSkill()
```
That's it. The @tool decorator registers it with the LLM. Type hints are auto-inferred into JSON schema. Drop the file in skills/community/, restart Rook, and the LLM can now call get_weather.
No core changes. No PR needed. No registration boilerplate.
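The type-hint-to-schema step behind the `@tool` decorator can be approximated in a few lines. This is a sketch of the general technique using `inspect`, not Rook's exact code; the `tool_schema` helper and the annotation mapping are my own:

```python
import inspect

# Hypothetical mapping from Python annotations to JSON-schema types.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn, description: str) -> dict:
    """Build an LLM function-calling schema from a function's type hints."""
    sig = inspect.signature(fn)
    props = {
        name: {"type": PY_TO_JSON[param.annotation]}
        for name, param in sig.parameters.items()
        if name != "self"
    }
    return {
        "name": fn.__name__,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

def get_weather(city: str) -> str: ...

schema = tool_schema(get_weather, "Get current weather for a city")
```

Because the schema is derived from the signature, a community skill never has to describe its parameters twice.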
Most AI assistants either forget everything between sessions or dump everything into a flat database. Rook's memory is inspired by the ACT-R cognitive architecture from psychology.
Every memory has an activation score based on how recently and how often it has been accessed.
When the LLM needs context, only the most activated memories get injected into the system prompt. Frequently used memories stay sharp. Unused ones naturally fade. Just like your brain.
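An ACT-R-style base-level activation score can be sketched in a few lines. The decay constant 0.5 is the textbook ACT-R default; the function name and the age-based interface are my own illustration, not Rook's implementation:

```python
import math

def activation(access_ages: list[float], decay: float = 0.5) -> float:
    """ACT-R base-level learning: ln(sum over past accesses of age^-decay).

    access_ages holds, for each past use of the memory, the seconds
    elapsed since that use. Recent, frequent use raises the score.
    """
    return math.log(sum(age ** -decay for age in access_ages))

# A memory used often and recently outscores one used once, long ago.
fresh = activation([60, 600, 3600])   # used three times in the last hour
stale = activation([86400 * 30])      # used once, a month ago
```

Ranking memories by this score and injecting only the top few is what produces the "sharp vs. faded" behavior described above.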
Rook processes voice messages locally: both speech recognition and speech synthesis run on the VPS. No cloud API calls, no per-minute billing, completely private.
```shell
git clone https://github.com/barman1985/Rook.git
cd Rook && python -m venv venv && source venv/bin/activate
python -m rook.setup   # interactive wizard guides you through everything
python -m rook.main
```
The setup wizard asks for your API keys step by step, auto-detects available integrations, and generates .env. Docker is also supported.
What you need: a small VPS (the $5 tier is enough), a Telegram bot token, and API keys for the LLM and any integrations you enable.
If you've ever wanted an AI assistant that actually does things instead of just chatting, give Rook a try. Star the repo if it looks useful, and I'd love feedback on the architecture.
GitHub: github.com/barman1985/Rook
Support: Buy me a coffee
♜ Rook — your strategic advantage.