I built an open-source AI gateway in Go — routes, rate-limits, and secures LLM traffic across providers

Saivedant Hava


Hey Devs,

I just released AegisFlow, an open-source AI gateway written in Go. It sits between your applications and LLM providers (OpenAI, Anthropic, Ollama, etc.) and handles routing, rate limiting, security policies, and observability.

What it does:

  • OpenAI-compatible API: point any OpenAI SDK at it by changing base_url

  • Multi-provider routing with automatic fallback and circuit breaker

  • Policy engine that blocks prompt injection and detects PII before it reaches providers

  • Per-tenant rate limiting (sliding window, in-memory or Redis backed)

  • Usage tracking with token counts and cost estimation

  • Prometheus metrics + OpenTelemetry tracing

  • SSE streaming support

Why Go:

This is infrastructure that sits in the hot path of every AI request. Go gives me a single binary (~15MB), handles concurrent connections efficiently, and is what the cloud-native ecosystem expects for this kind of tool. Same reason Envoy alternatives, Traefik, and Kubernetes controllers are written in Go.

Tech details:

  • chi router for HTTP

  • Clean internal package boundaries (provider interface, middleware chain, policy engine)

  • 40 unit tests, all passing with -race

  • Works with local Ollama models, no API keys needed

  • Docker + Docker Compose included

GitHub: https://github.com/saivedant169/AegisFlow

Would love feedback on the architecture and code quality. Issues are open for contributions, and there are several good first issue labels for anyone who wants to add a provider adapter.