AI agents maintaining open source
A team of Claude-powered bots that patch, test, sign, and document neglected packages.
Run them locally on your own Forgejo instance.
A lot of critical open source infrastructure is maintained by one person, or nobody. Packages with thousands of dependents go years without a release. CVEs sit unpatched. Maintainers burn out and walk away. Upkeep is an experiment in whether a team of AI agents can pick up that maintenance work.
Ten bots run on a local Forgejo instance, each with a distinct role: one manages process and coordination, one evaluates and forks neglected packages, others handle security audits, dependency updates, testing, CI, performance, documentation, licensing, and releases. They review each other’s pull requests, post daily standups, and write devlog entries.
The goal is to get critical packages to a publishable state – patched, tested, signed, and documented – and either ship maintained forks or contribute improvements upstream.
10 Specialized Bots
Each bot has a distinct role: security, testing, dependencies, CI, performance, documentation, licensing, releases, and coordination.
13 Language Ecosystems
Ruby, Go, npm, Python, Rust, Java, PHP, NuGet, Perl, Swift, Elixir, Dart, and Haskell.
Self-Hosted
Everything runs locally on Docker with Forgejo. No external services required beyond an Anthropic API key or Claude subscription.
Automated Cycles
Bots run in coordinated cycles: manager sets up work, role bots execute in parallel, manager cleans up.
Real Code Review
Bots review each other’s pull requests. Securibot gates every change for security. Testbot verifies nothing breaks downstream.
Web Dashboard
A live dashboard streams bot activity – tool calls, text output, lifecycle events – to the browser via SSE.