Autonomi: A Framework for Autonomous AI Systems
Introducing Autonomi, the parent framework that unifies Loki Mode, LokiMCPUniverse, and the broader ecosystem of autonomous AI tools I have been building.
For the past year, I have been building autonomous AI tools as individual projects. Loki Mode for multi-agent orchestration. LokiMCPUniverse for MCP server infrastructure. FireLater for incident management. Next Portal for internal developer platforms. K9s GUI for Kubernetes accessibility.
These projects share architectural principles, design patterns, and a common vision of how AI systems should work. But they have existed as separate repositories with separate identities. It is time to unify them under a parent framework that makes the connections explicit.
Autonomi is that framework.
Why a Parent Framework
The individual projects are useful on their own, but they are more useful together. Loki Mode's agents use MCP servers from LokiMCPUniverse to interact with external services. Next Portal exposes its capabilities through MCP, making them available to Loki Mode agents. FireLater's incident management integrates with Loki Mode's DevOps swarm for automated incident response.
Without a unifying framework, users discover these connections by accident. They adopt Loki Mode, learn that it works better with LokiMCPUniverse, discover that Next Portal can be orchestrated through the same agent system, and piece together the architecture on their own.
Autonomi makes the architecture explicit. It defines the interfaces between projects, documents the integration patterns, and provides a coherent mental model for how the pieces fit together.
The Autonomi Architecture
Autonomi is organized around three layers:
Layer 1: Foundation (MCP Infrastructure)
LokiMCPUniverse sits at the foundation layer. It provides the MCP servers that give AI agents access to external services: GitHub, Slack, databases, cloud providers, monitoring systems, CI/CD pipelines.
This layer is about connectivity. An agent system is only as useful as the services it can interact with. The foundation layer provides over 25 enterprise-grade MCP servers that handle authentication, rate limiting, error handling, and response normalization.
Foundation Layer:
GitHub MCP Server
Slack MCP Server
PostgreSQL MCP Server
AWS MCP Server
Kubernetes MCP Server
Datadog MCP Server
... (25+ servers)
Layer 2: Orchestration (Agent Systems)
Loki Mode sits at the orchestration layer. It coordinates agents, manages the RARV cycle, enforces quality gates, and routes tasks to the appropriate swarms.
This layer is about structure. Raw AI capability without structure produces inconsistent results. The orchestration layer provides the discipline that makes autonomous execution reliable: planning before acting, reviewing before merging, testing before declaring success.
Orchestration Layer:
Planning Swarm (6 agents)
Implementation Swarm (7 agents)
Review Swarm (6 agents)
Testing Swarm (5 agents)
Documentation Swarm (4 agents)
DevOps Swarm (5 agents)
Debug Swarm (4 agents)
Research Swarm (4 agents)
Layer 3: Applications (Domain Solutions)
FireLater, Next Portal, K9s GUI, and future application-layer projects sit at the top. These are domain-specific solutions that leverage the orchestration and foundation layers to solve concrete problems.
This layer is about value delivery. The foundation and orchestration layers are infrastructure. The application layer is where that infrastructure translates into solutions that people use.
Application Layer:
FireLater (Incident Management)
Next Portal (Internal Developer Platform)
K9s GUI (Kubernetes Interface)
MediCompanion (Healthcare AI) [coming soon]
Design Principles
Autonomi codifies the design principles that all projects in the ecosystem share.
Open Source by Default
Every project in Autonomi is open source. This is not a marketing decision; it is a trust decision. Autonomous AI systems modify code, interact with production infrastructure, and make decisions without human approval at each step. Users need to be able to read every line of logic to trust these systems. Open source is the minimum viable trust level for autonomous AI.
Shell-Based Orchestration
The orchestration layer is built on shell scripts, not Python or TypeScript frameworks. Shell is the natural language for orchestrating command-line AI tools. It has zero dependencies, runs everywhere, and is transparent. You can read a shell script and know exactly what it does.
This is a deliberate counterpoint to the trend of building agent frameworks in high-level languages with complex dependency trees. When your agent framework requires pip install with 47 transitive dependencies, you have introduced 47 potential points of failure. When your agent framework is 2,000 lines of bash, the only dependency is bash.
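To make the shell-first philosophy concrete, here is a minimal sketch of what an orchestration loop in this style looks like. This is not Loki Mode's actual code; the agent_* functions are stand-ins for real command-line AI invocations, and "RARV" is interpreted here as a plan, act, review, verify loop, following the discipline described above.

```shell
#!/usr/bin/env bash
# Minimal sketch of a RARV-style loop (plan, act, review, verify).
# The agent_* functions are hypothetical stubs for real CLI AI calls.
set -euo pipefail

agent_plan()   { echo "plan: split task into steps"; }
agent_act()    { echo "act: apply changes"; }
agent_review() { echo "review: approved"; }
agent_verify() { echo "verify: tests pass"; }

run_rarv_cycle() {
  local task="$1" attempt
  for attempt in 1 2 3; do          # bounded retries keep the loop safe
    agent_plan "$task"
    agent_act "$task"
    agent_review "$task"
    if agent_verify "$task" | grep -q "pass"; then
      echo "cycle complete for: $task"
      return 0
    fi
  done
  echo "cycle failed for: $task" >&2
  return 1
}

run_rarv_cycle "add retry logic to uploader"
```

The whole control flow fits on one screen, which is the point: anyone auditing the system can read the loop, the retry bound, and the exit conditions without chasing a dependency tree.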
Provider Agnostic
No project in Autonomi is locked to a specific AI provider. Loki Mode runs on Claude, Codex, or Gemini. MCP servers work with any MCP client. The applications integrate with AI through standard protocols, not provider-specific SDKs.
Provider lock-in is a strategic risk for any AI system. Models improve at different rates, pricing changes unpredictably, and availability is not guaranteed. Provider agnosticism is risk management.
Quality Gates Over Trust
Autonomi projects do not trust AI output by default. Every significant output passes through quality gates: automated tests, code review, security scans, coverage checks. Trust is earned through verification, not assumed.
This principle is based on practical experience. AI models produce impressive output most of the time, and subtly wrong output some of the time. Quality gates catch the subtle failures that human review might miss, especially at the scale and speed of autonomous execution.
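A quality-gate runner can be sketched in a few lines of shell. The gate_* functions below are placeholders for real commands (a test suite, a linter, a security scanner, a coverage check); the structure that matters is the fail-fast loop: one failed gate blocks the merge.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a fail-fast quality-gate runner.
# Each gate_* function stands in for a real command.

gate_tests()    { return 0; }   # e.g. a unit test suite
gate_lint()     { return 0; }   # e.g. a style or static-analysis check
gate_security() { return 0; }   # e.g. a dependency audit
gate_coverage() { return 0; }   # e.g. a minimum-coverage threshold check

run_gates() {
  local gate
  for gate in gate_tests gate_lint gate_security gate_coverage; do
    if "$gate"; then
      echo "PASS $gate"
    else
      echo "FAIL $gate" >&2
      return 1                  # fail fast: one failed gate blocks the merge
    fi
  done
  echo "all gates passed"
}

run_gates
```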
The Integration Model
Projects within Autonomi integrate through well-defined interfaces, primarily MCP.
Next Portal exposes its workflows as MCP tools. Loki Mode agents can trigger deployments, check service health, and manage the service catalog through MCP calls. The Loki Mode agent does not need to know how Next Portal works internally; it only needs to know the MCP tool interface.
FireLater exposes its incident management as MCP tools. When Loki Mode's monitoring agent detects an issue, it can create a FireLater incident, assign responders, and log timeline entries, all through MCP.
K9s GUI's Kubernetes interactions are backed by the same Kubernetes MCP server that Loki Mode agents use. The same tool interface, different consumers.
This integration model is intentionally loose. Projects can be used independently. Next Portal works without Loki Mode. FireLater works without the MCP servers. But when used together, the integrations create capabilities that none of the individual projects provide alone.
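The loose coupling above can be sketched as a single generic wrapper. The JSON-RPC shape follows the MCP tools/call convention, but the transport stub and the tool name below are illustrative, not actual FireLater or Loki Mode identifiers.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of loose MCP coupling: the caller knows only the
# tool name and its arguments, never the serving project's internals.

send_to_server() { cat; }   # stub transport; a real client writes to the server's stdin

mcp_call() {
  local tool="$1" args_json="$2"
  # MCP tool calls are JSON-RPC 2.0 requests; exact fields may vary by spec version.
  printf '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"%s","arguments":%s}}\n' \
    "$tool" "$args_json" | send_to_server
}

# An agent opening an incident needs only the tool interface
# (the tool name here is illustrative):
mcp_call "firelater_create_incident" '{"title":"smoke tests failing","severity":"high"}'
```

Because every consumer goes through the same narrow interface, swapping out a server implementation never ripples into the callers.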
The Autonomi Roadmap
The framework is functional today, but the vision extends further.
Unified CLI. A single command-line interface that spans all Autonomi projects. autonomi deploy triggers Next Portal workflows. autonomi incident manages FireLater incidents. autonomi agent invokes Loki Mode tasks. One CLI to access the entire ecosystem.
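A dispatcher for such a CLI could be as simple as a case statement. This is a speculative sketch of the routing shape, not the planned implementation; the handler bodies are stubs for the real project entry points.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the unified CLI dispatcher: one entry point
# routing each subcommand to the project that owns it.

autonomi() {
  case "${1:-}" in
    deploy)   echo "routing to Next Portal deployment workflow" ;;
    incident) echo "routing to FireLater incident management" ;;
    agent)    echo "routing to Loki Mode task orchestration" ;;
    *)        echo "usage: autonomi {deploy|incident|agent}" >&2; return 1 ;;
  esac
}

autonomi deploy
autonomi incident
```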
Shared configuration. Common configuration for authentication, provider selection, and environment settings across all projects. Configure your GitHub token once, and every project in the ecosystem uses it.
Cross-project workflows. Orchestrated workflows that span multiple projects. "Deploy the service, run the smoke tests, and if they fail, create an incident and assign the on-call engineer" as a single workflow that coordinates Next Portal, Loki Mode, and FireLater.
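That deploy-test-escalate workflow can be sketched directly in shell. Each function below is a hypothetical stand-in for a real Next Portal, Loki Mode, or FireLater call; the failing smoke test is hard-coded to show the escalation path.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a cross-project workflow: deploy, smoke-test,
# and open an incident on failure. All three steps are stubs.

deploy_service()  { echo "deployed $1"; }          # stand-in for Next Portal
run_smoke_tests() { return 1; }                    # simulated failure (Loki Mode)
create_incident() { echo "incident created: $1"; } # stand-in for FireLater

release() {
  local service="$1"
  deploy_service "$service"
  if run_smoke_tests "$service"; then
    echo "release of $service succeeded"
  else
    create_incident "smoke tests failed for $service"
    echo "on-call engineer assigned"
  fi
}

release "payments-api"
```

The value of the framework is that each step crosses a project boundary, yet the workflow reads as one linear script.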
Community contributions. The framework is designed to accommodate third-party projects that share Autonomi's design principles. If someone builds a tool that is open source, provider agnostic, MCP-integrated, and quality-gate enforced, it can join the Autonomi ecosystem.
Why This Matters
The AI tooling landscape is fragmenting. Hundreds of agent frameworks, thousands of tools, millions of possible configurations. Users face choice overload, and the tools they adopt often do not work well together because they were not designed to.
Autonomi is an opinionated answer to this fragmentation. It says: here is a coherent set of tools, built on shared principles, designed to work together, that covers the core workflows of AI-assisted software engineering.
It does not try to be everything. It does not try to cover every use case. It provides a solid foundation for autonomous AI development and leaves room for users to extend it in directions that serve their specific needs.
The future of AI-assisted software engineering is not a single tool. It is an ecosystem of specialized tools that work together seamlessly. Autonomi is my attempt at building that ecosystem, one project at a time, with each piece reinforcing the others.
The code is open. The architecture is documented. The integrations work. If you are building autonomous AI systems, I invite you to look at what Autonomi provides and decide if it fits your needs.