The asklokesh Ecosystem: How All My Projects Connect
A walkthrough of every public project I maintain and how they form a connected system for AI-powered software engineering
When I look at my GitHub profile today, I see something I did not plan explicitly but that emerged organically over the past year: a connected ecosystem of projects that, together, form a complete platform for AI-powered software engineering. Each project solves a specific problem. Together, they create something larger than any individual component.
I want to walk through the ecosystem, explain how the pieces connect, and share the thinking behind the architecture. This is partly for people who have discovered one project and want to understand the broader context, and partly for my own documentation of where things stand.
The Core: Loki Mode
Loki Mode is the orchestration engine at the center of everything. It is a multi-agent autonomous system that coordinates specialized AI agents through the RARV cycle: Reason, Act, Reflect, Verify. Every significant task flows through these four phases, handled by different agents with different responsibilities.
The system defines 41 agent types organized into 8 swarms, covering planning, implementation, review, testing, documentation, DevOps, debugging, and research. It is built on shell scripts, making it zero-dependency and portable to any environment that can run a Unix shell.
Loki Mode is provider-agnostic. It works with Claude Code, Codex CLI, and Gemini CLI through a configuration layer that abstracts the differences between providers. Switch a single environment variable and the same orchestration logic runs against a different AI backend.
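To make that concrete, here is a minimal sketch of what a provider-abstraction layer can look like. The variable name `LOKI_PROVIDER` and the exact CLI flags are my illustrative assumptions here, not Loki Mode's actual configuration:

```python
import os
import subprocess

# Hypothetical provider table: the env var name and CLI flags are
# assumptions for illustration, not Loki Mode's real config layer.
PROVIDERS = {
    "claude": ["claude", "-p"],
    "codex": ["codex", "exec"],
    "gemini": ["gemini", "-p"],
}

def select_provider(env=os.environ):
    """Pick the backend command from a single environment variable."""
    return PROVIDERS[env.get("LOKI_PROVIDER", "claude")]

def run_agent_task(prompt: str) -> str:
    """Run one prompt against whichever provider is currently selected."""
    result = subprocess.run(select_provider() + [prompt],
                            capture_output=True, text=True)
    return result.stdout
```

The orchestration logic only ever calls `run_agent_task`; switching backends is a one-line environment change, which is the whole point of the abstraction.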
This is the project that started everything. I built it because I was frustrated with single-agent approaches that lacked verification and quality gates. The problems it solved led directly to the projects that followed.
The Integration Layer: LokiMCPUniverse
If Loki Mode is the brain, LokiMCPUniverse is the nervous system. It is a collection of enterprise-grade MCP servers that give agents access to real external systems: version control, communication platforms, databases, CI/CD pipelines, monitoring, and more.
The Model Context Protocol is the standard that makes agent-to-service communication work. Each MCP server in the collection exposes a specific service's capabilities through a standardized interface. An agent that needs to create a GitHub pull request, query a database, or post a message does so through the same protocol, regardless of which underlying service is involved.
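A toy sketch captures the core idea, though this is not the real MCP SDK and the tool names are hypothetical: every service is reached through the same call shape, so agents need no service-specific client code.

```python
# Toy illustration of the idea behind MCP, not the actual protocol:
# each server maps tool names to capabilities behind one uniform entry point.
class ToyMCPServer:
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools  # tool name -> callable

    def call(self, tool, **args):
        """Uniform entry point, regardless of the underlying service."""
        return self.tools[tool](**args)

# Hypothetical tool names for two very different services.
github = ToyMCPServer("github", {
    "create_pr": lambda title, branch: f"opened PR '{title}' from {branch}",
})
slack = ToyMCPServer("slack", {
    "post_message": lambda channel, text: f"posted to #{channel}: {text}",
})
```

An agent invokes `github.call(...)` and `slack.call(...)` identically; only the tool name and arguments differ, which is what lets the orchestration layer stay ignorant of each service's API.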
Building these servers taught me something important about agent infrastructure: the boring problems are the hard problems. Authentication handling, rate limiting, error recovery, and connection pooling are not exciting features, but they are what separates a demo MCP server from one you can run in production.
The connection to Loki Mode is direct. When a Loki Mode agent needs to interact with an external system, it does so through an MCP server from this collection. The orchestration layer does not need to know the details of each service's API; it communicates through MCP, and the server handles the translation.
The Skills Layer: Skill Projects
I maintain a collection of skill modules for Claude Code, each extending it for a specific workflow. These include:
loki-mode: The skill module that brings Loki Mode's agent orchestration directly into Claude Code sessions. Rather than running Loki Mode as a standalone system, this skill lets you invoke multi-agent workflows from within an active Claude Code session, blending autonomous agent capabilities with interactive development.
Other skill modules cover specific domains: code review patterns, infrastructure-as-code generation, documentation workflows, and testing strategies. Each skill is a focused capability extension that follows Claude Code's skill architecture.
The skills layer is where the ecosystem touches individual developer workflows most directly. You do not need to run the full Loki Mode system to benefit from the patterns it embodies. A single skill module can bring structured review, quality-gated testing, or provider-agnostic agent invocation to your existing Claude Code workflow.
The Knowledge Layer: Research Mode
Research Mode is the most recent addition to the ecosystem, and it fills a gap I encountered repeatedly: the need for systematic, structured research before making technical decisions.
When I am evaluating a new technology, assessing migration strategies, or analyzing architectural options, I need more than a quick web search. I need structured research that considers multiple dimensions: performance, security, maintainability, community health, enterprise readiness, and alignment with existing systems.
Research Mode applies the RARV cycle to research tasks. A planning agent decomposes the research question into specific inquiries. Research agents investigate each dimension independently. A synthesis agent combines findings into a coherent analysis. A verification agent checks claims against sources and flags unsupported assertions.
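The four phases above can be sketched as a pipeline. This is a hypothetical skeleton, not Research Mode's actual interfaces: the real agents are LLM-backed, so the bodies here are stubs and all names are illustrative.

```python
# Illustrative RARV-style research pipeline; dimensions and structure
# are assumptions for the sketch, not Research Mode's real internals.
DIMENSIONS = ["performance", "security", "maintainability", "community health"]

def plan(question):
    # Planning agent: decompose the question into dimension-specific inquiries.
    return [f"{question} ({d})" for d in DIMENSIONS]

def investigate(inquiry):
    # Research agent: investigate one dimension independently (stubbed).
    return {"inquiry": inquiry, "sources": []}

def synthesize(findings):
    # Synthesis agent: combine the findings into one analysis.
    return {"findings": findings}

def verify(report):
    # Verification agent: flag any finding with no supporting sources.
    report["unsupported"] = [f["inquiry"] for f in report["findings"]
                             if not f["sources"]]
    return report

def research(question):
    return verify(synthesize([investigate(i) for i in plan(question)]))
```

The shape is the point: each phase is a separate function with a narrow contract, so the verification step can flag unsupported claims without knowing anything about how the research was done.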
The output is a structured research report with confidence levels, source citations, and clear recommendations. It is not a replacement for human judgment, but it provides a foundation of organized information that makes judgment more effective.
Research Mode connects to the broader ecosystem through MCP. When researching a technology, agents can access GitHub to analyze repository health, read documentation through web MCP servers, and check dependency information through package registry integrations. The research is grounded in current, verifiable data rather than model knowledge alone.
The Platform: asklokesh.com
This site is the public face of the ecosystem. It is where I write about what I am building, what I am learning, and how I think about AI-powered software engineering. But it is also a demonstration of the tools I build.
The site itself was built using the development practices I advocate: AI-assisted development with human oversight, structured workflows, and quality verification. It is a Next.js application deployed on Vercel; nothing exotic, but the development process behind it reflects the ecosystem's principles.
Beyond the blog, asklokesh.com serves as a portfolio and documentation hub. Visitors can understand not just what I have built, but why, and how the pieces fit together. The site is the narrative layer that gives technical projects context and coherence.
How the Pieces Connect
The ecosystem is not a monolith. It is a set of independent projects that communicate through well-defined interfaces, primarily MCP and shell conventions.
Here is how a typical workflow flows through the system:
- A research question or engineering task enters through Claude Code or a direct Loki Mode invocation.
- Loki Mode's planning swarm decomposes the task and identifies which capabilities are needed.
- Implementation agents execute, using MCP servers to interact with external systems as needed.
- Review agents examine the work, using separate MCP connections to verify against source data.
- Verification agents run tests, check quality gates, and confirm that the work meets requirements.
- Results flow back through the system, and any learnings are captured for future reference.
Each step uses independent, replaceable components. You can swap MCP servers, change AI providers, modify quality gate thresholds, or add new agent types without rewriting the rest of the system. This modularity was a deliberate design choice, and it has proven its value every time a component needed to be updated or replaced.
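That modularity can be sketched in a few lines. This is an illustration of the design principle, not Loki Mode's actual code; the gate names and thresholds are hypothetical:

```python
# Stages are plain callables that can be swapped independently,
# and quality gates are data rather than code. All values illustrative.
QUALITY_GATES = {"test_coverage": 0.80, "lint_score": 0.95}

def failed_gates(metrics, gates=QUALITY_GATES):
    """Return the gates whose thresholds the work did not meet."""
    return [g for g, threshold in gates.items()
            if metrics.get(g, 0.0) < threshold]

def run_pipeline(task, stages):
    """Thread the task through replaceable stages in order."""
    result = task
    for stage in stages:
        result = stage(result)
    return result
```

Because stages are just a list and gates are just a dict, swapping an MCP server, changing a provider, or tightening a threshold is a data change, not a rewrite.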
What Ties It All Together
The common thread across every project in the ecosystem is a philosophy I have arrived at through experience: AI systems need structure to be reliable.
Unstructured agent interactions produce inconsistent results. Structured systems with clear phases, quality gates, and verification loops produce predictable, high-quality output. This principle applies whether you are orchestrating a multi-agent coding task, building an MCP server, developing a skill module, or conducting structured research.
Every project in the ecosystem embodies this philosophy. Loki Mode structures agent workflows. MCP servers structure agent-to-service communication. Skill modules structure capability extensions. Research Mode structures knowledge acquisition. The site itself structures how I communicate about all of it.
The Road Ahead
The ecosystem will continue to grow, but growth for its own sake is not the goal. Each new project will solve a specific problem that I encounter in my work. If no problem needs solving, no project gets built.
Some areas I am actively exploring include better cross-session memory for agents, more sophisticated quality metrics, and tighter integration between Research Mode and the implementation workflow. I am also interested in making the ecosystem more accessible to developers who want to adopt specific patterns without committing to the full system.
Everything remains open source. The entire ecosystem is available for anyone to use, modify, and build upon. That has been the policy from day one, and it is not changing.
Building in public means sharing not just the successes but the architecture, the decisions, and the thinking behind them. This post is part of that practice. If you have found any of these projects useful or have questions about how they connect, I am always interested in the conversation.