
2025: The Year I Went Full Loki Mode

A year-end reflection on leaving the corporate world, building open source full-time, and what it means to bet your career on autonomous AI systems

2025 is ending, and it has been the most consequential year of my career. I left a stable position at a major entertainment company, went full-time on open source, built an ecosystem of autonomous AI tools, enrolled in MIT's AI/ML program, and watched the industry catch up to the ideas I have been building toward.

This is the honest retrospective. The wins, the fears, the lessons, and what comes next.

The Decision to Leave

Leaving a stable, well-paying corporate role to build open source full-time is not a rational financial decision by conventional metrics. The salary stops. The benefits stop. The 401(k) match stops. The career ladder that you have been climbing for years suddenly does not exist.

I made the decision because I could see a gap between what was possible with autonomous AI systems and what was being built. The corporate environment, with its quarterly planning cycles, risk-averse decision making, and organizational overhead, was not the right context for the speed and ambition that this moment requires.

The AI agent landscape is moving fast enough that waiting for corporate approval processes to green-light experimental projects means falling behind. I needed the freedom to build, iterate, release, and respond to the market at a pace that corporate structures do not support.

That said, I want to be honest: the financial risk is real. Open source does not come with a paycheck. The path from "respected open source project" to "sustainable income" is not guaranteed and not fast. I have runway, but runway is finite.

What I Built

The output of 2025 is concrete and substantial.

Loki Mode grew from a concept to a globally adopted multi-agent orchestration system. 41 agents, 8 swarms, the RARV cycle, quality gates, provider-agnostic design across Claude, Codex, and Gemini. Enterprise features shipped: audit logging, scope controls, cost management, RBAC.

LokiMCPUniverse expanded to over 25 enterprise-grade MCP servers covering developer tools, cloud infrastructure, communication platforms, and enterprise services. These servers are the connective tissue between AI agents and the real world.

Autonomi emerged as the parent framework that unifies the ecosystem: Loki Mode for orchestration, LokiMCPUniverse for connectivity, and application-layer projects for domain-specific solutions.

FireLater provided an open source alternative to expensive ITSM platforms. Next Portal addressed the internal developer platform gap. K9s GUI made Kubernetes accessible beyond the terminal. MediCompanion brought careful AI assistance to healthcare.

Multi-cloud MCP created a unified interface for managing resources across AWS, GCP, and Azure through one protocol.

Looking at that list, I am proud of the breadth and the quality. Each project solves a real problem, is open source, and is designed to work with the others.

What I Learned About Open Source

Building open source full-time taught me things that part-time open source maintenance never did.

Adoption is nonlinear. Projects can sit dormant for weeks and then spike when someone influential shares them. The viral moments are unpredictable, and trying to engineer them is less effective than consistently building quality software.

Issues are a signal of adoption. When people file issues, they are using your software seriously enough to find and report problems. An empty issue tracker does not mean your software is perfect; it means nobody is using it. The projects with the most issues are the ones with the most engaged users.

Documentation is product. I used to treat documentation as a tax on engineering work. Now I understand that for open source projects, documentation is the product. Users cannot adopt what they cannot understand, and the time invested in clear documentation has a higher return than almost any feature work.

Community building is a full-time job. Responding to issues, reviewing pull requests, writing documentation, engaging on social media, giving talks: community building takes as much time as coding, and it is equally important for a project's success.

The sustainability question is real. Open source creators face a genuine tension between building the best possible software and building a sustainable income. These goals are not always aligned. The features that make software better for users are not always the features that generate revenue. I have not solved this tension, but I am thinking about it actively.

What I Learned About AI Agents

A full year of building autonomous AI systems has crystallized several beliefs.

Structure enables autonomy. This is the core thesis of Loki Mode, and it has been validated repeatedly. Agents without structure produce inconsistent results. Agents with structure (the RARV cycle, quality gates, defined roles) produce reliable results. More structure means more autonomy, not less.
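The gate idea can be sketched in a few lines. This is an illustrative toy, not the Loki Mode implementation; `run_cycle`, `phases`, and `gates` are names I am inventing for the sketch. The point is that each phase's output must pass a check before the next phase is allowed to run:

```python
def run_cycle(phases, gates, task):
    """Run each phase in order; its gate must pass before the next phase starts."""
    artifact = task
    for phase, gate in zip(phases, gates):
        artifact = phase(artifact)
        if not gate(artifact):
            # in a real system: retry, revise, or escalate to a human
            return None
    return artifact


# toy phases and gates standing in for research/architect/verify steps
phases = [str.upper, lambda s: s + "!"]
gates = [lambda a: a.isupper(), lambda a: a.endswith("!")]
```

An agent that cannot pass a gate never gets to touch the next stage, which is exactly how structure converts raw generation into something you can leave unattended.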

Verification is the hard problem. Generating code with AI is solved. Verifying that the code is correct, secure, performant, and maintainable is the actual engineering challenge. The teams and systems that invest in verification produce dramatically better outcomes.
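Even the cheapest verification layer catches real failures. A sketch, under the assumption that generated artifacts are Python source; `verify` and `syntax_ok` are illustrative names, not part of any shipped API:

```python
def syntax_ok(code: str) -> bool:
    # cheapest possible check: does the generated code even parse?
    try:
        compile(code, "<generated>", "exec")
        return True
    except SyntaxError:
        return False


def verify(code: str, checks: dict) -> dict:
    # run every check and report all results, not just the first failure,
    # so the agent (or a human) sees the full picture in one pass
    return {name: check(code) for name, check in checks.items()}
```

Real verification stacks layer linting, tests, security scans, and review on top of this, but the shape is the same: a battery of independent checks, all of which must pass.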

Provider agnosticism is a strategic necessity. The AI landscape changes too fast to bet on a single provider. Models improve at different rates, pricing shifts, new capabilities emerge from unexpected sources. Building systems that work across providers is risk management.

Safety is an engineering discipline. AI safety is not a theoretical concern; it is a practical engineering discipline with concrete techniques: scope controls, audit logging, quality gates, human-in-the-loop breakpoints, rollback capabilities. The builders who treat safety as engineering rather than philosophy build systems that earn trust.
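Two of those techniques, scope controls and audit logging, fit in a few lines. A sketch under assumed conventions (an allowlist of path prefixes, an in-memory log); `scope_allows`, `audited_write`, and `ALLOWED_SCOPES` are names invented for this example:

```python
import time
from pathlib import PurePosixPath

ALLOWED_SCOPES = ["src/", "tests/"]
audit_log = []


def scope_allows(path: str) -> bool:
    # reject paths outside the allowlist, and anything trying to escape via ".."
    p = PurePosixPath(path)
    if ".." in p.parts:
        return False
    return any(str(p).startswith(scope) for scope in ALLOWED_SCOPES)


def audited_write(path: str, content: str) -> int:
    # every attempt is logged, allowed or not, before any action is taken
    allowed = scope_allows(path)
    audit_log.append({"ts": time.time(), "action": "write",
                      "path": path, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"out of scope: {path}")
    # a real implementation would write to disk here
    return len(content)
```

The logging happens before the permission check raises, so denied attempts leave a trace too; that ordering is what makes the log useful for after-the-fact review.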

Open source is the only viable trust model for autonomous systems. If a system is going to modify your code, create pull requests, and interact with your infrastructure autonomously, you need to be able to read every line of its logic. Closed-source autonomous systems ask for a level of trust that is not reasonable.

What I Learned About Myself

A year of independent work reveals things about yourself that corporate employment obscures.

I learned that I am more productive without the overhead of corporate process. Meetings, status updates, sprint planning, quarterly reviews: these activities consumed a significant portion of my corporate working hours. Without them, I ship more.

I learned that I need external structure even when I do not have organizational structure. Working alone, it is easy to chase interesting problems at the expense of important ones. I have developed personal systems for prioritization and time management that replace the structure a team provides.

I learned that the creative freedom of independent work is intoxicating and dangerous. When you can build anything, deciding what to build becomes the hardest decision. Focus is a discipline, and I have gotten better at it over the year, but it remains a challenge.

I learned that I miss collaboration more than I expected. Building alone is efficient but lonely. The hallway conversations, the whiteboard sessions, the code reviews from colleagues who understand the context: these are valuable, and their absence is felt. The open source community provides some of this, but it is not the same as a co-located team.

The Financial Reality

I want to address the financial reality because it is relevant to anyone considering a similar path.

I had savings that provide runway. That runway is not infinite. Open source visibility has created consulting opportunities, speaking invitations, and partnership discussions. Some of these have generated income. None have replaced a full-time corporate salary.

The path to sustainability for open source creators typically involves some combination of consulting, sponsorships, enterprise licenses, and speaking. I am exploring all of these without committing fully to any single approach. The goal is to find a model that sustains the open source work without compromising the quality or openness that makes the work valuable.

I am not worried, but I am realistic. 2026 needs to include a sustainable income model alongside continued open source development.

What Comes Next

2026 is about three things: deepening the technology, building the community, and finding sustainability.

Technology. Mixed-provider workflows in Loki Mode, where different providers handle different phases of the RARV cycle based on their strengths. Agent interoperability through A2A, enabling Loki Mode agents to work with agents from other frameworks. Continued expansion of the MCP server ecosystem.

Community. Growing the contributor base for Autonomi projects. Building partnerships with organizations that share the vision of open, structured, safe autonomous AI systems. More writing, more speaking, more engagement with the broader AI engineering community.

Sustainability. Developing an income model that sustains full-time open source work. This is the pragmatic challenge that enables everything else.

2025 was the year I went full Loki Mode. I bet my career on the conviction that autonomous AI systems need structure, quality, and openness to reach their potential. The bet has not fully paid off yet, but the trajectory is right, and the work is the best I have ever done.

Here is to 2026. There is more to build.
