MediCompanion: Building AI for Healthcare That Earns Trust
MediCompanion is an open source AI health companion designed for patient education and chronic condition management, with safety as the foundational constraint.
Healthcare is the domain where AI could help the most people and where getting it wrong has the most serious consequences. I have been building MediCompanion, an open source AI health companion, and the engineering challenges are unlike anything I have encountered in other domains.
MediCompanion is not a diagnostic tool. It does not replace doctors. It is a patient education and chronic condition management companion that helps people understand their health conditions, manage their medications, and prepare for medical appointments. The distinction between "companion" and "advisor" is not semantic; it is the core design constraint that shapes every technical decision.
The Problem
Healthcare information is abundant but not accessible. A patient diagnosed with Type 2 diabetes can find thousands of pages of information online, but that information is often written for medical professionals, contradictory between sources, not personalized to their specific situation, or presented without context about what is urgent versus what is informational.
The result is that patients are simultaneously overwhelmed with information and under-informed about their specific condition. They do not know which symptoms to worry about, how their medications interact, what questions to ask their doctor, or how to interpret their lab results.
Healthcare providers are doing their best, but appointment times are short. Fifteen minutes is not enough time to explain a new diagnosis, prescribe medication, discuss lifestyle changes, and answer questions. Patients leave appointments with unasked questions and incomplete understanding.
MediCompanion fills the gap between appointments. It does not replace the doctor-patient relationship; it enhances it by helping patients arrive at appointments better informed and with better questions.
What MediCompanion Does
Health education. Given a diagnosis or condition, MediCompanion provides clear, evidence-based explanations at an appropriate reading level. It explains medical terminology, describes how conditions progress, and outlines common treatment approaches. All information is sourced from peer-reviewed literature and established medical guidelines.
Medication management. Patients can log their medications, and MediCompanion provides information about each medication's purpose, common side effects, timing considerations, and known interactions with other medications or foods. It does not prescribe medications or adjust dosages. It helps patients understand the medications their doctors have prescribed.
Appointment preparation. Before a medical appointment, MediCompanion helps patients organize their questions, summarize recent symptoms, and compile relevant information that their doctor needs. The output is a structured document that the patient can bring to their appointment.
Symptom journaling. Patients can log symptoms over time, and MediCompanion helps identify patterns. "Your headaches tend to occur on days when you report poor sleep" is the kind of pattern that is obvious in data but invisible in daily experience. This journaling is for patient awareness, not diagnosis.
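A pattern like the sleep-headache correlation can be surfaced with a simple co-occurrence check over journal entries. The sketch below is illustrative only: the entry shape and function name are invented for this example, not taken from MediCompanion's code.

```python
# Illustrative co-occurrence check over a symptom journal.
# Each entry is assumed to be a dict with a "symptoms" set for one day.

def cooccurrence_rate(entries: list[dict], symptom: str, factor: str) -> float:
    """Fraction of days reporting `symptom` that also report `factor`."""
    days_with_symptom = [e for e in entries if symptom in e["symptoms"]]
    if not days_with_symptom:
        return 0.0
    both = [e for e in days_with_symptom if factor in e["symptoms"]]
    return len(both) / len(days_with_symptom)
```

A rate well above the factor's base rate is the kind of signal the companion can surface to the patient, phrased as an observation rather than a conclusion.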
Lifestyle guidance. For conditions managed partly through lifestyle (diet, exercise, sleep, stress management), MediCompanion provides evidence-based guidance tailored to the patient's specific conditions and limitations.
Safety Architecture
Safety in healthcare AI is not a feature. It is the architecture. Every technical decision in MediCompanion flows from the principle that the system must not cause harm.
Scope Boundaries
MediCompanion has hard-coded scope boundaries that cannot be overridden:
- Never diagnose. The system does not tell users what condition they have. It provides information about conditions they report having been diagnosed with.
- Never prescribe. The system does not recommend medications or dosage changes. It provides information about medications prescribed by their doctor.
- Never contradict providers. If a user reports that their doctor recommended something, MediCompanion does not second-guess the recommendation. It may provide additional context or suggest follow-up questions.
- Always defer to professionals. Every response includes appropriate caveats about consulting healthcare providers for medical decisions.
These boundaries are enforced at the prompt engineering layer, the output filtering layer, and the review layer. Three layers of defense against scope creep.
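As one illustration of how the output-filtering layer might enforce these boundaries, the sketch below flags responses containing diagnostic or prescriptive language. The patterns and names are assumptions made for this example; the real enforcement is layered and considerably more thorough than pattern matching.

```python
import re

# Hypothetical scope filter: one of several layers, not the real implementation.
# Each pattern flags language that would cross a hard-coded boundary.
SCOPE_VIOLATIONS = {
    "diagnose": re.compile(r"\byou (likely |probably )?have\b", re.IGNORECASE),
    "prescribe": re.compile(r"\b(start|stop|increase|decrease) (your )?(dose|medication)\b", re.IGNORECASE),
}

def check_scope(response: str) -> list[str]:
    """Return the names of any boundaries the response appears to cross."""
    return [name for name, pat in SCOPE_VIOLATIONS.items() if pat.search(response)]
```

A non-empty result routes the response away from the user and toward the review layer described below, rather than attempting an automatic rewrite.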
Citation Requirements
Every medical claim in MediCompanion's responses must be traceable to a source. The system draws from:
- Established clinical guidelines (AHA, ADA, CDC, WHO)
- Peer-reviewed medical literature
- FDA-approved medication information
- Evidence-based patient education resources
Unsourced claims are filtered out. This is more conservative than necessary for a patient education tool, but conservatism is the right default in healthcare.
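A citation filter of this kind can be thought of as a traceability check: every sentence in a draft response must be attributable to at least one retrieved source passage. The word-overlap heuristic below is a deliberately simple stand-in for whatever matching the real system uses; the threshold and function names are assumptions.

```python
# Illustrative citation check (assumed design, not MediCompanion's actual code):
# a sentence is "supported" if enough of its words appear in some source passage.

def supported(claim: str, sources: list[str], threshold: float = 0.5) -> bool:
    claim_words = set(claim.lower().split())
    for src in sources:
        overlap = claim_words & set(src.lower().split())
        if claim_words and len(overlap) / len(claim_words) >= threshold:
            return True
    return False

def filter_unsourced(sentences: list[str], sources: list[str]) -> list[str]:
    # Keep only sentences traceable to a retrieved source; drop the rest.
    return [s for s in sentences if supported(s, sources)]
```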
Emergency Detection
MediCompanion monitors user input for signs of medical emergencies. Descriptions of chest pain, severe allergic reactions, suicidal ideation, or other emergency situations trigger an immediate response directing the user to call emergency services, with local emergency numbers displayed prominently.
This detection system errs heavily on the side of caution. A false positive (telling someone to call 911 when they do not need to) is vastly preferable to a false negative (providing educational content when someone is having a heart attack).
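That bias toward false positives can be sketched as a deliberately over-sensitive screen that short-circuits normal processing on any match. The term list and handler below are placeholders invented for this example; a production system would use more than keyword matching.

```python
# Deliberately over-sensitive emergency screen (illustrative sketch).
# Matching errs toward false positives: any hit bypasses normal handling.

EMERGENCY_TERMS = [
    "chest pain", "can't breathe", "cannot breathe", "anaphylaxis",
    "severe allergic", "suicidal", "want to die", "overdose",
]

def is_emergency(user_input: str) -> bool:
    text = user_input.lower()
    return any(term in text for term in EMERGENCY_TERMS)

def handle_normally(user_input: str) -> str:
    return "...routine educational response..."  # placeholder downstream handler

def respond(user_input: str) -> str:
    if is_emergency(user_input):
        return "This may be an emergency. Please call your local emergency number now."
    return handle_normally(user_input)
```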
Audit Trail
Every interaction with MediCompanion is logged with full traceability. If a user reports that MediCompanion provided harmful information, the log provides the exact prompt, the model's response, the filtering and review steps applied, and the final output delivered to the user. This traceability is essential for identifying and fixing problems.
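The shape of such a log entry might look like the record below. The field names are assumptions for illustration; the point is that every stage of the pipeline (prompt, raw response, filters, review decision, final output) is captured in one traceable record.

```python
# Sketch of a per-interaction audit record (field names are assumptions).
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditRecord:
    user_prompt: str
    model_response: str
    filters_applied: list = field(default_factory=list)
    flagged_for_review: bool = False
    final_output: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Serialized form suitable for append-only audit storage.
        return json.dumps(asdict(self))
```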
Technical Implementation
MediCompanion is built on a retrieval-augmented generation (RAG) architecture that grounds responses in verified medical content.
Knowledge Base. A curated database of medical information from authoritative sources. Content is reviewed and updated on a regular cycle. Each piece of content is tagged with its source, publication date, evidence level, and applicable conditions.
Retrieval Layer. When a user asks a question, the system retrieves relevant content from the knowledge base using semantic search. The retrieved content provides the factual basis for the response.
Generation Layer. The AI model synthesizes retrieved content into a clear, patient-friendly response. The model's role is translation and synthesis, not knowledge generation. It takes medical content written for professionals and presents it in accessible language.
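The retrieval step can be sketched as a nearest-neighbor search over embedded passages. In the sketch below the knowledge base is assumed to be a list of (embedding, passage) pairs and cosine similarity ranks candidates; the embedding model itself is out of frame, and all names are illustrative.

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec: list, knowledge_base: list, k: int = 3) -> list:
    """Return the k passages whose embeddings are closest to the query."""
    ranked = sorted(
        knowledge_base, key=lambda item: cosine(query_vec, item[0]), reverse=True
    )
    return [passage for _, passage in ranked[:k]]
```

The retrieved passages then become the only factual material the generation layer is allowed to draw on, which is what makes the citation filter downstream enforceable.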
Filtering Layer. The generated response passes through multiple filters:
- Scope filter: does the response stay within the companion's boundaries?
- Safety filter: does the response contain potentially harmful recommendations?
- Citation filter: are all medical claims supported by retrieved content?
- Tone filter: is the response appropriately empathetic and non-alarmist?
Review Layer. Responses flagged by any filter are queued for human review before delivery. This creates a human-in-the-loop for edge cases without slowing down routine interactions.
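The filter-then-review flow can be sketched as follows. The individual filter bodies here are trivial placeholders standing in for the four real checks named above; what the sketch shows is the routing: any failed filter diverts the response to the human review queue instead of the user.

```python
# Sketch of the filter/review routing (filter bodies are placeholders).

def run_filters(response: str, retrieved: list) -> list:
    """Return the names of filters the response fails."""
    filters = {
        "scope": lambda r: "diagnos" not in r.lower(),              # placeholder
        "safety": lambda r: "stop your medication" not in r.lower(),  # placeholder
        "citation": lambda r: bool(retrieved),                       # placeholder
        "tone": lambda r: "!" not in r,                              # placeholder
    }
    return [name for name, check in filters.items() if not check(response)]

def deliver(response: str, retrieved: list):
    flagged = run_filters(response, retrieved)
    if flagged:
        return ("queued_for_review", flagged)  # human-in-the-loop path
    return ("delivered", response)             # routine fast path
```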
Why Open Source
Healthcare AI must be open source. This is not an ideological position; it is a safety requirement.
Patients deserve to know exactly how the system that provides health information works. Doctors who recommend the tool to patients need to verify its approach. Researchers who study its outputs need to audit its logic. Regulators who evaluate its safety need to inspect its code.
A closed-source healthcare AI asks patients to trust a black box with their health. That is an unreasonable request. Open source makes the system auditable, verifiable, and improvable by the broader medical and engineering community.
The medical community's involvement is particularly important. I am an engineer, not a physician. MediCompanion's medical content, safety protocols, and scope boundaries benefit from review by healthcare professionals. Open source enables that review.
The Regulatory Landscape
Healthcare AI exists in a regulatory context that varies by jurisdiction. In the US, the FDA has been developing frameworks for AI-based software in healthcare. MediCompanion's positioning as a patient education tool rather than a clinical decision support system places it in a specific regulatory category with specific requirements.
The key regulatory distinction is between tools that inform and tools that advise. MediCompanion informs. It provides information that helps patients understand their health. It does not advise on treatment decisions, diagnose conditions, or recommend clinical actions.
This distinction is maintained through the scope boundaries described above, and it is documented extensively for regulatory review.
Early Results
MediCompanion has been tested with small groups of patients managing chronic conditions: Type 2 diabetes, hypertension, and chronic pain. The feedback is encouraging.
Patients report feeling more informed and more confident in their medical appointments. The appointment preparation feature is consistently rated as the most valuable: patients arrive with organized questions and relevant symptom data, which makes the appointment time more productive for both the patient and the provider.
Healthcare providers who have reviewed MediCompanion's outputs report that the medical information is accurate, appropriately scoped, and consistent with clinical guidelines. Several have noted that the tool fills a gap they recognized but could not address within the constraints of appointment time.
The most important metric is the one I watch most closely: adverse events. To this point, there have been zero reported instances of MediCompanion providing harmful information or failing to flag an emergency situation. This is the metric that matters most, and maintaining it as the user base grows is the primary engineering priority.
Healthcare AI is a domain where moving fast and breaking things is not acceptable. MediCompanion moves carefully, verifies thoroughly, and defers to professionals for every decision that matters. That is the only responsible approach to building AI that interacts with people's health.