Why I Left Epic to Demystify Medical Billing
Our mission: building AI to make American healthcare make sense again
My dad is an AI founder. My mom is a medical assistant. I grew up splitting the difference.
From my father, I learned how to think in systems: how tools scale, how incentives shape outcomes, how business really works behind closed doors, and the power of being a domain expert who can communicate abstractions to damn near everyone.
From my mother, I absorbed what it means to serve: to show up in an exam room, tired and overworked, and still advocate for someone who didn't speak English, who didn't know what "deductible" meant, who couldn't pay. Finding meaning in a thankless labor of love, day in and day out.
Even as an ordinary high schooler from the Bay, I was drawn to the intersection of technology and care delivery. I spent the summer of 2017 at the University of Arizona (Tucson) in a biomedical engineering lab, where I got to work on now-obsolete sentiment analysis tools for de-identified EMR datasets and early digital scribes.
As a college student, bored at home during the pandemic, I led the creation of a zero-to-one ambulatory EMR / patient-intake portal for the Sonar Bangla Foundation, which scaled up to 22 nonprofit kidney and dialysis centers in the motherland. Here was technology serving the most vulnerable, not the most profitable. Simple, straightforward tools for care. It felt like the work I was born to do.
So when I graduated and landed my first real job on the generative AI team at Epic, one of the largest healthtech companies in the USA, I thought I'd found my calling. The mission sounded great: help doctors save time, streamline care, reduce burnout.
Am I Becoming Part of the Problem?
The deeper I got, the more uneasy I felt.
I realized we weren't just building productivity tools. We were encoding judgment. What got flagged as urgent, what got billed, who got followed up with… all high-stakes moral decisions masquerading as workflow enhancements. And too often, the goal was "efficiency" or "reducing our liability," not equity, vision, or effectiveness.
The contrast haunted me. Between my ethics classes at Cal and reading Cathy O'Neil's Weapons of Math Destruction, I was keenly aware of what AI could do when it refused to serve people who couldn't pay. It hit me that deploying medical agents top-down might permanently ingrain the impersonal apathy that binds the system together. Hardening sociopathy into software. Optimizing the quiet cruelty of wealth. Exploiting the soft bigotry of low expectations. Offering empty platitudes about safety and values, all without meaningful oversight.
"It is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God."
—Matthew 19:24
I'm not a Christian. But this verse lives in my head more than any alignment whitepaper. We've built a healthcare system that worships scale, automation, and margin. And now we're using AI to double down on it. Not because AI is evil, but because it reflects the system it's trained on. It's easier to automate billing than to question who's being billed and why. As Balzac wrote, "Every great fortune is built on a great crime."
That's why I left. I tried my hand at a venture-backed startup, Freed, but between a toxic work environment (thanks Andrey!) and the underpinnings of Dorsal.fyi already taking shape in my head, I knew I had to go forth and pursue my vision for a healthier, stronger, more financially secure nation on my own terms.
The Problem is Always People
Medical billing in America is designed to be incomprehensible. Every line item? A black box. Every denial couched in jargon. Every appeal a maze. Opacity is how a $4 trillion industry extracts maximum value from human suffering. Most folks have given up. Michael Seibel told me it was a problem he had given up on, and YC avoided making any investments in the space until his retirement.
I refuse to give in to despair. The same technology that can obscure can also reveal. The same systems that encode bias can be leveraged to expose it. I see a generational opportunity for radical candor: empowering patients, injecting liquidity into the market, and scaling healthcare access to new heights.
What We're Building
That's why I started Dorsal Health. While it’s a startup, it’s also a moral stance. A refusal to let AI be just another tool for quietly sorting people into "worth helping" and "worth denying."
We're building infrastructure that serves patients first: a novel model that aligns incentives across insurance companies and opaque hospital systems for the first time. This is no incremental, trench warfare play. We flip the script entirely. Our solutions ensure that:
Patients understand what they're really paying for, choose the right provider, manage their wellness and fitness benefits, and proactively screen for potential errors or fraud, particularly for members of impacted communities
Providers love that patients are more engaged, adhere better to their care plans, show up more consistently for preventive care and follow-ups, and most of all? Get taken care of before they get sicker
Health plans lower spend, offer a unique benefit to their policyholders, reduce or eliminate traditional TPAs, and get the good PR they need in 2025
Instead of GPT-wrappers that automate the status quo, we're reimagining systems to make power legible, a rare win-win-win in a messy, adversarial space.
Why Now?
It's not perfect. It's a band-aid on a wound that policy hasn't closed. I was a kid when Obamacare passed, and I'm still in awe of how little the No Surprises Act (2022) changed. Now we're staring down cuts to the welfare state so deep they could leave rural and inner-city communities alike destitute. (The BBB was approved by the House an hour before I wrote this.) Whether or not you believe in broad entitlements, one thing is clear: without a plan B, people will die.
Dorsal Health may not be everyone’s savior. But it's a refusal to do nothing.
To me, AI is normal technology. I've been working in NLP for eight years, a third of my life, and change is the only constant. But how we use it, what we choose to optimize, reveals everything about our values. AI safety isn't just about rogue superintelligence. It's about what we build into the tools we use today. What kind of world we quietly accept, and what we dare to question.
I don't want to build systems that make the rich richer by making the sick sicker. I want to build tools that speak truth, make power legible, and keep the door open for a more just kind of care for all.
That means asking not just what technology can do, but what I and only I can do with the life I've been given, the things I've seen, and the values no one else can replicate.
— Abrar
If you want to make America a stronger, healthier, more financially secure nation, find me at abrar [at] dorsal [dot] fyi or follow our progress on Substack, LinkedIn, and our landing page.