
Why AI Still Isn’t Fixing Patient Referrals—And How It Could


By Naheem Noah



A Call from the Black Hole

Three months into building Carenector’s facility-to-facility platform, I got a call that crystallized everything wrong with healthcare referrals. A hospital social worker, who was already using our individual patient platform to help families find care, had been trying to coordinate an institutional placement for an 82-year-old stroke patient for six days. She’d made 23 phone calls. Sent 14 faxes. The patient was medically cleared but stuck in an acute bed costing $2,000 per day because no one could confirm which skilled nursing facilities had open beds, accepted her Medicaid plan, and had stroke rehabilitation capacity.

“I love what you built for patients,” she told me, “but when I need to do a facility-to-facility transfer, I’m back to faxing. Can’t you fix this workflow, too?”

She wasn’t wrong. We’re in 2025, and despite billions poured into health IT and breathless AI promises, referring a patient often feels like stepping back into 1995. Earlier this year, THCB’s own editor Matthew Holt documented his attempt to navigate specialist referrals through Blue Shield of California. The echocardiogram referral his doctor sent never arrived at the imaging center. When he needed a dermatologist, his medical group referred him to a provider who turned out not to be covered by his HMO plan at all. “There is a huge opportunity here,” Holt concluded after his odyssey through disconnected systems, “even though we’ve got now a lot of the data…to integrate it and make it useful for patients.”

Clinicians make over 100 million specialty referrals annually in the U.S., yet research shows that as many as half are never completed.

Here’s what we’ve learned after a year of operation: we built a consumer-facing platform that helps individuals and families find care providers matching their needs, insurance, and location—it now serves over 100 daily users, including patients, social workers, and discharge planners. But solving individual care searches is only half the battle. The institutional referral workflow—hospital to skilled nursing facility, SNF to rehab center, clinic to specialist—remains trapped in fax machines and phone tag because no one redesigned the actual coordination process.

That’s what we’re building now. And the question haunting us isn’t why we don’t have better tools. It’s why billions in AI investment have left the institutional referral workflow virtually unchanged.

The Architecture of Failure

The answer isn’t about smarter algorithms or shinier dashboards. It’s about a fundamental mismatch between how AI gets deployed and how care coordination actually works.

Start with the data layer. One survey found that 69% of primary care physicians say they “always or most of the time” send full referral notes to specialists, but only 34% of specialists report receiving them. Even within a single hospital system, information routinely vanishes at handoff points. Matthew Holt experienced this firsthand when his doctor’s referral for an echocardiogram simply never arrived at the imaging center, despite prior authorization from Blue Shield already being in the system.

But the fragmentation goes deeper than missing referrals. When Holt’s medical group referred him to a dermatologist, they sent him to a provider not covered by his HMO plan, even though the EMR had his insurance information and member ID. As he documented, “there is a huge opportunity here…most of this data about who I should go and see…is all available. It’s just not made very obvious in any one place.” Medical groups, hospitals, and health plans each maintain their own systems, with no real-time integration to answer the simple question: Is this provider in-network for this patient’s plan?

Then there’s the incentive problem. A 2022 evaluation of CMS’s Comprehensive Primary Care Plus initiative found zero impact on care fragmentation. The researchers concluded that “high levels of fragmented care persist” because payment models don’t sufficiently reward providers for actually closing referral loops. Nobody gets paid to chase down a lost referral, so referrals slip through the cracks.

Finally, there’s the stubborn analog reality: over half of referral handoffs still happen by fax (56%) or paper handed to patients (45%). We haven’t rewired the workflow; we’ve just digitized the mess.

Why “AI-Powered” Solutions Keep Failing

Given these problems, you’d expect AI vendors to swoop in with solutions. Instead, most have made things worse by treating AI as an add-on rather than infrastructure.

The typical approach: OCR to scan paper referrals, auto-fill widgets for EHR fields, predictive algorithms for risk scoring. Each tool solves a micro-problem while ignoring the macro-disaster. As one Innovaccer analysis put it, healthcare AI risks “repeating past mistakes, with disconnected tools creating inefficiencies instead of solutions.”

McKinsey’s recent analysis makes the same point: the widespread adoption of AI-enabled point solutions “is creating a new fragmentation problem.” The path forward isn’t more isolated tools but “assembling these capabilities into a modular, connected AI architecture.” And without data interoperability, none of this matters. As Innovaccer bluntly states, “Without clean data, true interoperability is fantasy. Without interoperability, AI is just expensive noise.”

What We’re Building—Informed by 100+ Daily Users

Our consumer platform taught us something crucial: when you give people (and the social workers helping them) a tool that actually matches their needs to available providers in real time, they use it. Daily. Over 100 users now rely on Carenector to navigate post-acute care, rehabilitation services, and specialist referrals based on their insurance, location, and medical requirements.

But those same social workers kept telling us, “This works great when I’m helping a family member search on their own. But when I need to coordinate a hospital discharge or facility transfer on behalf of my organization, I’m back in the Stone Age.”

That’s why we’re now building the facility-facing platform, and we’re doing it differently from our first attempt. We’re not guessing at what hospitals need. We’re testing it actively with a select group of partner facilities, incorporating continuous feedback from their case managers and discharge planners who’ve seen what works in the consumer product.

The Facility Workflow We’re Building

Instead of bolting AI onto existing chaos, we’re rebuilding the institutional referral process end-to-end. Care teams enter structured patient needs—diagnoses, rehab requirements, equipment, insurance type, location—without sharing any personally identifiable information. No names, no medical record numbers, no birthdates in the initial matching phase. Our AI engine performs real-time constraint-aware matching based purely on clinical and logistical criteria: if a patient needs skilled nursing with PT services, accepts only specific Medicare plans, requires Spanish-speaking staff, and must be within 10 miles, the system surfaces only facilities meeting every criterion simultaneously.
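
For readers who want to see what “constraint-aware matching” means concretely, here is a minimal Python sketch. The facility fields, criterion names, and example plan values are illustrative assumptions for this post, not Carenector’s production schema; the point is that every criterion acts as a hard filter and each match carries the reasons it passed.

```python
from dataclasses import dataclass

@dataclass
class Facility:
    name: str
    offers_pt: bool          # skilled nursing with physical therapy on site
    accepted_plans: set      # e.g. {"Medicare Plan A"} (illustrative)
    staff_languages: set     # e.g. {"English", "Spanish"}
    distance_miles: float    # distance from the referring hospital

@dataclass
class ReferralNeeds:
    needs_pt: bool
    insurance_plan: str
    required_language: str   # empty string if no language requirement
    max_distance_miles: float

def match_reasons(facility: Facility, needs: ReferralNeeds) -> list:
    """Return the criteria this facility satisfies, or [] if any hard constraint fails."""
    reasons = []
    if needs.needs_pt:
        if not facility.offers_pt:
            return []
        reasons.append("skilled nursing with PT")
    if needs.insurance_plan not in facility.accepted_plans:
        return []
    reasons.append(f"accepts {needs.insurance_plan}")
    if needs.required_language:
        if needs.required_language not in facility.staff_languages:
            return []
        reasons.append(f"{needs.required_language}-speaking staff")
    if facility.distance_miles > needs.max_distance_miles:
        return []
    reasons.append(f"within {needs.max_distance_miles:g} miles")
    return reasons

def find_matches(facilities, needs):
    # Every criterion must hold simultaneously; the reasons double as the
    # case-manager-facing explanation of why a facility matched.
    return [(f, r) for f in facilities if (r := match_reasons(f, needs))]
```

The query described above becomes something like ReferralNeeds(needs_pt=True, insurance_plan="Medicare Plan A", required_language="Spanish", max_distance_miles=10), and only facilities that pass every check come back, each carrying the reasons it matched.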

Once matches are found, referring facilities send inquiries through secure channels, with both sides seeing the same status timeline. We’ve built ephemeral messaging threads where nurses and intake coordinators communicate in real time; no more faxing into the void and wondering. After a facility accepts, everything stays in one thread: transport scheduling, medication reconciliation, and insurance verification.
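
To make “both sides seeing the same status timeline” concrete, here is a small sketch of the idea in Python. The status names and fields are illustrative, not our actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ReferralStatus(Enum):
    SENT = "sent"
    RECEIVED = "received"
    UNDER_REVIEW = "under review"
    ACCEPTED = "accepted"
    DECLINED = "declined"
    TRANSFER_SCHEDULED = "transfer scheduled"
    COMPLETED = "completed"

@dataclass
class ReferralTimeline:
    referral_id: str
    events: list = field(default_factory=list)  # (timestamp, status, note)

    def record(self, status: ReferralStatus, note: str = "") -> None:
        # Sender and receiver both append to, and read from, this one record,
        # so "did they get it?" becomes a lookup instead of a phone call.
        self.events.append((datetime.now(timezone.utc), status, note))

    def current_status(self):
        return self.events[-1][1] if self.events else None

# Example: the receiving facility's intake coordinator marks the referral
# under review, and the referring case manager sees the same timeline.
timeline = ReferralTimeline(referral_id="ref-001")
timeline.record(ReferralStatus.SENT)
timeline.record(ReferralStatus.RECEIVED)
timeline.record(ReferralStatus.UNDER_REVIEW, note="awaiting bed count")
print(timeline.current_status())  # ReferralStatus.UNDER_REVIEW
```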

Here’s what makes this intelligent: we track whether placements succeed or fail. Did the patient get readmitted within 30 days? Did the facility’s services match what was promised? That outcome data feeds back into the matching algorithm, gradually learning which facilities deliver on their commitments.
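
A hedged sketch of how that feedback loop might work. The two outcome signals come straight from the paragraph above (30-day readmission, services matching what was promised); the 0-to-1 reliability score, the weights, and the learning rate are illustrative assumptions, not our actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class PlacementOutcome:
    facility_id: str
    readmitted_within_30_days: bool
    services_matched_promise: bool

def outcome_signal(outcome: PlacementOutcome) -> float:
    """Collapse one placement into a 0-1 'did this work out' signal."""
    signal = 1.0
    if outcome.readmitted_within_30_days:
        signal -= 0.5
    if not outcome.services_matched_promise:
        signal -= 0.5
    return max(signal, 0.0)

def update_reliability(current: float, outcome: PlacementOutcome, rate: float = 0.1) -> float:
    # Exponential moving average: recent placements nudge the score, so a
    # facility that stops delivering on its commitments gradually ranks lower.
    return (1 - rate) * current + rate * outcome_signal(outcome)

def rank_matches(matches):
    # matches: [(facility_id, reliability_score), ...] that already passed the
    # hard constraints; ordering now reflects observed outcomes over time.
    return [fid for fid, score in sorted(matches, key=lambda m: m[1], reverse=True)]
```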



What We’re Learning in Real Time

We’re building and testing the facility platform with a select group of partner hospitals and skilled nursing facilities. This isn’t broadly available yet. We’re iterating rapidly based on continuous feedback from these early adopters, and the lessons are reshaping our approach:

  • Trust requires transparency. Our early facility matching AI was a black box—“trust us, these are good matches.” Adoption among our pilot partners was terrible. When we added transparency showing why each facility matched based on which specific criteria, engagement jumped. Case managers want to see the system’s reasoning, not just its recommendations.
  • Privacy is about smart defaults, not paranoia. We initially built maximalist privacy controls that made the workflow clunky. Continuous feedback from our testing partners taught us the right approach: start with zero PII in the matching phase, so facilities see only clinical and logistical criteria, then share patient identifiers only after a facility indicates interest and capacity, using expiring access and audit logs (a minimal sketch of this gating follows the list). This middle path eliminates the referral black hole (facilities can respond quickly without regulatory concerns) while protecting patient privacy where it matters most.
  • The real barrier isn’t technology—it’s adoption strategy. One social worker in our pilot kept faxing alongside our beta platform. Three weeks into testing, after seeing four successful placements coordinated through our system, she stopped faxing. The tech didn’t change. Her confidence did. We’re learning to measure success not in features shipped but in workflows abandoned.
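
As promised above, a minimal sketch of the identifier-gating idea in Python. The 24-hour expiry, token format, and field names are illustrative defaults chosen for the example, not a compliance recipe or our production implementation.

```python
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class PatientIdentifiers:
    # None of this is visible during matching; it exists only inside a grant
    # created after a facility indicates interest and capacity.
    name: str
    date_of_birth: str
    medical_record_number: str

@dataclass
class IdentifierGrant:
    facility_id: str
    identifiers: PatientIdentifiers
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(hours=24)
    )
    audit_log: list = field(default_factory=list)  # (timestamp, who accessed)

    def read(self, requester: str) -> PatientIdentifiers:
        now = datetime.now(timezone.utc)
        if now > self.expires_at:
            raise PermissionError("identifier grant has expired")
        self.audit_log.append((now, requester))  # every access is recorded
        return self.identifiers
```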

Beyond Technology: What the System Needs

Even the best-designed AI won’t fix referrals alone. The ecosystem needs parallel changes:

  • Regulatory reform: CMS could require electronic referral tracking as a condition of participation and pay providers for successful referral completion, not just for encounters.
  • Standards adoption: FHIR APIs and HL7 interoperability standards exist but remain optional. Mandatory adoption would let different vendors’ systems actually talk to each other (see the FHIR sketch after this list).
  • Shared accountability: The biggest cultural shift needed is moving from “I sent the referral” to “I confirmed the patient got care.” ACOs and value-based contracts are nudging this direction, but slowly.
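
For readers unfamiliar with what those standards buy you: a referral can be expressed as a FHIR ServiceRequest resource and POSTed to any conformant server. The sketch below uses a placeholder server URL and placeholder resource references; the field names follow the published FHIR R4 specification.

```python
import json
import urllib.request

# A referral expressed as a FHIR R4 ServiceRequest resource.
referral = {
    "resourceType": "ServiceRequest",
    "status": "active",
    "intent": "order",
    "code": {"text": "Skilled nursing facility placement with PT"},
    "subject": {"reference": "Patient/example"},                    # placeholder
    "requester": {"reference": "Organization/referring-hospital"},  # placeholder
    "performer": [{"reference": "Organization/receiving-snf"}],     # placeholder
}

request = urllib.request.Request(
    "https://fhir.example.org/ServiceRequest",  # placeholder FHIR endpoint
    data=json.dumps(referral).encode("utf-8"),
    headers={"Content-Type": "application/fhir+json"},
    method="POST",
)
# with urllib.request.urlopen(request) as response:
#     print(response.status)  # 201 Created when the receiving system accepts it
```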

From Band-Aids to Rebuilt Plumbing

That 82-year-old stroke patient? She got placed on day seven through the social worker’s fax machine. The delay cost the hospital $14,000 in excess acute care days. Multiply that across millions of referrals annually and you glimpse the economic waste embedded in our infrastructure.

The technology to fix this exists—real-time data pipelines, constraint satisfaction algorithms, secure messaging, outcome analytics. What we haven’t had is the will to reassemble these pieces into coherent workflows instead of piling them onto broken processes.

Our consumer platform proved that when you rebuild the search and matching layer from scratch, people adopt it. Now we’re testing whether the same approach works for institutional coordination with a select group of pilot facilities. The early signals from these partners are promising: case managers who use both our products tell us the facility platform feels like a natural extension of what they already trust.

The hardest conversations aren’t with engineers; they’re with hospital administrators who’ve been burned by “AI solutions” that promised transformation and delivered expensive shelfware. We don’t lead with AI anymore. We lead with a question: When your case manager sends a referral, do they know—with certainty—that it was received, reviewed, and acted on? For most hospitals, the answer is no. That’s the problem we’re solving with our pilot partners.

If we succeed, it won’t be because we built a smarter algorithm. It’ll be because we rebuilt the plumbing based on what real users told us they needed. And if we fail? It’ll probably be because we forgot that technology is never the hardest part of healthcare—trust is.

Naheem Noah is a PhD researcher at the University of Denver and co-founder of Carenector, a healthcare referral platform.

