Healthcare Revenue Cycle Intelligence Series -- Post 2 of 5 -- April 03, 2026 | Bob Klein

Generic AI Won't Save Your Revenue Cycle. Here's What Payer-Aware Intelligence Actually Means.

[Diagram: generic AI tools contrasted with payer-aware intelligence models trained on organization-specific claims data]

Key takeaway: Generic AI tools trained on industry averages can't fix your specific revenue cycle problems. Org-specific models trained on your payer mix, denial patterns, and claims history -- and kept in sync with CMS rules -- are what separate pilot purgatory from real ROI.

McKinsey published a report in January 2026 titled "Agentic AI and the Race to a Touchless Revenue Cycle." The central claim -- that AI enablement could cut cost to collect by 30 to 60 percent -- is getting a lot of attention. Rightfully so. But buried in the same report is a sentence that deserves more attention than it gets: most health systems using AI in revenue cycle are deploying it in isolated point solutions -- not integrated, not learning across the full cycle, not specific to their organization.

McKinsey even names the trap: pilot purgatory. You launch a proof of concept, demonstrate some potential, and then nothing. The pilot doesn't fail, but it doesn't grow either. The ROI case never quite closes. The next budget cycle comes around and the initiative gets deprioritized. I've watched this happen. And I've watched it happen for a specific, predictable reason: the AI being deployed isn't learning the right things. It's generic. And in revenue cycle, generic is almost as bad as nothing.

Why generic AI fails in RCM

Generic AI-powered RCM tools are typically trained on industry-wide claims data, general payer policy documentation, and aggregate denial patterns across thousands of unrelated organizations. That data produces models that are broadly accurate and specifically useless -- because your revenue cycle problem isn't an industry problem. It's your problem, shaped by your payer mix, your patient population, your specific contracted plans, your providers' documentation habits, and your history of what gets denied and what gets paid.

Commercial and Medicare Advantage payers have spent years industrializing the denial process. Their systems apply frequently changing rules at the plan level -- and they do it knowing that most hospitals won't appeal the majority of what gets denied. For a payer, a denied claim that is never appealed is effectively free revenue. The rules that govern this are largely public. CMS publishes National and Local Coverage Determinations that define exactly what is covered, for which diagnoses, under which conditions. The appeal rights exist. What's missing isn't the legal framework -- it's the operational system to use it.
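To make the coverage-determination mechanics concrete, here is a minimal, hypothetical sketch of an automated coverage check. The rule table, procedure code, and diagnosis codes are invented for illustration; real NCDs and LCDs are far richer documents with frequency limits, setting restrictions, and documentation requirements.

```python
# Hypothetical sketch: checking a claim line against an LCD-style coverage rule.
# All codes and the rule table below are illustrative, not real CMS data.

COVERAGE_RULES = {
    # procedure code -> diagnosis codes the (hypothetical) LCD covers it for
    "93306": {"I50.22", "I50.32", "I35.0"},
}

def is_covered(procedure: str, diagnoses: list[str]) -> bool:
    """Return True if any diagnosis on the claim satisfies the coverage rule.

    Procedures with no rule on file are treated as covered here -- a
    simplification; a real system would distinguish "no rule" from "covered".
    """
    covered_dx = COVERAGE_RULES.get(procedure)
    if covered_dx is None:
        return True
    return any(dx in covered_dx for dx in diagnoses)

print(is_covered("93306", ["I50.22", "E11.9"]))  # True: supporting diagnosis present
print(is_covered("93306", ["E11.9"]))            # False: flag before submission
```

The point of a check like this is timing: run it before the claim goes out, and a predictable denial becomes an editable claim instead of an appeal.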

A generic AI tool doesn't know that your dominant commercial payer has a documented pattern of denying a particular procedure code when paired with a specific diagnosis -- a pattern that has cost your organization money every quarter for two years. It doesn't know that one of your Medicare Advantage plans updated its prior authorization requirements for post-acute placements four months ago. It doesn't know that your highest-volume physician group produces clean claims 82 percent of the time while another group is at 61 percent -- and that the delta is traceable to three specific documentation habits. Your organization knows all of this. Or rather, your data does -- scattered across your EHR, your clearinghouse, your denial reports, your remittance files. What's missing is the intelligence layer that connects it, learns from it, and keeps it in sync with CMS as coverage rules change.
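Surfacing patterns like these from your own claims history is not exotic. The sketch below shows the basic shape of the analysis -- group claim outcomes by payer, procedure, and diagnosis, then rank combinations by denial rate. The data, thresholds, and codes are toy values for illustration only.

```python
# Illustrative sketch: mining org-specific denial patterns from claims history.
from collections import defaultdict

claims = [  # (payer, procedure, diagnosis, was_denied) -- toy data
    ("PayerA", "97110", "M54.5", True),
    ("PayerA", "97110", "M54.5", True),
    ("PayerA", "97110", "M54.5", False),
    ("PayerA", "93306", "I50.22", False),
    ("PayerB", "97110", "M54.5", False),
]

def denial_hotspots(claims, min_volume=3, min_rate=0.5):
    """Rank (payer, procedure, diagnosis) combos by observed denial rate,
    keeping only combos with enough volume to be actionable."""
    totals, denials = defaultdict(int), defaultdict(int)
    for payer, proc, dx, denied in claims:
        key = (payer, proc, dx)
        totals[key] += 1
        denials[key] += denied
    return sorted(
        ((key, denials[key] / totals[key], totals[key])
         for key in totals
         if totals[key] >= min_volume and denials[key] / totals[key] >= min_rate),
        key=lambda item: -item[1],
    )

for key, rate, volume in denial_hotspots(claims):
    print(key, f"{rate:.0%} denied across {volume} claims")
```

At production scale the inputs come from 835 remittance files rather than a hand-built list, and the grouping keys extend to plan, provider, and denial reason code -- but the core move is the same: your data, your payers, your patterns.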

Where RCM leaders are converging in 2025

61% -- Health systems prioritizing AI for denials/appeals (up from 39% in 2023)

60% -- Health systems prioritizing AI for prior authorization (up from 35% in 2023)

51% -- Health systems prioritizing AI for CDI and documentation (up from 31% in 2023)

Notice what the 2025 data shows: denials and appeals (61%) and prior authorization (60%) have surged to the top, both up dramatically from 2023. It's no coincidence that these are the areas where upstream clinical decisions drive downstream financial outcomes. They're exactly where payer-specific intelligence -- not generic automation -- determines whether the investment pays off.

What payer-aware intelligence actually means

Trained on your data

Your 12 to 18 months of claims history, remittance files, denial codes, and appeal outcomes -- organized into a model that understands your specific payer behavior, not industry averages.

In sync with CMS

LCD and NCD updates, Medicare Advantage rule changes, prior auth policy shifts -- the model tracks them and adjusts what it flags and recommends, continuously.

Learns from every outcome

Every claim result from your clearinghouse is a data point. What got paid, what got denied, what overturned on appeal -- that feedback loop makes the model smarter over time.

How the intelligence layer is structured

The intelligence layer draws from three sources: CMS rules and updates (coverage determinations, NCD/LCD changes, Medicare Advantage rule sync), payer contracts (plan rules, prior auth requirements, plan-level behavior), and claims and remittance data (denial codes, payment outcomes, appeal results from 835 ERA files). These feed into an org-specific payer intelligence model that trains on your claims, stays in sync with CMS, and learns from every claim outcome.

That model then delivers three things at every point in the workflow: point-of-care alerts with payer-specific guidance before documentation is final, denial prevention that flags and fixes risk before claims are submitted, and appeal intelligence that surfaces CMS citations and identifies winnable appeals. Every clearinghouse outcome feeds back into the intelligence layer -- the model improves with every claim.
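The feedback loop at the center of this is simple to describe in code. The sketch below is a deliberately minimal stand-in for the intelligence layer -- a running denial rate per payer and procedure, updated from each clearinghouse outcome. The class and method names are invented for illustration; a production model would incorporate far more signal than a running rate.

```python
# Minimal sketch of the outcome feedback loop described above.
from collections import defaultdict

class PayerIntelligence:
    """Tracks per-(payer, procedure) denial rates from remittance outcomes."""

    def __init__(self):
        self.totals = defaultdict(int)
        self.denials = defaultdict(int)

    def record_outcome(self, payer: str, procedure: str, denied: bool) -> None:
        """Fold one clearinghouse result (e.g. parsed from an 835 ERA)
        into the model -- this is the 'learns from every outcome' step."""
        key = (payer, procedure)
        self.totals[key] += 1
        self.denials[key] += denied

    def denial_risk(self, payer: str, procedure: str) -> float:
        """Observed denial rate for this combo; 0.0 when no history exists."""
        key = (payer, procedure)
        return self.denials[key] / self.totals[key] if self.totals[key] else 0.0

model = PayerIntelligence()
for denied in (True, True, False, True):
    model.record_outcome("PayerA", "97110", denied)
print(model.denial_risk("PayerA", "97110"))  # 0.75
```

The same risk score that drives pre-submission denial prevention can power point-of-care alerts upstream -- one model, consulted at every point in the workflow, sharpened by every claim that comes back.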

This is the difference between a tool and an institutional capability. A tool does what it was built to do until you stop paying for it. An institutional capability learns your organization, compounds over time, and can be transferred to your team rather than remaining vendor-dependent indefinitely. The Build-Operate-Transfer model we use at Digital Scientists is a direct response to decades of vendor lock-in: we build the intelligence capability, operate it with your team until the workflows and models are proven, and then transfer ownership. You end up with an asset, not a subscription.

Where to start to avoid pilot purgatory

McKinsey's advice on sequencing is sound: start with the back end of the revenue cycle, where the tasks are largely administrative and lower-risk to automate. Denials management, A/R follow-up, underpayment recovery, and claim follow-up triage are the right entry points -- not because they're the whole solution, but because they generate ROI quickly, build organizational confidence, and create the data foundation that makes the upstream work more powerful.

The back end is where you prove the model. The front end -- clinical documentation, prior authorization, coding at the point of care -- is where you capture the real money. You need both, but the sequencing matters.

In our experience, organizations that try to solve the upstream clinical workflow problem before they've built a clean picture of their downstream denial patterns almost always struggle with adoption. Providers won't change how they document based on abstractions. They'll change when you can show them, with their own data, that a specific documentation pattern is producing a specific denial rate with a specific payer -- and that changing it will reduce the rework their coders have to do and the revenue their organization is losing.

$60M-$120M -- Potential savings for a health system with $6B in patient revenue (McKinsey)

$3M/yr -- Target at community hospital scale ($200M net patient revenue, 1.5pt cost-to-collect improvement)

30-60% -- Cost-to-collect reduction achievable with AI enablement (McKinsey)

That's the target. But you get there by stacking near-term wins, not by betting on a single enterprise platform deployment. The touchless revenue cycle McKinsey describes is real. Getting there requires an AI that knows your organization -- not one built for someone else's.

Start here

Want to know where your revenue is leaking before we build anything?

We start every engagement with a Revenue Integrity Audit -- 2 to 4 weeks, 12 to 18 months of claims data, and a prioritized list of denial drivers ranked by dollars, not count.