Ambient AI Is Changing Clinical Notes—But Is It Enough?

GRAND ROUNDS

My daughter’s pediatrician asked if it was okay to use ambient documentation during our visit. The AI would listen in, filter out the crying (from her, not me), and draft a clean note for the doctor to review and sign.

Turns out, ambient documentation is becoming more common in clinics and hospitals across the country. But is it actually saving physicians time—or just shifting the burden elsewhere?

In this article, I’ll quickly walk you through what generative AI is, break down a new Kaiser Permanente study on their rollout of ambient AI, and share why I think there are even better use cases for this technology.

What is Generative AI?

Generative AI is a rapidly evolving branch of artificial intelligence that’s making its way into clinical workflows.

At its core, generative AI refers to algorithms that can create new content—like text, images, or even structured data—by learning patterns from existing information. While traditional AI might help with categorizing or predicting, generative AI can generate a SOAP note, write an insurance appeal, or summarize a patient’s chart.

In healthcare, the applications are multiplying. Tools like DoximityGPT can draft referral letters, generate patient education handouts, and help with prior auths. Others are being built to summarize medical histories, respond to in-basket messages, or document entire encounters.

That last one—ambient documentation—is quickly gaining traction. It uses generative AI to passively listen to clinical conversations and produce visit notes in real time. The goal? Reduce pajama charting, cut down on documentation burden, and let physicians focus more on the patient, not the EHR.

But does it actually deliver on that promise?

“Ambient Artificial Intelligence Scribes: Learnings after 1 Year and over 2.5 Million Uses”

Kaiser Permanente published results on one of the largest real-world ambient AI deployments to date—over 7,200 physicians used the tool across more than 2.5 million patient encounters.

The big takeaway: ambient AI scribes reduced documentation time, particularly for heavy users. Physicians in primary care, mental health, and emergency medicine were most likely to use the tool—and they were also the ones benefiting the most.

The highest-volume users—those with the most documentation burden going in—saved an average of 0.7 minutes (42 seconds) per note. Low-volume users saved only 0.15 minutes (9 seconds).

Across 2.5 million encounters, those small savings added up to roughly 15,700 hours of documentation time—about 1,794 full workdays.
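Those headline numbers can be sanity-checked with some quick arithmetic. The per-group encounter split isn't reported in the excerpt above, so the sketch below simply reconciles the stated totals rather than reproducing the study's methodology:

```python
# Back-of-envelope check of the Kaiser figures reported above.
# Reported: ~15,700 hours saved across ~2.5 million encounters,
# described as roughly 1,794 full workdays.

total_hours_saved = 15_700
encounters = 2_500_000
reported_workdays = 1_794

# Average savings per encounter, in seconds.
avg_seconds_per_note = total_hours_saved * 3600 / encounters
print(f"{avg_seconds_per_note:.1f} s per note")  # ~22.6 s, which sits between
# the 9 s (low-volume) and 42 s (high-volume) per-group figures reported

# Workday length implied by the "1,794 workdays" framing.
hours_per_workday = total_hours_saved / reported_workdays
print(f"{hours_per_workday:.2f} h per workday")  # ~8.75 h
```

In other words, the average saving works out to about 22.6 seconds per note, and the workday conversion assumes roughly 8.75-hour days—consistent with the numbers the study reports.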

To play Devil’s Advocate: Is 0.7 minutes per note really that meaningful?

On an individual level, maybe not. If you’re only seeing 10–15 patients a day, you might barely feel the difference. And importantly, physicians still have to review and sign each note. That mental overhead doesn’t disappear (more on that in Dashevsky’s Dissection below).

So while the math works out at scale, the lived experience may not feel like a dramatic transformation—especially for clinicians with lighter documentation loads.

But the real question isn’t just whether it works. It’s whether it’s the best use of our AI resources.

Dashevsky’s Dissection

The promise of ambient AI is seductive: reduce admin burden, eliminate pajama charting, and give physicians more time with their patients (or their families). But as someone who’s spent a lot of time thinking and writing about the intersection of AI and clinical workflows, I’d argue there are more impactful—and more efficient—ways to use this technology.

Take AI’s role in summarizing data. This is an unequivocal win. Imagine a world where each patient’s chart includes an AI-generated summary of their medical history: who they are, recent hospitalizations, office visits, treatments, key lab results, and ER visits. Instead of clicking through 30+ progress notes, you get a concise, well-organized snapshot. Even if it saves just a few minutes per patient, that time adds up quickly—and minutes matter.

Another clear win: AI-powered medical search. Tools like OpenEvidence are becoming a “hospitalhold” name among residents. They deliver evidence-based answers in seconds, saving clinicians from scouring PubMed or UpToDate during high-pressure situations.

But when it comes to clinical documentation, AI’s impact is more complicated.

Yes, ambient scribes reduce typing. But they don’t eliminate the need for physician oversight. We still need to read, review, and correct what’s been generated—often with the same level of scrutiny we’d apply to a medical student’s note. If physicians are expected to proofread every line, are we actually saving time… or just shifting the cognitive load?

Even in Kaiser’s study, while 84% of physicians said ambient AI improved visit interactions and 82% said it improved job satisfaction, those benefits were concentrated among a specific group: the highest-volume users who already had heavy documentation burdens. For many others, the time saved per note was negligible—just 0.15 minutes. The ROI isn’t uniform, and we shouldn’t mistake “good enough for some” as “the right solution for all.”

There’s also the question of liability. If an AI-generated note includes a factual error that goes unnoticed, who’s responsible? And as these tools become more embedded in the EHR, we risk over-relying on them—blindly trusting that what was generated is correct. During a hectic clinic or inpatient shift, are you going to triple-check every AI note? Or just assume it got it right?

And the trust isn’t just on the clinician side—56% of patients in the Kaiser study said AI improved the quality of their visit, while none said it made the experience worse. That’s great PR for ambient AI. But it also raises a red flag: the more seamless and invisible AI becomes, the easier it is for everyone—clinicians, patients, health systems—to assume it’s flawless.

As I said in my previous newsletter:

And what happens when we assume AI is 100% correct? AI makes an ass out of you and me.

So yes, ambient AI scribes are helpful. But they’re far from the most valuable application of this technology. If we want to truly transform the physician experience and improve care, we should be doubling down on AI tools that reduce cognitive overload, enhance patient understanding, and optimize real-time decision-making—not just those that mimic our existing inefficiencies.

COMMUNITY POLL

Is ambient AI documentation truly reducing your workload?


INSIDE THE HUDDLE

Healthcare Huddle
Sunday Newsletter

Huddle+
Inefficiency Insights

Huddle+
Huddle #Trends

Healthcare Providers
Residency Reflections

Huddle+
Huddle University
Available for purchase without a Huddle+ membership.

Check out more exclusive coverage with a Huddle+ subscription.

Read personalized, high-quality content that helps healthcare providers lead in digital health, policy, and business. Become a Huddle+ member here.
