Telemedicine Talks

#43 - 2026 Compliance Forecast: Surviving the New AI Rules



This episode is sponsored by Lightstone DIRECT. Lightstone DIRECT invites you to partner with a $12B AUM real estate institution as you grow your portfolio. Access the same single-asset multifamily and industrial deals Lightstone pursues with its own capital – Lightstone co-invests a minimum of 20% in each deal alongside individual investors like you. You’re an institution. Time to invest like one.

________________________

Can you trust an AI that’s writing your treatment plans, or will 2026 be the year clinicians start paying the price for automation?

In this 2026 compliance predictions episode, Phoebe Gutierrez shares her “love–hate relationship” with AI: it streamlines operations and boosts efficiency, but it cannot be treated as a source of truth. As more practices embed AI into core clinical workflows, the question becomes unavoidable: Who is responsible when AI is wrong? The clinician? The platform? The vendor?

Phoebe explores how regulators are now answering that question. She explains how the FDA, the ONC, and state legislatures are rapidly rolling out rules governing AI-enabled software, clinical decision support, bias testing, audit trails, human oversight, and patient disclosure. With over 250 AI-related bills introduced in 34 states, the landscape is shifting faster than most companies can keep up. She walks through the most common—and dangerous—mistakes she sees digital health companies making, including auto-populating treatment plans without clinician review, failing to track AI overrides, not disclosing AI use in patient encounters, ignoring bias testing, and misunderstanding liability responsibilities between platforms and vendors.

AI isn’t going anywhere—but the way we use it must evolve. This episode gives you the roadmap.

Three Actionable Takeaways:

  • Keep Humans in the Loop—Always: AI can support clinical workflows, but clinicians must verify recommendations, diagnoses, dosing suggestions, and triage outputs. Automation without oversight is now a regulatory red flag.
  • Track Every AI Decision and Override: If you can’t show who reviewed the AI output, whether it was modified, and why, you cannot prove safe use or compliance—especially during audits or investigations. (See the sketch after this list.)
  • Build an Internal AI Governance System: Assign ownership, maintain an AI tool inventory, evaluate bias, ensure state-by-state compliance, review vendor claims, and require transparent validation data before deployment.
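For teams acting on the second takeaway, here is a minimal, purely illustrative sketch of what one override-log entry might capture. The episode does not prescribe a schema; it only says you must be able to show who reviewed the AI output, whether it was modified, and why. Every field name, tool name, and value below is a hypothetical assumption.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIOverrideRecord:
    # One audit-trail entry for an AI-generated clinical suggestion.
    # All field names are hypothetical; the episode only asks that you can
    # show who reviewed the output, whether it was modified, and why.
    tool_name: str                            # which AI tool produced the output
    encounter_id: str                         # patient encounter it belongs to
    ai_output_summary: str                    # what the AI recommended
    reviewed_by: str                          # clinician who reviewed it
    accepted: bool                            # used as-is (True) or overridden (False)
    modification_note: Optional[str] = None   # what was changed and why
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: a clinician overrides an AI dosing suggestion and records the reason.
record = AIOverrideRecord(
    tool_name="triage-assist-v2",             # hypothetical tool name
    encounter_id="enc-0042",
    ai_output_summary="Suggested 20 mg starting dose",
    reviewed_by="Dr. A. Rivera",
    accepted=False,
    modification_note="Reduced to 10 mg due to renal impairment",
)
print(record)

Logged consistently, entries like this are what let a practice answer the “who reviewed it, was it modified, and why” questions during an audit or investigation.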

About the Show

Telemedicine Talks explores the evolving world of digital health, helping physicians navigate new opportunities, regulatory challenges, and career transitions in telemedicine.

About the Host

  • Phoebe Gutierrez – Former state regulator turned telehealth executive, specializing in compliance and sustainable virtual care models.
    Connect with Phoebe Gutierrez:
    https://www.linkedIn.com/in/pkgutierrez/ 
    [email protected]

The information provided in Telemedicine Talks is for educational and informational purposes only and should not be construed as medical, legal, or financial advice. While we discuss best practices, industry trends, and real-world experiences, every situation is unique. Listeners should consult with qualified professionals before making decisions related to telemedicine practice, compliance, contracts, or business operations. The views expressed by the hosts and guests are their own and do not necessarily reflect those of any organizations they may be affiliated with.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.


Telemedicine Talks, by Phoebe Gutierrez, Dr. Leo Damasco, and Doctor Podcast Network