How AI could transform America's healthcare system
Fox News senior medical analyst Dr. Marc Siegel takes a look at how artificial intelligence could vastly improve the healthcare system but points out human doctors are still needed in the equation on The Story.
Artificial intelligence is quickly reshaping healthcare. It now supports diagnostic imaging, clinical decision tools, patient messaging and back-office workflows. According to the World Economic Forum, 4.5 billion people still lack access to essential care, and the global health worker shortage could reach 11 million by 2030. AI could help close that gap.
However, as AI becomes more embedded in care, regulators are zeroing in on a simple question: Should patients be told when AI plays a role in their care?
In the United States, no single federal law requires broad AI disclosure in healthcare. Instead, a growing patchwork of state laws is filling that gap. Some states require clear disclosure. Others mandate transparency indirectly through limits on how AI can be used.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Why AI disclosure matters for trust
Transparency is not a technical detail; it is a trust issue. Research across industries shows people expect to be informed when AI affects decisions that matter to them. In healthcare, that expectation is even stronger. An analysis published by CX Today found that when AI use is hidden, trust erodes quickly, even when outcomes are accurate.
Healthcare depends on trust. Patients follow treatment plans, share sensitive information and stay engaged when they believe care decisions are ethical and accountable.
How AI disclosure connects to HIPAA and informed consent
While HIPAA does not directly regulate artificial intelligence, its principles still apply. Covered entities must clearly explain how protected health information is used and safeguarded.
When AI systems analyze or generate clinical information using patient data, nondisclosure can undermine that goal. Patients may not fully understand how their information shapes care decisions.
Disclosure also supports informed consent. Patients have the right to understand material factors influencing diagnosis, treatment, or care communications. Just as clinicians disclose new procedures or medical devices, meaningful AI use should be explained, so patients can ask questions and stay involved in their care.
What does AI disclosure mean in healthcare?
AI disclosure means informing patients or members when artificial intelligence systems are used in healthcare-related decisions. This can include clinical messages, diagnostic support tools, utilization review, claims processing or coverage determinations. The goal is transparency, accountability and patient trust.
Healthcare activities most likely to trigger disclosure
According to analysis from Morgan Lewis, disclosure requirements most often apply when AI is used for:
- Patient-facing clinical communications
- Utilization review and utilization management
- Claims processing and coverage decisions
- Mental health or therapeutic interactions
These areas are considered high impact because they directly affect access to care and understanding of health information.
Risks of failing to disclose AI use
Healthcare organizations that fail to disclose AI use face real consequences. These include increased litigation risk, reputational damage and erosion of patient trust. Ethical concerns around autonomy and transparency can also trigger regulatory scrutiny.
How states are shaping AI disclosure rules
States are taking different paths to regulate healthcare AI, but most are starting with one common goal: greater transparency when technology influences care.
California focuses on communication and coverage decisions
California has taken one of the most comprehensive approaches.
AB 3030 requires clinics and physician offices that use generative AI for patient communications to include a clear disclaimer. Patients must also be told how to reach a human healthcare professional.
SB 1120 applies to health plans and disability insurers. It requires safeguards when AI is used for utilization review. It also mandates disclosure and confirms that licensed professionals make medical necessity decisions.
Colorado regulates high-risk AI systems
Colorado’s SB 24-205 targets AI systems considered high risk. These are tools that materially influence decisions such as the approval or denial of healthcare services.
Entities must implement safeguards against algorithmic discrimination and disclose AI use. While broader than clinical care alone, the law directly affects patient access decisions.
Utah emphasizes mental health and regulated services
Utah has layered disclosure rules that intersect with healthcare.
HB 452 requires mental health chatbots to clearly disclose AI use. SB 149 and SB 226 extend disclosure requirements to regulated occupations, including healthcare professionals.
This approach ensures transparency in therapeutic interactions and clinical services.
Other states that are expanding AI transparency
Several other states are moving in the same direction. Massachusetts, Rhode Island, Tennessee and New York are all considering or enforcing rules that require disclosure and human review when AI influences utilization review or claims outcomes. Even when clinical diagnosis is not covered, these laws push accountability where AI affects care access.
What this means for you
If you are a patient, expect more transparency. You may see disclosures in messages, coverage notices or digital interactions. If you work in healthcare, AI governance is no longer optional. Disclosure practices must align across clinical, administrative, and digital systems. Training staff and updating patient notices will matter as much as the technology itself. Trust will increasingly depend on how openly AI is introduced into care.
Kurt’s key takeaways
AI can improve efficiency, expand access, and support clinicians. Yet its value depends on trust. Disclosure does not slow innovation. It strengthens confidence in both the technology and the professionals who use it. As states continue to act, transparency will likely become the norm rather than the exception in healthcare AI.
If AI helps guide your care, would knowing when and how it is used change the way you trust your healthcare provider? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.