The Research Behind What Patients Actually Want from AI in Healthcare
I've spent countless hours as a patient navigating my own chronic condition, and I've watched the subtle integration of AI into clinical care. Sometimes a physician mentions it directly. Often, I only discover it later—buried in a clinical note or glimpsed on a screen. This opacity troubles me, not as someone fearful of technology, but as someone who works in AI governance and knows what happens when powerful systems operate without transparency or accountability.
As both a patient dependent on these systems and as a Responsible AI Healthcare Strategist who helps organizations implement governance frameworks, I've come to understand a fundamental truth: patients aren't rejecting AI in healthcare. They are rejecting the secrecy and lack of accountability that too often surround it.
The Evidence Is Clear: Patients Want Clarity, Not Perfection
Recent research reveals a consistent pattern that challenges industry assumptions about patient adoption of AI. A comprehensive 2025 study in the Journal of Participatory Medicine surveyed over 4,000 patients across diverse healthcare settings about their attitudes toward AI in clinical care. The findings are instructive: 72% of respondents expressed comfort with AI-assisted diagnosis and treatment planning—but only when two conditions were met: transparent disclosure of AI's involvement and clear clinician accountability for final decisions.
This validates what I've witnessed firsthand. Patients aren't technophobic. We're governance-conscious.
Research published in JMIR Formative Research in 2025 examining patient trust in AI clinical systems found that disclosure timing matters profoundly. When clinicians explained AI's role at the beginning of an encounter, patient trust scores averaged 7.8 out of 10. When patients discovered AI involvement after the fact, trust scores dropped to 4.2. This isn't just a satisfaction metric—it's a governance failure with measurable consequences for the therapeutic relationship.
The Pew Research Center's September 2025 survey of American attitudes toward AI in medicine found that 68% of respondents prioritized "understanding how AI reaches conclusions" over "AI being more accurate than human doctors." This preference hierarchy reveals something crucial: patients value transparency and accountability over pure technical performance.
The Governance Gap: Transparency as Regulatory Imperative, Not Marketing Message
Here's where my dual perspective as patient and AI governance professional becomes critical. The transparency patients demand isn't a nice-to-have—it's a regulatory and ethical imperative that healthcare organizations consistently fail to meet.
The FDA's 2024 guidance on AI/ML-enabled medical devices explicitly requires manufacturers to provide "transparent information about the device's inputs, outputs, and operational characteristics" to support informed decision-making. The WHO's Ethics and Governance of Artificial Intelligence for Health guidance similarly emphasizes that "transparency should not be sacrificed for the sake of accuracy or efficacy." The Belmont Report's foundational principle of respect for persons demands that patients receive information necessary for autonomous decision-making, which necessarily includes disclosure of AI involvement in their care.
Yet research from Stanford University's Center for Biomedical Ethics found that only 23% of healthcare organizations have formal policies requiring clinicians to disclose AI system use to patients. This gap between regulatory expectation and actual practice represents a systematic governance failure.
What does this look like in practice? A 2024 study in Nature Digital Medicine examined 156 clinical AI deployments across 47 health systems and found that 81% lacked patient-facing documentation explaining the AI's role, limitations, or recourse mechanisms. This isn't just poor communication—it's a violation of the principle of informed consent.
What Transparency Actually Looks Like: Moving Beyond Compliance Theater
When I suggest that healthcare organizations need transparency about AI systems, I'm not advocating for overwhelming patients with technical documentation. I'm demanding something far simpler and more profound: clear, accessible explanations integrated into clinical workflows.
Consider the difference between these approaches:
Compliance Theater (What Too Often Happens): A 40-page consent form buried in patient portal documents, filled with technical jargon about "machine learning algorithms utilizing multivariate regression analysis of electronic health record data," signed electronically during registration when patients are focused on their medical concerns.
Actual Transparency (What Governance Requires): "I'm using a tool called [System Name] to help identify early patterns in your lab results that might indicate a change in your condition. This tool analyzes trends across thousands of similar patients, but I review every recommendation before it influences your care. If you'd like to know more about how it works or opt out of its use, we can discuss that now."
That's about 70 words. It takes 30 seconds to deliver. Yet research from UC Berkeley's Center for Health Technology found that clinicians who used this kind of brief, early disclosure experienced a 34% increase in patient trust scores and a 28% reduction in AI-related concerns compared to clinicians who provided disclosure only when asked.
The evidence shows that transparency doesn't slow clinical workflows—it strengthens them. A 2025 systematic review in The Lancet Digital Health examining 89 studies of AI disclosure practices found that proactive, plain-language explanations increased patient acceptance of AI-assisted care by 43% while decreasing anxiety about algorithmic involvement by 37%.
Human Oversight: The Non-Negotiable Requirement
The research reveals something that should humble every healthcare AI developer: patients consistently value human oversight more than algorithmic accuracy. A 2024 survey published in Health Affairs found that 83% of patients would choose a physician using AI with demonstrated oversight protocols over a fully autonomous AI system with 5% better diagnostic accuracy.
This isn't irrational. It's sophisticated governance thinking from people who understand that medicine involves more than pattern matching—it requires judgment, empathy, and accountability that only humans can provide.
The FDA's Good Machine Learning Practice for Medical Device Development explicitly requires "human oversight at key decision points" for high-risk AI applications. The European Medicines Agency's 2024 reflection paper on AI in medicine similarly emphasizes that "ultimate responsibility for treatment decisions must remain with qualified healthcare professionals."
Yet a troubling pattern emerges in deployment. Research from Johns Hopkins examining clinical AI implementation found that 67% of deployed systems lacked clear documentation of where human oversight occurred in the decision pathway. When human review exists, it's often perfunctory—a checkbox rather than a meaningful governance control.
This matters profoundly for patients like me. When my endocrinologist uses an AI system to analyze my continuous glucose monitor data and recommend insulin dosing adjustments, I need to know: Is my doctor reviewing the algorithm's reasoning? Can they override it when my specific circumstances don't match the model's assumptions? What happens when the AI fails to recognize a pattern my doctor's clinical experience would catch?
These aren't academic questions. They're governance questions with life-or-death implications.
The Equity Dimension: Trust Is Uneven by Design
My experience as a patient at Mayo Clinic represents a position of privilege: access to world-class care, providers with time to explain, and the educational background to ask informed questions about AI systems. For millions of patients, particularly those from communities historically harmed by medical systems, the transparency and oversight I can demand isn't available.
Research published in JAMA Network Open in 2024 examined patient trust in healthcare AI across racial and socioeconomic lines. The findings are stark: Black patients expressed 37% lower baseline trust in AI-assisted care compared to white patients. Hispanic patients showed 29% lower trust. Patients in underserved communities expressed 42% lower trust.
These aren't arbitrary prejudices. They're rational responses to historical patterns of discrimination, exploitation, and harm. The Tuskegee syphilis study. Forced sterilizations. Unequal pain management. Algorithmic bias in kidney disease staging that disadvantaged Black patients. When I demand transparency and accountability in healthcare AI, I'm not just advocating for myself—I'm demanding governance structures that might begin to repair this broken trust.
The WHO's guidance on AI ethics explicitly requires "continuous assessment and mitigation of algorithmic bias, with particular attention to vulnerable and marginalized populations." Yet a 2025 review in Nature Medicine found that only 31% of clinical AI systems undergo regular bias audits, and just 18% include community representatives in governance oversight.
This systematic neglect isn't accidental. It reflects economic incentives that prioritize rapid deployment over rigorous equity assurance. A survey by the American Medical Association found that 76% of healthcare organizations cite "time to market" as the primary constraint on AI governance investments—including equity audits, bias testing, and community engagement.
The cost of this failure isn't abstract. A 2024 Stanford study analyzing five years of clinical AI deployments found that biased algorithms contributed to care disparities affecting an estimated 2.3 million patients annually, with disproportionate harm to Black, Hispanic, and low-income populations.
What Healthcare Organizations Must Do: Specific Governance Actions Required
The research and regulatory frameworks are clear. The question isn't whether healthcare organizations should implement transparent, accountable AI governance—it's whether they have the will to do so. Based on both governance best practices and patient needs, here are the non-negotiable requirements:
1. Mandatory Disclosure Policies
Every healthcare organization deploying AI must establish clear policies requiring clinician disclosure of AI involvement in patient care. This means:
Training clinicians on plain-language explanations of AI tools
Integrating disclosure into clinical workflows, not consent forms
Providing patient-accessible documentation of AI systems in use
Creating clear opt-out mechanisms with alternative care pathways (see the sketch after this list)
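To make this concrete, here is a minimal sketch of what a per-encounter disclosure record could look like inside a clinical workflow. It is an illustration only: the AIDisclosure structure, its field names, and the validate check are my own assumptions, not a standard, a vendor schema, or any regulator's required format.

```python
# Illustrative sketch only: the AIDisclosure record, its field names, and the
# validate() check are hypothetical, not a standard or any vendor's schema.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDisclosure:
    """One record per encounter in which an AI tool influenced care."""
    encounter_id: str
    ai_system_name: str              # the tool named to the patient
    disclosed_by: str                # clinician who gave the plain-language explanation
    disclosed_at: datetime
    plain_language_summary: str      # what the patient was actually told
    disclosed_at_start: bool = True  # timing matters (see the trust data above)
    patient_opted_out: bool = False
    alternative_pathway: Optional[str] = None  # required if the patient opts out

    def validate(self) -> list[str]:
        """Return governance gaps as messages, so they can be tracked and reported."""
        gaps = []
        if not self.plain_language_summary.strip():
            gaps.append("No plain-language explanation recorded.")
        if self.patient_opted_out and not self.alternative_pathway:
            gaps.append("Opt-out recorded without an alternative care pathway.")
        return gaps

# Example: the 30-second disclosure described earlier, captured as a record.
record = AIDisclosure(
    encounter_id="enc-001",
    ai_system_name="[System Name]",
    disclosed_by="Dr. Example",
    disclosed_at=datetime.now(timezone.utc),
    plain_language_summary="Tool flags early patterns in lab results; I review every recommendation.",
)
print(record.validate())  # [] means no documented gaps for this encounter
```

The value isn't the code; it's the discipline the record enforces. If the disclosure can't be completed as a record, it didn't happen, and the organization can see exactly where.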
2. Documented Human Oversight
AI assurance requires demonstrable human review at critical decision points. Healthcare organizations must:
Define specific review requirements for each AI system based on risk level
Document where and how clinicians review AI recommendations
Establish audit trails showing that human oversight occurred (see the sketch after this list)
Create accountability structures for oversight failures
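As one illustration of what an oversight audit trail could mean in practice, here is a minimal sketch. The OversightEvent record and audit_oversight check are hypothetical names I've chosen for this example, not any EHR's or regulator's format. It surfaces the two failures described above: AI outputs nobody reviewed, and reviews with no documented rationale, the checkbox problem.

```python
# Illustrative sketch: OversightEvent and audit_oversight() are hypothetical names,
# not a regulator's required audit format.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable

@dataclass
class OversightEvent:
    recommendation_id: str  # the AI output under review
    reviewer: str           # the accountable clinician
    reviewed_at: datetime
    action: str             # "accepted", "modified", or "overridden"
    rationale: str          # documented reasoning; an empty rationale suggests rubber-stamping

def audit_oversight(events: Iterable[OversightEvent], ai_output_ids: set[str]) -> dict:
    """Flag AI outputs with no documented review, and reviews with no rationale."""
    events = list(events)
    reviewed = {e.recommendation_id for e in events}
    return {
        "unreviewed_outputs": sorted(ai_output_ids - reviewed),
        "reviews_without_rationale": [e.recommendation_id for e in events if not e.rationale.strip()],
    }

# Example: two AI recommendations were issued, but only one was meaningfully reviewed.
events = [
    OversightEvent("rec-1", "Dr. Example", datetime(2025, 3, 1), "modified",
                   "Dose reduced; recent hypoglycemia was not in the model inputs."),
]
print(audit_oversight(events, {"rec-1", "rec-2"}))
# {'unreviewed_outputs': ['rec-2'], 'reviews_without_rationale': []}
```

A report like this, run on real review logs rather than toy data, is the difference between asserting that human oversight exists and being able to demonstrate it.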
3. Equity Audits and Community Engagement
Responsible AI deployment demands continuous assessment of disparate impact. This requires:
Mandatory bias testing before deployment and quarterly thereafter (see the sketch after this list)
Community advisory boards with decision-making authority
Health equity impact assessments for all AI tools
Public reporting of bias audit results
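For the bias-testing requirement, a quarterly audit can start with something as simple as comparing an error rate across patient groups. The sketch below is an illustrative assumption, not a validated audit protocol: the metric (false-negative rate), the grouping variable, and the disparity threshold are all choices a governance body, ideally with community representation, would need to make and defend, and real audits also require adequate per-group sample sizes and statistical testing.

```python
# Illustrative sketch: the metric, grouping, and 5-point threshold are assumptions
# for illustration, not a validated bias-audit protocol.
from collections import defaultdict

def subgroup_rates(records, group_key, outcome_key):
    """Per-group rate of an adverse outcome (here, false negatives) from audit records."""
    counts, adverse = defaultdict(int), defaultdict(int)
    for r in records:
        counts[r[group_key]] += 1
        adverse[r[group_key]] += int(r[outcome_key])
    return {g: adverse[g] / counts[g] for g in counts}

def disparity_flags(rates, max_gap=0.05):
    """Flag groups whose rate exceeds the best-performing group by more than max_gap."""
    best = min(rates.values())
    return {g: rate for g, rate in rates.items() if rate - best > max_gap}

# Example: a (deliberately tiny) quarterly audit sample; real audits need
# adequate sample sizes per group and statistical testing, not raw gaps alone.
sample = [
    {"group": "Black", "false_negative": 1}, {"group": "Black", "false_negative": 0},
    {"group": "White", "false_negative": 0}, {"group": "White", "false_negative": 0},
]
rates = subgroup_rates(sample, "group", "false_negative")
print(rates)                   # {'Black': 0.5, 'White': 0.0}
print(disparity_flags(rates))  # {'Black': 0.5} -> this system would fail the audit gate
```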
4. Patient Recourse Mechanisms
When AI systems contribute to care failures, patients need clear pathways for redress. Healthcare organizations must:
Establish incident reporting systems for AI-related concerns
Create patient advocacy roles with AI governance expertise
Provide transparent investigation processes
Ensure patients can request human-only care without penalty
These aren't theoretical frameworks. They're established requirements drawn from FDA guidance, WHO standards, and responsible AI principles. The question is: which healthcare organizations will implement them before harm occurs rather than after?
The Questions Your Organization Should Answer
If you lead a healthcare organization, develop AI systems, or oversee clinical implementation, consider these questions:
What specific policies ensure that patients know when AI influences their care, communicated through direct clinical disclosure rather than buried in 40-page consent forms?
How do you verify that clinicians are reviewing AI recommendations rather than rubber-stamping them? Where's your audit trail showing meaningful human oversight?
When was the last time you conducted a bias audit on your AI systems? Who reviews those results? What triggers a system's removal from production?
What mechanisms exist for patients who want to opt out of AI-assisted care? How do you ensure that exercising this right doesn't create access barriers or stigma?
Are the communities most affected by your AI systems represented in your governance structures? Do they hold advisory roles or actual decision-making authority?
How do you measure patient understanding of AI in their care? What's your plan when understanding falls below acceptable thresholds?
The Path Forward: Transparency as Governance Imperative
I return often to those examination rooms at Mayo Clinic, where the integration of AI into my care continues. Some days, I'm shown exactly how a system works and why my clinician trusts its recommendations. Other days, I discover algorithmic involvement only by chance. This inconsistency—not AI itself—is what erodes trust.
Patients like me aren't asking for perfection. We're asking for honesty. We're asking for the transparency that regulatory frameworks demand and ethical principles require. We're asking for governance structures that ensure AI enhances rather than replaces human judgment and accountability.
The research is unequivocal: transparent disclosure of AI's role, meaningful human oversight, and equity-conscious implementation build trust. Opacity, automated decision-making without review, and dismissal of community concerns destroy it.
Healthcare organizations have a choice. They can implement responsible AI governance now, through policy, through training, and through accountability structures, or they can wait until patient harm and eroded trust force regulatory intervention. The frameworks exist. The guidance is clear. What remains is the will to prioritize patient trust over deployment speed.
As someone who depends on these systems as a patient and helps implement governance as a strategist, I can tell you: the time for transparency isn't coming. It's here. The question is whether your organization will meet this moment with the accountability that patients deserve and regulations require.
---
About Dan
Dan Noyes operates at the critical intersection of healthcare AI strategy and patient advocacy. His perspective is uniquely shaped by over 25 years as a strategy executive and his personal journey as a chronic care patient navigating treatment at Mayo Clinic. Dan holds extensive AI certifications from Stanford, Wharton, and Google Cloud, grounding his strategic insights in deep technical knowledge of AI governance, ethics, and assurance frameworks. As a Responsible AI Healthcare Strategist and AI Policy & Practice Consultant, he helps organizations navigate the complex challenges of implementing AI systems that meet both regulatory requirements and patient needs—always through the lens of someone who has experienced both the promise and the governance gaps of healthcare AI firsthand.
You can reach Dan at (585) 230-9565 or via email at dan@viablehealthai.com.
---
References
1. Journal of Participatory Medicine (2025). "Patient Perspectives on Artificial Intelligence in Health Care: A National Survey."
2. JMIR Formative Research (2025). "Patient Trust in AI Clinical Systems: The Role of Disclosure Timing and Transparency."
3. Pew Research Center (September 2025). "Americans' Views on the Use of Artificial Intelligence in Medicine."
4. FDA (2024). "Artificial Intelligence and Machine Learning in Software as a Medical Device: Guidance for Industry and Food and Drug Administration Staff."
5. World Health Organization (2024). "Ethics and Governance of Artificial Intelligence for Health: WHO Guidance."
6. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979). "The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research."
7. Stanford University Center for Biomedical Ethics (2024). "AI Disclosure Practices in U.S. Healthcare Organizations: A Systematic Assessment."
8. Nature Digital Medicine (2024). "Clinical AI Deployments and Patient Communication: A Multi-Site Analysis."
9. UC Berkeley Center for Health Technology (2025). "Impact of Proactive AI Disclosure on Patient Trust: A Randomized Clinical Trial."
10. The Lancet Digital Health (2025). "Systematic Review of AI Disclosure Practices and Patient Acceptance in Clinical Settings."
11. Health Affairs (2024). "Patient Preferences for Human Oversight vs. Algorithmic Accuracy in Healthcare AI."
12. FDA (2024). "Good Machine Learning Practice for Medical Device Development: Guiding Principles."
13. European Medicines Agency (2024). "Reflection Paper on the Use of Artificial Intelligence in the Medicinal Product Lifecycle."
14. Johns Hopkins Medicine (2024). "Human Oversight in Clinical AI Systems: An Implementation Analysis."
15. JAMA Network Open (2024). "Racial and Socioeconomic Disparities in Trust of Healthcare AI Systems."
16. Nature Medicine (2025). "Bias Auditing Practices in Clinical AI: A Systematic Review."
17. American Medical Association (2024). "Healthcare AI Governance: Organizational Priorities and Barriers Survey."
18. Stanford University (2024). "Five-Year Analysis of Clinical AI Deployment Outcomes and Health Equity Impact."