Understanding ELSA
ELSA is an internal generative AI tool developed by the FDA. Its primary objective is to enhance the efficiency and speed of regulatory processes by assisting FDA staff.
The U.S. Food and Drug Administration (FDA) officially launched its new generative Artificial Intelligence (AI) tool, ELSA, on June 2, 2025; the agency has not released an official expansion of the name. ELSA is designed to assist FDA staff with several core functions and represents a significant step in integrating AI into regulatory science. This series of briefs summarizes ELSA's purpose and its anticipated impact on patients, including myself, and offers a critical perspective on its advantages and disadvantages, drawing on scientific viewpoints from leading institutions such as Harvard Medical School, Massachusetts General Hospital, and Stanford University.
What is ELSA?
ELSA is an internal generative AI tool developed by the FDA. Its primary objective is to enhance the efficiency and speed of regulatory processes by assisting FDA staff with a range of tasks, including:
Reviewing clinical protocols: Expediting the initial assessment of study designs.
Summarizing adverse event reports: Quickly synthesizing large volumes of safety data.
Comparing product labels: Identifying discrepancies or similarities across different product information.
Identifying high-priority inspection targets: Using data to inform where regulatory oversight is most needed.
The FDA states that ELSA operates within a secure GovCloud environment and is designed not to train on industry-submitted data, aiming to maintain data integrity and security.
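The FDA has not published ELSA's architecture, but the adverse-event summarization task in the list above maps onto a familiar generative AI pattern: feed a batch of safety narratives to a large language model with a tightly constrained instruction. The sketch below is purely illustrative; the OpenAI client, model name, and prompt are assumptions for demonstration and are not how ELSA is actually built (ELSA runs inside the FDA's secure GovCloud environment).

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; illustrative only, not ELSA's stack

def summarize_adverse_events(reports: list[str], product: str) -> str:
    """Condense a batch of adverse event narratives into a short safety summary."""
    joined = "\n\n---\n\n".join(reports)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; ELSA's underlying model is not public
        messages=[
            {"role": "system",
             "content": "You summarize adverse event reports for a drug safety reviewer. "
                        "List the most frequent event types, any serious outcomes, and "
                        "anything that warrants follow-up. Do not speculate."},
            {"role": "user",
             "content": f"Product: {product}\n\nReports:\n{joined}"},
        ],
        temperature=0.0,  # keep the summary as deterministic as possible
    )
    return response.choices[0].message.content
```

In a real review workflow, a sketch like this would sit behind strict access controls and produce drafts for a human reviewer rather than final conclusions.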
Confronting Bias, Safety, and Governance in Healthcare AI
Summary: Best practices in confronting data bias, system safety, and data governance in your medical AI solutions.
In the quiet waiting rooms of Mayo Clinic, where I have spent countless hours as a patient navigating my own chronic condition, I have watched the slow transformation of medicine unfold before me. The physicians who care for me now consult algorithms as readily as they do stethoscopes. Electronic health records flash across screens, and diagnostic tools powered by artificial intelligence offer recommendations with mathematical certainty. Yet beneath this technological marvel lies a more complex truth: the same systems designed to heal us may also perpetuate the very inequities and biases we thought we had left behind. Mayo Clinic is leading the way in confronting these issues, but the same cannot be said of every medical institution I have encountered.
The promise of AI in healthcare is undeniable: precision diagnostics that can detect the subtleties my own condition demands, optimized treatments tailored to individual genetic profiles, and administrative efficiencies that could return precious time to the patient-physician relationship. But as someone who has lived within this system, who has felt both its embrace and its limitations, I understand that our greatest technological achievements carry within them our deepest human flaws. The ethical imperatives we face—addressing algorithmic bias, ensuring patient safety, and establishing robust governance—are not merely technical challenges. They are fundamentally questions about the kind of care we wish to receive when we are most vulnerable.
The Inherited Patterns of Bias
AI systems, like medical students, learn from the data they are given. And if that data reflects decades of healthcare disparities, the AI will perpetuate and amplify those same inequities with algorithmic precision. This is not a distant concern but a present reality that affects real patients seeking care today.
Consider the patient with darker skin who presents with concerning lesions. An AI diagnostic tool trained predominantly on images of light-skinned individuals may fail to recognize the subtle manifestations of skin cancer in this patient, leading to delayed diagnosis and poorer outcomes. Or consider the algorithm designed to predict healthcare costs rather than illness severity. This seemingly reasonable approach inadvertently directs sicker Black patients away from necessary interventions because historical spending patterns reflect, rather than correct for, systemic disparities.
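The cost-as-proxy failure described above, documented by Obermeyer and colleagues (see References), can be illustrated with a few lines of simulation. All numbers here are invented purely to show the mechanism: when one group historically receives less spending at the same level of illness, a model that predicts cost will under-select that group for care.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative simulation only: two groups with identical underlying illness burden.
group = rng.integers(0, 2, size=n)                 # 0 = group A, 1 = group B
illness = rng.gamma(shape=2.0, scale=1.0, size=n)  # true severity, same distribution

# Historical spending is lower for group B at the same severity (access barriers).
cost = illness * np.where(group == 1, 0.6, 1.0) + rng.normal(0, 0.1, size=n)

# The "risk score" is trained to predict cost; here it is simply the cost itself.
risk_score = cost

# Select the top 10% by risk score for a care-management program.
cutoff = np.quantile(risk_score, 0.90)
selected = risk_score >= cutoff

for g, name in [(0, "group A"), (1, "group B")]:
    picked = selected & (group == g)
    print(f"{name}: mean illness of selected = {illness[picked].mean():.2f}, "
          f"selection rate = {picked.sum() / (group == g).sum():.1%}")
# Group B members must be sicker to be selected, and fewer are selected at all,
# despite both groups having identical illness distributions.
```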
These are not hypothetical scenarios but documented failures that reveal how deeply embedded biases can become encoded in our most sophisticated tools. The sources of such bias are manifold: training datasets that lack diversity across racial, ethnic, gender, and socioeconomic lines; human prejudices that seep into data labeling and problem selection; the use of proxy variables that correlate with protected characteristics. In my own experience as a patient, I have witnessed how zip codes, insurance types, and referral patterns can influence care pathways—patterns that AI systems learn and perpetuate without the conscious bias that might, at least, be recognized and challenged.
For those of us who depend on these systems—as patients and as providers—understanding these sources of bias becomes a matter of survival. We must be vigilant not only in identifying discriminatory impacts but in demanding the transparency and accountability that can prevent them.
The Precarious Balance of Safety
The hospital room where I have received treatment is a testament to the extraordinary safety protocols that medicine has developed over decades. Yet as AI increasingly influences clinical decisions, new categories of risk emerge that challenge our traditional approaches to patient safety.
While AI offers the potential to reduce human error through automation and enhanced diagnostic capabilities, it also introduces novel risks of its own. System malfunctions can occur in ways that are difficult to predict or understand. More insidiously, "automation bias" can lead clinicians to over-rely on AI recommendations, potentially diminishing the critical thinking that has always been medicine's greatest safeguard.
I have observed the subtle dance between physician and algorithm in my care, the moment of hesitation when a recommendation doesn't align with clinical intuition, the careful weighing of data against experience. But what happens when that dance becomes too trusting, when the algorithm's confidence overrides human judgment? An AI system that misinterprets the complexities of a chronic condition like mine, leading to an incorrect diagnosis or suboptimal treatment plan, could have consequences that extend far beyond a single clinical encounter.
The challenge is compounded by the "black box" nature of many advanced AI models, where the reasoning behind a decision remains opaque even to the clinicians who must act on it. ECRI, a leading patient safety organization, has consistently identified AI as one of the top threats to patient safety, emphasizing the urgent need for safeguards that can keep pace with technological advancements.
In my experience as a patient, I have learned that safety in healthcare is built on relationships—the trust between patient and provider, the transparency of communication, the shared understanding of risks and benefits. As we integrate AI into this delicate ecosystem, we must ensure that these fundamentally human elements are not lost but enhanced.
The Architecture of Accountability
Effective governance of healthcare AI is not merely about policy documents and regulatory compliance—though these are essential. It is about creating frameworks that honor the trust patients place in the healthcare system when they are at their most vulnerable.
The essential elements of such governance begin with transparency. AI systems must be transparent about their decision-making processes, the data they utilize, and the confidence levels associated with their outputs. This includes clear documentation, rigorous version control, and comprehensive audit trails that allow for meaningful oversight.
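As a concrete illustration of what documentation and audit trails can look like at the level of a single decision, here is a minimal, hypothetical record schema. The fields are assumptions for illustration, not a published standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """One auditable entry per AI-assisted recommendation (illustrative schema)."""
    model_name: str       # which model produced the output
    model_version: str    # exact version, so the decision can be reproduced
    input_summary: str    # de-identified description of the inputs used
    output: str           # the recommendation shown to the clinician
    confidence: float     # model-reported confidence, if available
    reviewed_by: str      # clinician who accepted, modified, or rejected it
    action_taken: str     # "accepted" | "modified" | "rejected"
    timestamp: str        # when the recommendation was made

record = AIDecisionRecord(
    model_name="sepsis-risk-model",          # hypothetical example
    model_version="2.3.1",
    input_summary="vitals and labs from the last 24 hours, de-identified",
    output="elevated sepsis risk; recommend lactate and blood cultures",
    confidence=0.87,
    reviewed_by="clinician_0421",
    action_taken="accepted",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

print(json.dumps(asdict(record), indent=2))  # in practice, append to an immutable audit log
```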
Accountability requires clear lines of responsibility for AI outcomes. When an AI system makes an error, we must know who is responsible—the developer who created the algorithm, the healthcare organization that deployed it, the clinician who acted on its recommendation, or some combination of all three. This is not about assigning blame but about ensuring that ethical conduct and patient well-being remain at the center of our technological advancement.
Fairness and bias mitigation must be built into the governance structure from the beginning, not added as an afterthought. This entails mandating inclusive data collection practices, implementing continuous monitoring for algorithmic bias, and establishing mechanisms for timely intervention when issues are identified.
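One hedged sketch of what "continuous monitoring" can mean in practice: periodically compare a deployed model's sensitivity across patient subgroups and raise a flag when the gap exceeds an agreed threshold. The metric and the 0.05 threshold below are illustrative assumptions, not a clinical standard, and real programs would choose fairness criteria together with clinicians, ethicists, and affected communities.

```python
import numpy as np

def subgroup_tpr_gap(y_true, y_pred, groups) -> dict:
    """Compare true positive rates (sensitivity) across subgroups (illustrative check)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    tprs = {}
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 1)   # true positives among this subgroup
        tprs[str(g)] = float(y_pred[mask].mean()) if mask.any() else float("nan")
    gap = max(tprs.values()) - min(tprs.values())
    return {"tpr_by_group": tprs, "gap": gap, "flag": gap > 0.05}  # 0.05 is an assumed threshold

# Toy example: the model misses far more true cases in group "B".
report = subgroup_tpr_gap(
    y_true=[1, 1, 1, 1, 1, 1, 0, 0],
    y_pred=[1, 1, 1, 0, 0, 0, 0, 0],
    groups=["A", "A", "A", "B", "B", "B", "A", "B"],
)
print(report)  # {'tpr_by_group': {'A': 1.0, 'B': 0.0}, 'gap': 1.0, 'flag': True}
```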
Patient-centered policies must ensure that individuals have a meaningful say in how AI is used in their care. Patients should have the right to understand when AI is involved in their treatment, to access information about how these systems work, and to opt out if they are uncomfortable with algorithmic decision-making.
Ultimately, compliance programs must keep pace with established privacy regulations such as HIPAA and GDPR, as well as emerging AI-specific rules, to safeguard patient privacy and data integrity in an increasingly interconnected healthcare landscape.
The sobering reality is that comprehensive AI governance frameworks remain relatively uncommon in healthcare organizations. This gap between the pace of technological adoption and the development of ethical safeguards represents one of the most pressing challenges facing healthcare today.
Tools for Ethical Practice
The ethical considerations surrounding healthcare AI are not merely theoretical—they influence the practical utility and effectiveness of the tools that shape patient care. Consider MedGemma, a multimodal AI model that can interpret medical imaging and synthesize information from electronic health records. Its openly released, locally deployable nature offers several advantages for ethical implementation.
By promoting data sovereignty, MedGemma enables hospitals and clinics to deploy AI on local servers, thereby maintaining control over sensitive patient data and avoiding exposure to third-party cloud services. This addresses significant privacy concerns that many patients, myself included, have about how our medical information is stored and accessed.
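A minimal sketch of what local deployment can look like, assuming the hospital has downloaded an open-weight medical model (such as a text variant of MedGemma) to its own server and uses the Hugging Face transformers library. The local path, prompt, and clinical note are invented for illustration; the point is that inference runs on local hardware, so no patient data leaves the premises.

```python
# Illustrative local-deployment sketch: inference stays on the hospital's own servers.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="/opt/models/medical-llm",  # assumed local path to downloaded open weights, not a cloud API
    device_map="auto",                # place the model on available local GPUs/CPU
)

note = (
    "62-year-old with poorly controlled type 2 diabetes, HbA1c 9.4%, "
    "new onset numbness in both feet."
)
prompt = f"Summarize the key clinical concerns in this note for a primary care visit:\n{note}"

result = generator(prompt, max_new_tokens=150, do_sample=False)
print(result[0]["generated_text"])
```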
The open-source nature facilitates customization and transparency, potentially enabling local teams to audit for biases relevant to their specific patient populations and to gain a deeper understanding of the model's inner workings. This transparency is crucial for building the trust that effective patient-provider relationships require.
Perhaps most importantly, by democratizing access to advanced diagnostic capabilities, such tools could bring cutting-edge technology to underfunded hospitals and rural clinics, potentially reducing disparities in access to high-quality care. However, this accessibility also raises important questions about institutional responsibility: if powerful, free tools exist that could improve patient outcomes, what is the ethical obligation of healthcare organizations to adopt them?
Similarly, advanced research platforms like Co-Scientist contribute to the ethical use of AI by enhancing evidence-based practice. By rapidly synthesizing vast amounts of medical literature, such tools can help ensure that clinical decision support systems are informed by the most current and comprehensive evidence, potentially reducing treatment variations and improving patient safety across different healthcare settings.
The Path Forward
As I sit in examination rooms, watching my physicians navigate the intersection of human judgment and algorithmic insight, I am struck by both the tremendous potential and the profound responsibility that AI brings to healthcare. The promise is real—more accurate diagnoses, personalized treatments, reduced medical errors, and expanded access to high-quality care. But the realization of this promise depends entirely on our commitment to addressing the ethical challenges that accompany these powerful tools.
Addressing algorithmic bias, rigorously safeguarding patient safety, and establishing robust governance frameworks are not optional considerations—they are foundational requirements for a healthcare system worthy of the trust patients place in it. For those of us who depend on this system, whether as patients or providers, understanding these challenges and actively engaging in dialogue about them is not just professional responsibility—it is a moral imperative.
The future of healthcare AI will be shaped by the choices we make today. By demanding transparency, advocating for fairness, and insisting on clear accountability, we can help ensure that AI truly serves as a powerful ally in delivering equitable, safe, and ultimately more human patient care for all. Ultimately, the most sophisticated algorithm must serve the most fundamental human need: to be cared for with dignity, compassion, and wisdom when we are most in need of healing.
References
Morley, J., & Floridi, L. (2020). An ethically mindful approach to AI for Health Care. SSRN Electronic Journal.
World Health Organization. (2024). Ethics and governance of artificial intelligence for health. Guidance on large multi-modal models. WHO Press.
Gerke, S., et al. (2022). Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Frontiers in Surgery, 9.
Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
ECRI. Top 10 Health Technology Hazards (annual report series; recent editions highlight AI safety concerns).
Miller, D. D., & Brown, E. W. (2018). Artificial Intelligence in Medical Practice: The Question to the Answer? American Journal of Medicine, 131(2), 129-133.
Manceps. (2025, July 7). MedGemma: A New Era for Healthcare AI.
Google Research. (2025, February 18). Towards an AI co-scientist.
Patients’ AI Bill of Rights - The Foundation
This brief grounds the concept of a Patients' AI Bill of Rights in the ethical principles of respect for persons, beneficence, and justice, as outlined in the Belmont Report.
Executive Summary
With the rapid development and adoption of AI technologies, I find that patients are seldom part of the discussion. Since being diagnosed with a chronic medical condition, I have become a foster child of medicine and hospital administration. I am a certified Patient Leader, volunteering weekly at my local hospital; I am a member of HIMSS and hold certifications in healthcare AI from Stanford University School of Medicine and Johns Hopkins. In other words, I see healthcare AI from a singular perspective. So while others have published reports on a healthcare AI bill of rights, for me, it is personal.
Frankly, the application of AI technologies could have prevented a medical error that nearly cost me my life. So when I talk about a Patient's AI Bill of Rights, I see both the necessity and the opportunity to open a discussion with patients, one that can strengthen the doctor-patient relationship and improve care and communication.
This report examines the enduring relevance of the Belmont Report's core ethical principles, Respect for Persons, Beneficence, and Justice, as a foundation for the responsible development and deployment of Artificial Intelligence (AI) in healthcare.
Beyond the Algorithm: Forging a Patient-Centric Future with an AI Bill of Rights
Artificial Intelligence is not on the horizon; it’s in our clinics, our diagnostic labs, and our administrative offices. AI is rapidly transforming the very landscape of medicine, offering a future of incredible precision, efficiency, and personalized care. From analyzing vast datasets to detect illness earlier to automating routine tasks, its potential to enhance global health outcomes is immense.
However, with this great promise comes a profound responsibility. The integration of AI introduces complex ethical challenges, including patient privacy, algorithmic bias, and the "black box" problem, where AI's decision-making process is dangerously opaque. This lack of transparency is a direct threat to patient trust—the very bedrock of healthcare.
To navigate this new frontier, we don't need to invent a new moral compass. We need to adapt a proven one. The 1979 Belmont Report, with its core principles of Respect for Persons, Beneficence, and Justice, provides the essential ethical framework to guide us. My analysis translates these foundational duties into a clear, actionable AI Patients' Bill of Rights designed to empower patients and guide responsible innovation.
The First Principle: Respect for Persons
Core Tenet: Treat individuals as autonomous agents and protect those with diminished autonomy.
In the age of AI, this principle is tested in new and critical ways. Respect is not passive; it is an active acknowledgment of a patient's right to make informed decisions about their own body and data. The primary challenge is the "black box" nature of many AI systems, which fundamentally conflicts with true informed consent. If a patient cannot understand how an AI uses their data to arrive at a recommendation, their consent is compromised.
This leads to a core vulnerability: AI paternalism, where an algorithm prioritizes goals that may not align with a patient's personal values, stripping them of control and agency. Furthermore, we must recognize that AI creates new categories of vulnerable populations—not just children or the elderly, but those on the wrong side of the digital divide, who may lack the technological literacy to navigate this new landscape.
The Second Principle: Beneficence & Non-Maleficence
Core Tenets: Maximize benefits and, above all, do no harm.
The potential benefits of AI are staggering. It can enhance diagnoses, personalize treatments, improve healthcare access in underserved areas, and increase operational efficiency, freeing clinicians to focus on complex patient needs. Emerging applications can even accelerate mRNA treatment development and transform immune cells into cancer killers.
But these benefits cannot come at the cost of patient safety. The risks are significant: algorithmic errors, data breaches, and unintended consequences that can exacerbate health disparities. To honor the principle of "do no harm," we must move beyond reactive fixes. A robust, risk-based framework is essential, where high-risk AI tools (like those in autonomous surgery) undergo the most stringent testing, monitoring, and oversight.
Crucially, AI must always augment, not replace, human clinical judgment. The "human-in-the-loop" is not just for accountability; it is a critical fail-safe to catch the errors and biases that automated systems will inevitably miss, ensuring we uphold our most sacred oath.
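A sketch of how a risk-based framework and a human-in-the-loop requirement might be wired together in software follows. The tiers, examples, and confidence threshold are assumptions for illustration, loosely inspired by risk-based regulatory approaches rather than drawn from any specific standard.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"            # e.g., scheduling, transcription drafts
    MODERATE = "moderate"  # e.g., triage suggestions reviewed in routine workflow
    HIGH = "high"          # e.g., diagnosis, treatment selection, autonomous action

def route_ai_output(tier: RiskTier, output: str, confidence: float) -> str:
    """Illustrative gate: the higher the risk, the more human review is required."""
    if tier is RiskTier.HIGH:
        return f"HOLD for clinician sign-off before any action: {output}"
    if tier is RiskTier.MODERATE or confidence < 0.8:  # 0.8 is an assumed threshold
        return f"Show to clinician as a suggestion only: {output}"
    return f"Auto-apply and log for retrospective audit: {output}"

print(route_ai_output(RiskTier.HIGH, "recommend chemotherapy regimen X", 0.95))
print(route_ai_output(RiskTier.LOW, "draft visit summary", 0.92))
```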
The Third Principle: Justice
Core Tenet: Ensure the fair distribution of benefits and risks.
Perhaps the greatest danger of improperly governed AI is its potential to amplify and entrench existing societal biases.
Algorithmic bias is not a technical glitch; it is a reflection of systemic inequity encoded into logic. When AI is trained on non-representative data, it produces skewed and unfair results.
We've already seen this happen. Skin cancer algorithms perform less accurately on darker skin tones because of biased training data. An algorithm designed to predict healthcare needs assigned lower risk scores to Black patients because it used cost of care as a proxy for sickness, failing to account for the fact that less money is historically spent on their care due to systemic barriers. This is not just bad data; it is injustice, scaled by technology.
Mitigating this requires a multi-faceted approach: intentionally inclusive data collection, regular bias audits, multi-stakeholder review boards, and a commitment to Explainable AI (XAI). While AI can be a tool to reduce disparities by identifying and targeting interventions, this outcome is not automatic. Justice must be by design.
A Proposed AI Patients' Bill of Rights
Synthesizing these principles, I propose the following rights as the cornerstone of patient-centric healthcare AI. This is not a checklist but an integrated framework where each right reinforces the others, creating a comprehensive safety net for patients.
The Right to Informed Consent for AI Use: You must be told when AI is being used in your care and must consent to it.
The Right to Understand AI's Role: You have the right to a clear, plain-language explanation of what the AI does, its limitations, and how it uses your data.
The Right to Data Privacy and Security: Your health data must be protected with robust security, and you have the right to know how it is collected and used.
The Right to Refuse AI-Supported Care: You retain the right to opt out of AI-driven diagnosis or treatment without penalty.
The Right to Human Oversight and Intervention: An accountable human professional must always be in the loop to review and override AI-generated recommendations.
The Right to Safety from AI Harms: AI tools must be rigorously tested, monitored, and proven safe and effective.
The Right to Equitable AI Treatment: You have the right to care from AI systems that are free from bias and have been designed to ensure equitable outcomes for all populations.
The Right to Accountability: In the event of an error, there must be a clear and transparent process to determine responsibility.
The Right to Control Your Data: You must have meaningful control over how your personal health information is used for AI development and deployment.
The Right to Access for All: AI technologies must be designed and deployed in a way that is accessible to all, including vulnerable populations and those with limited digital literacy.
The Path Forward
Implementing this vision requires a shared commitment.
Policymakers must develop agile, adaptive regulatory frameworks that keep pace with technology, moving beyond a one-size-fits-all approach.
Developers and Institutions must collaborate on everything from building diverse datasets to engaging patients in the co-design of AI systems.
Healthcare Providers must embrace a new role as skilled evaluators and interpreters of AI tools, with mandatory continuing education on their ethical implications.
Ethical AI is not a destination; it is an ongoing process of vigilance, adaptation, and unwavering commitment. By championing this AI Patients' Bill of Rights, we can ensure that technology serves humanity, building a future where innovation enhances care while preserving our most fundamental human values.