Artificial Intelligence in Healthcare
Frameworks for AI Evaluation and Governance
This research focuses on the integration of AI into healthcare, with an emphasis on ethical considerations and best practices. The goal is to ensure that AI systems are designed to promote health equity and improve patient outcomes. The work includes organizing conferences to establish governance frameworks for AI applications in medicine.
AI Conferences
- Conference: "Signal Through The Noise: What Works, What Lasts, and What Matters in Healthcare AI"
- Conference: "Blueprints for Trust: Best Practices and Regulatory Pathways for Ethical AI in Healthcare" (see the publications below that resulted from this conference)
Publications
- Toward responsible AI governance: Balancing multi-stakeholder perspectives on AI in healthcare (IJMI, 2025)
- Toward a responsible future: Recommendations for AI-enabled clinical decision support (JAMIA, 2024)
- Towards a Multi-Stakeholder process for developing responsible AI governance in consumer health (IJMI, 2025)
- Towards Responsible AI in Healthcare – Getting Real About Real-World Data and Evidence (JAMIA, 2025, in press)
- A Proposed Framework on Integrating Health Equity and Racial Justice into the Artificial Intelligence Development Lifecycle (Journal of Health Care for the Poor and Underserved, 2021)
The DCI Network's "Blueprints for Trust: Best Practices and Regulatory Pathways for Ethical AI in Healthcare" conference and workshop laid the groundwork for a coordinated, multidisciplinary effort to define responsible frameworks for AI in healthcare. The four resulting papers, authored by more than 50 leaders from academia, industry, regulatory bodies, and patient advocacy, offer complementary perspectives on critical domains: real-world data (RWD), clinical decision support (CDS), consumer health, and cross-cutting governance structures. Together, they reflect months of consensus-building and analysis, advancing the field from broad ethical principles toward domain-specific action frameworks that balance innovation with safety, efficacy, equity, and trust (SEET).
The paper “Towards Responsible AI in Healthcare – Getting Real About Real-World Data and Evidence” (JAMIA, 2025, in press) examines the challenges of using real-world data to power AI applications in healthcare. It identifies urgent concerns around bias, insufficient data documentation, privacy, and the lack of accountability in how data is sourced and used. The authors recommend actionable strategies, including standardized metadata practices, informational “nutrition labels” for AI models, cross-disciplinary training programs, and processes for continuous model monitoring. The paper stresses that without addressing foundational data quality and governance, the promise of AI in real-world settings cannot be safely realized.
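To make the "nutrition label" recommendation more concrete, here is a minimal sketch (in Python) of what such a label might record for a model trained on RWD. The field names and example values are illustrative assumptions, not the paper's specification.

```python
# Illustrative sketch only: field names and values are assumptions, not the paper's specification.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelNutritionLabel:
    """Minimal provenance record for an AI model trained on real-world data."""
    model_name: str
    intended_use: str                  # the clinical task the model was validated for
    training_data_sources: List[str]   # e.g., EHR extracts, claims, registries
    collection_period: str             # date range covered by the training data
    known_gaps: List[str] = field(default_factory=list)   # under-represented groups, missing variables
    last_performance_review: str = "unreviewed"            # supports continuous model monitoring

label = ModelNutritionLabel(
    model_name="sepsis-risk-v2",
    intended_use="early sepsis risk flagging in adult inpatients",
    training_data_sources=["EHR (three academic centers)", "claims (2018-2022)"],
    collection_period="2018-01 to 2022-12",
    known_gaps=["rural populations under-represented"],
)
print(label)
```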
In “Toward a Responsible Future: Recommendations for AI-Enabled Clinical Decision Support” (JAMIA, 2024), the authors focus on the unique demands of AI in clinical decision support systems (AI-CDS). Drawing on workshops and expert input, the paper proposes a practical framework to ensure safety and trust in AI-CDS through pre-deployment validation, post-market surveillance, national safety reporting systems, and clinician training. The consensus process emphasized that the dynamic and evolving nature of AI models, especially those using large language models, necessitates adaptive governance mechanisms that include ongoing oversight and real-time error monitoring to prevent harm and bias in clinical settings.
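As a rough illustration of what continuous post-deployment oversight could look like in practice, the sketch below flags a deployed AI-CDS model for safety review when its rolling error rate drifts past an agreed bound. The threshold, error signal, and function names are hypothetical and not drawn from the paper.

```python
# Hypothetical post-deployment check; the threshold and error signal are illustrative assumptions.
from statistics import mean

def should_escalate(recent_error_flags, baseline_error_rate, tolerance=0.05):
    """Return True when the rolling error rate exceeds the validated baseline by more than `tolerance`."""
    if not recent_error_flags:
        return False
    rolling_rate = mean(1.0 if flagged else 0.0 for flagged in recent_error_flags)
    return rolling_rate > baseline_error_rate + tolerance

# Example: clinician overrides of the model's recommendation treated as error signals.
recent = [False, True, False, False, True, True, False, True]
if should_escalate(recent, baseline_error_rate=0.10):
    print("Escalate to the safety reporting system for review.")
```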
The paper “Towards a Multi-Stakeholder Process for Developing Responsible AI Governance in Consumer Health” (IJMI, 2025) highlights the distinct challenges of consumer-facing AI tools, such as mobile apps and wearables. Recognizing that traditional healthcare regulations are often too slow or broad, the paper proposes the creation of the Health AI Consumer Consortium (HAIC2) to empower patient representation. It introduces the “SAM-I-AM” stakeholder framework to align incentives and recommends “nutrition label”-style disclosures for AI tools, post-market feedback mechanisms, and rapid-cycle governance through prototype evaluation. The approach balances innovation with real-world accountability in a domain where consumer risk is rising.
Finally, “Toward Responsible AI Governance: Balancing Multi-Stakeholder Perspectives on AI in Healthcare” (IJMI, 2025) synthesizes findings from across the DCI Network deliberations to propose a cognitive framework for AI governance grounded in tradeoffs between speed, scope, and capability. It outlines three intermediate-level governance models tailored to CDS, RWD, and consumer health, emphasizing voluntary certification, participatory oversight, and risk-based stratification of governance needs. The paper underscores that responsible AI in healthcare demands flexible, stakeholder-aligned governance structures that adapt as the technology and its societal implications evolve.
Together, these papers represent a significant contribution to the practical governance of healthcare AI. They move the field beyond aspirational principles toward implementable actions rooted in interdisciplinary collaboration and informed by real-world application domains. The upcoming 2025 DCI Network AI Conference will build on these foundations to chart the next phase of responsible innovation.
AI Architectures
The paper “A comprehensive cancer center in the cloud powered by AI can reduce health disparities” (Ngwa et al., Nature Medicine, 2024) presents the vision and initial implementation of the Comprehensive Cancer Center in the Cloud (C4), an AI-powered, cloud-based platform designed to reduce cancer health disparities, especially among Black, Indigenous, and other underserved populations. The initiative arose from collaborations at the Global Health Catalyst summit and aims to address the structural barriers abbreviated as GREAT (Geography, Race, Earnings, Attitude, and Technology).

The C4 model integrates community hubs, often within religious organizations, equipped with broadband internet, laptops, and trained community health assistants. These hubs serve as local access points to high-quality cancer care and prevention resources, leveraging AI tools to deliver telehealth services, promote health literacy, and support decision-making.
At the heart of C4 is the CARE app, structured around four key pillars:
- Care – Delivers AI-supported telehealth, mental health support, screening reminders, risk modification education, and follow-up care.
- Advocacy – Engages community stakeholders and policymakers through the Global Health Catalyst network to drive systemic change.
- Research – Facilitates data-driven implementation science and global clinical trial collaborations, especially for interventions like hypofractionated radiotherapy.
- Education – Through the Global Oncology University (GO-U), supports training programs for healthcare workers and researchers, with a focus on underrepresented communities.
C4 addresses social, physical, and spiritual health, incorporating culturally sensitive approaches and spiritual leadership to increase trust and uptake. The platform's modular and scalable design also allows expansion beyond cancer care to other chronic diseases.
AI and Equity
The article “A Proposed Framework on Integrating Health Equity and Racial Justice into the Artificial Intelligence Development Lifecycle” (Dankwa-Mullan et al., Journal of Health Care for the Poor and Underserved, 2021) presents a structured framework to embed ethical principles, health equity, and racial justice into each phase of AI development in healthcare. Prompted by the disproportionate burden of COVID-19 on Black, Indigenous, and other socially disadvantaged populations, the authors emphasize the risk of perpetuating structural inequalities if AI tools are developed and deployed without an intentional equity lens. They argue that algorithmic bias, non-representative data, and lack of accountability can deepen health disparities, particularly when AI is scaled rapidly during public health crises.
To address these risks, the authors propose a seven-phase lifecycle framework for AI in healthcare, integrating equity-oriented practices at each stage. The framework prioritizes stakeholder engagement, inclusive data practices, culturally aware design, and continuous monitoring to ensure that AI tools support just and fair outcomes. The authors also align common AI ethics principles (such as fairness, transparency, and accountability) with concrete health equity and racial justice actions, providing a roadmap for operationalizing responsible AI in real-world clinical settings. A schematic sketch of the seven phases follows the list below.
Seven Phases of the Health Equity–Focused AI Lifecycle:
- Needs Assessment – Engage stakeholders and define equity-aligned objectives early.
- Workflow Analysis – Examine existing system barriers and resource needs.
- Target Definition – Establish metrics sensitive to equity and justice.
- System Development – Ensure diverse training data and bias-aware model design.
- Implementation – Build trust through transparency, explainability, and inclusion.
- Monitoring & Evaluation – Track disparities in usage and outcomes.
- Maintenance & Updating – Adapt to changing clinical, demographic, and ethical contexts.
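For readers who think in code, the sketch below restates the seven phases as an ordered enumeration with a simple stage-gate check. The phase names and descriptions follow the list above; the gating logic is an illustrative assumption rather than part of the published framework.

```python
# Phase names and descriptions follow the list above; the stage-gate helper is an illustrative assumption.
from enum import Enum

class LifecyclePhase(Enum):
    NEEDS_ASSESSMENT = "Engage stakeholders and define equity-aligned objectives early"
    WORKFLOW_ANALYSIS = "Examine existing system barriers and resource needs"
    TARGET_DEFINITION = "Establish metrics sensitive to equity and justice"
    SYSTEM_DEVELOPMENT = "Ensure diverse training data and bias-aware model design"
    IMPLEMENTATION = "Build trust through transparency, explainability, and inclusion"
    MONITORING_AND_EVALUATION = "Track disparities in usage and outcomes"
    MAINTENANCE_AND_UPDATING = "Adapt to changing clinical, demographic, and ethical contexts"

def ready_to_start(completed, next_phase):
    """A phase may begin only after every earlier phase in the lifecycle is complete."""
    phases = list(LifecyclePhase)
    earlier = phases[: phases.index(next_phase)]
    return all(p in completed for p in earlier)

done = {LifecyclePhase.NEEDS_ASSESSMENT, LifecyclePhase.WORKFLOW_ANALYSIS}
print(ready_to_start(done, LifecyclePhase.TARGET_DEFINITION))  # True
print(ready_to_start(done, LifecyclePhase.IMPLEMENTATION))     # False
```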
Citations
- Rozenblit L, Price A, Solomonides A, Joseph AL, Srivastava G, Labkoff S, deBronkart D, Singh R, Dattani K, Lopez-Gonzalez M, Barr PJ, Koski E, Lin B, Cheung E, Weiner MG, Williams T, Thuy Bui TT, Quintana Y. Towards a Multi-Stakeholder process for developing responsible AI governance in consumer health. Int J Med Inform. 2025 Mar;195:105713. Epub 2024 Nov 22. PMID: 39642592. DOI: 10.1016/j.ijmedinf.2024.105713
- Rozenblit L, Price A, Solomonides A, Joseph AL, Koski E, Srivastava G, Labkoff S, Bray D, Lopez-Gonzalez M, Singh R, deBronkart D, Barr PJ, Szolovits P, Dattani K, Jaffe C, Fridsma D, Baris R, Leftwich R, Stolper R, Weiner MG, Pastor N, Sanchez Luque U, Lin B, Bui TTT, Oladimeji B, Williams T, Jackson GP, Hsueh PYS, Quintana Y. Toward responsible AI governance: Balancing multi-stakeholder perspectives on AI in healthcare. Int J Med Inform. 2025;203:106015. https://www.sciencedirect.com/science/article/pii/S1386505625002321
- Koski E, Das A, Hsueh PYS, Solomonides A, Joseph AL, Srivastava G, Johnson CE, Kannry J, Oladimeji B, Price A, Labkoff S, Bharathy G, Lin B, Fridsma D, Fleisher LA, Lopez-Gonzalez M, Singh R, Weiner MG, Stolper R, Baris R, Sincavage S, Naumann T, Williams T, Bui TTT, Quintana Y. Towards Responsible AI in Healthcare – Getting Real About Real-World Data and Evidence. J Am Med Inform Assoc. 2025. In Press
- Ngwa W, Pressley A, Wilson VM, Marlink R, Quintana Y, Chipidza F, Deville C, Quon H, Avery S, Patrick L, Ngwa K. A comprehensive cancer center in the cloud powered by AI can reduce health disparities. Nat Med. 2024 Sep;30(9):2388-2389. PMID: 39009778. DOI: 10.1038/s41591-024-03119-y
- Dankwa-Mullan I, Scheufele E, Matheny ME, Quintana Y, Chapman WW, Jackson G, South BR. A Proposed Framework on Integrating Health Equity and Racial Justice into the Artificial Intelligence Development Lifecycle. J Health Care Poor Underserved. 2021;32(2):300–317. DOI: 10.1353/hpu.2021.0065
