Fig. 1. A Four-Dimensional Framework for Responsible Innovation
Figure 1 illustrates a multi-stakeholder framework for evaluating the ethical impact of AI in healthcare, emphasizing the interconnected responsibilities across four key domains: Patients, Healthcare Providers, AI Business Organizations, and Society at Large. At the core is Societal Impact, which is shaped by how well these stakeholders uphold ethical principles. Patients are concerned with rights, informed choice, data ownership, and privacy. Healthcare providers are responsible for ensuring clinical safety, transparency, and effectiveness. AI business organizations must implement responsible project practices and governance mechanisms. The continuous circular flow among these domains reflects the dynamic, interdependent nature of ethical accountability, suggesting that responsible AI in healthcare can only be achieved through ongoing collaboration and oversight across all levels.
Each domain integrates measurable indicators to guide evaluation. For example, in the “Security & Privacy” domain, considerations include compliance with GDPR and HIPAA, implementation of differential privacy, and real-time threat modelling⁴. Meanwhile, the “Fairness &
Non-discrimination” component requires institutions to audit AI systems for dataset bias, proxy discrimination, and algorithmic disparity across protected groups⁵.
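To make such an audit concrete, one widely used indicator is the demographic parity gap: the largest difference in positive-prediction rates between protected groups. The sketch below is purely illustrative (the function name, toy data, and group labels are hypothetical, not drawn from any cited standard); a real institutional audit would cover additional metrics such as equalized odds and proxy-feature analysis.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two
    protected groups (0.0 indicates perfect demographic parity)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Toy audit: a screening model flags 3/4 of group A but only 1/4 of group B.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5 -> flag for investigation
```

A gap this large would not by itself prove discrimination, but it is exactly the kind of measurable signal that triggers the deeper dataset-bias and proxy-discrimination review described above.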
An often-overlooked gap is the lack of robust explainability and transparency in clinical decision support systems. Studies show that without it, healthcare professionals may either over-trust AI outputs or dismiss them outright, potentially leading to harmful consequences⁶. Moreover, misuse of, or over-reliance on, generative models without secure guardrails can result in misinformation, over-diagnosis, or patient harm⁷.
A pressing concern is the use of AI code assistants and automation tools in software development. Evidence suggests that these tools can produce insecure or misleading outputs when developers rely on them without adequate oversight⁸. In healthcare, such risks are amplified by the safety-critical nature of the applications.
To operationalize these principles, a publicly available course, AI Ethics by Design, offers structured guidance on embedding ethical principles throughout the AI development lifecycle, from ideation and dataset curation through deployment and monitoring⁶. This educational initiative complements the broader goal of fostering responsible innovation across healthcare systems.
Finally, healthcare institutions must fulfil their duty of care by embedding ethics assessments into procurement processes, engaging multi-stakeholder oversight panels, and preparing for audits through traceable model documentation. A structured ethics review process, such as the one embedded in this lifecycle framework, is
no longer optional—it is foundational to the safe, just, and sustainable deployment of AI in healthcare.
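As one way such traceable model documentation might be operationalized, the sketch below outlines a minimal "model card" record checked for completeness at procurement time. The field names, example values, and required-field set are hypothetical illustrations, not a mandated schema; institutions would adapt them to their own governance and regulatory context.

```python
import json

# Hypothetical minimal model card for audit traceability.
# All field names and values are illustrative, not a prescribed standard.
model_card = {
    "model_name": "sepsis-risk-classifier",
    "version": "1.4.2",
    "intended_use": "Decision support for early sepsis screening; not diagnostic.",
    "training_data": "De-identified EHR cohort, 2018-2023 (IRB approved).",
    "fairness_audit": {"demographic_parity_gap": 0.03, "audited_on": "2025-06-01"},
    "privacy": {"gdpr_basis": "Art. 9(2)(h)", "hipaa_deidentification": "Safe Harbor"},
    "limitations": ["Not validated for paediatric patients"],
    "approved_by": "Multi-stakeholder ethics oversight panel",
}

# Fields an illustrative procurement-time ethics check would require.
REQUIRED = {"model_name", "version", "intended_use", "training_data",
            "fairness_audit", "privacy", "limitations", "approved_by"}

def missing_fields(card: dict) -> set:
    """Return the required documentation fields absent from a model card."""
    return REQUIRED - card.keys()

assert not missing_fields(model_card)  # complete card passes the check
print(json.dumps(model_card, indent=2))
```

Keeping such a record versioned alongside the model gives oversight panels a concrete artefact to review, and makes audit preparation a routine check rather than a scramble.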
The Blockchain in Healthcare Today (BHTY) Corporate Advisory Council invites all stakeholders—clinicians, developers, policymakers, business leaders, and patient advocates—to contribute to shaping the future of responsible AI in healthcare by completing the AI Ethics in Healthcare Survey.
Take the survey here: https://tinyurl.com/ai-ethics-healthcare
Your input will help us better understand current practices, concerns, and priorities across the ecosystem.
Interested in joining the BHTY Corporate Advisory
Council? Click here for more information, or visit https://blockchainhealthcaretoday.com/index.php/journal/Council.
Your voice matters in building trustworthy, patient-centred, and ethically aligned AI systems.
Key References:
- Perry N, Srivastava M, Kumar D, Boneh D. Do users write more insecure code with AI assistants? [Preprint]. arXiv:2211.03622 [Internet]. 2022 [cited 2025 Jun 25]. Available from: https://arxiv.org/abs/2211.03622
- Tantithamthavorn C, McIntosh S, Hasan M, Treude C. Explainable AI for SE: Experts’ interviews, challenges, and future directions. IEEE Softw. 2023;40(4):33–40. doi:10.1109/MS.2023.3240342
- Brundage M, Clark J, Toner H, Eckersley P, Garfinkel B, Dafoe A, et al. Lessons learned on language model safety
and misuse. San Francisco: OpenAI; 2023 Feb. Available from: https://openai.com/research/lessons-learned-language-model-misuse
- European Commission. White Paper on Artificial Intelligence: A European Approach to
Excellence and Trust. Brussels: European Commission; 2020 Feb. Report No.: COM(2020) 65 final. Available from: https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
- National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). Gaithersburg, MD: U.S. Department of Commerce; 2023 Jan. NIST AI 100-1. Available from: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
- Ramachandran M. AI Ethics by Design [Internet]. Udemy; 2023 [cited 2025 Jun 25]. Available from: https://www.udemy.com/course/ai-ethics-by-design/?couponCode=ST16MT230625A