Fig. 1. A Four-Dimensional Framework for Responsible Innovation
Figure 1 illustrates a multi-stakeholder framework for evaluating the ethical impact of AI in healthcare, emphasizing the interconnected responsibilities across
four key domains: Patients, Healthcare Providers, AI Business Organizations, and Society at Large. At the core is Societal Impact, which is influenced by how well these entities uphold ethical principles. Patients are concerned with rights, informed choice, data ownership, and privacy. Healthcare providers are responsible for ensuring clinical safety, transparency, and effectiveness. AI business organizations must implement responsible project practices and governance mechanisms. The continuous
circular flow among these domains reflects the dynamic, interdependent nature of ethical accountability, suggesting that responsible AI in healthcare can be achieved only through ongoing collaboration and oversight across all levels.
Each domain integrates measurable indicators to guide evaluation. For example, in the “Security & Privacy” domain, considerations include compliance with GDPR and HIPAA,
implementation of differential privacy, and real-time threat modelling⁴. Meanwhile, the “Fairness & Non-discrimination” component requires institutions to audit AI systems for dataset bias, proxy discrimination, and algorithmic disparity across protected groups⁵.
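To make the "Security & Privacy" indicators concrete, the sketch below shows the Laplace mechanism, the standard building block behind the differential privacy mentioned above. It is a minimal illustration under assumed values (the cohort count, the privacy budget epsilon, and the function names are ours), not a compliance tool; a deployed system would also need privacy-budget accounting across queries.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon), the standard
    calibration for the Laplace mechanism.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately report how many patients in a cohort carry a given
# diagnosis. A counting query changes by at most 1 when one record is
# added or removed, so its sensitivity is 1.
true_count = 128   # hypothetical cohort count
epsilon = 0.5      # privacy budget; smaller means stronger privacy
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=epsilon)
print(f"DP count (eps={epsilon}): {noisy_count:.1f}")
```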
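Similarly, a "Fairness & Non-discrimination" audit can start from something as simple as comparing positive-decision rates across protected groups. The sketch below computes a disparate impact ratio against the common "80% rule"; the decisions, group labels, and threshold are illustrative assumptions, and demographic parity is only one of several fairness criteria an institution might audit.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Positive-decision rate per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Min/max ratio of selection rates; the '80% rule' flags values < 0.8."""
    return min(rates.values()) / max(rates.values())

# Hypothetical triage-model outputs (1 = referred for further care).
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]

rates = selection_rates(decisions, groups)
ratio = disparate_impact_ratio(rates)
print(rates, f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Audit flag: review for proxy discrimination before deployment.")
```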
One often-overlooked risk is inadequate explainability and transparency in clinical decision support systems. Studies show that without robust explanations, healthcare professionals may either over-trust AI outputs or disregard them entirely, with potentially harmful consequences⁶. Moreover, misuse of or over-reliance on generative models without adequate guardrails can result in misinformation, over-diagnosis, or patient harm⁷.
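One baseline technique for the explainability gap described above is to report which inputs a model actually relies on. The sketch below applies scikit-learn's permutation importance to a synthetic stand-in for clinical data; the feature names and data-generating process are invented for illustration, and a clinical system would pair such global measures with per-patient explanations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic stand-in for clinical features (names are hypothetical).
feature_names = ["age", "systolic_bp", "hba1c", "noise"]
X = rng.normal(size=(500, 4))
# Outcome depends on the first three features only.
y = (X[:, 0] + 0.5 * X[:, 1] + 2.0 * X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Permutation importance: how much does shuffling each feature degrade
# the model's score? A model-agnostic, global explanation.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:12s} importance = {imp:.3f}")
```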
A pressing concern is the use of AI code assistants and automation tools in software design. Evidence suggests that these tools can produce insecure or misleading outputs when developers rely on them without adequate oversight⁸. In healthcare, such risks are amplified by the safety-critical nature of the applications.
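A minimal illustration of this failure mode, and of the review it calls for, is sketched below: SQL assembled by string interpolation, a pattern code assistants are known to emit, next to the parameterized form a human reviewer should insist on. The table and data are hypothetical.

```python
import sqlite3

def find_patient_unsafe(conn, name):
    # Pattern often produced by code assistants: string-built SQL,
    # vulnerable to injection (e.g., name = "x' OR '1'='1").
    return conn.execute(
        f"SELECT id, name FROM patients WHERE name = '{name}'"
    ).fetchall()

def find_patient_safe(conn, name):
    # Parameterized query: the driver escapes the value, closing the hole.
    return conn.execute(
        "SELECT id, name FROM patients WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO patients (name) VALUES ('Alice'), ('Bob')")

malicious = "x' OR '1'='1"
print("unsafe:", find_patient_unsafe(conn, malicious))  # returns every row
print("safe:  ", find_patient_safe(conn, malicious))    # returns nothing
```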
To operationalize these principles, a publicly available course, AI Ethics by Design, has been developed; it offers structured guidance on embedding ethical principles throughout the AI development lifecycle, from ideation and dataset curation to deployment and monitoring⁶. This educational initiative complements the broader goal of fostering responsible innovation across healthcare systems.
Finally, healthcare institutions must fulfil their duty of care by embedding ethics assessments into
procurement processes, engaging multi-stakeholder oversight panels, and preparing for audits through traceable model documentation. A structured ethics review process, such as the one embedded in this lifecycle framework, is no longer optional; it is foundational to the safe, just, and sustainable deployment of AI in healthcare.
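As a closing illustration, the sketch below shows one possible shape for such traceable model documentation: a structured, serializable record loosely inspired by the model-card idea. Every field name and value here is a hypothetical example, not a mandated schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class ModelCard:
    """Minimal traceable documentation record for audit readiness."""
    model_name: str
    version: str
    intended_use: str
    training_data: str
    known_limitations: list[str] = field(default_factory=list)
    ethics_review: dict = field(default_factory=dict)
    last_reviewed: str = str(date.today())

# All values below are illustrative placeholders.
card = ModelCard(
    model_name="sepsis-risk-triage",
    version="2.3.1",
    intended_use="Decision support only; the clinician retains final judgement.",
    training_data="De-identified EHR cohort, 2018-2023 (hypothetical).",
    known_limitations=["Not validated for paediatric patients"],
    ethics_review={"panel": "multi-stakeholder oversight board",
                   "fairness_audit_passed": True,
                   "privacy_assessment": "DPIA completed"},
)
print(json.dumps(asdict(card), indent=2))
```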