Timo Minssen

AI in Law & Biosciences Expert

Sebastian Porsdam Mann

Ethics & Human Rights AI Expert

Promoting innovation and regulating generative AI in the Health Sector 

A tale of delicate balances, trade-offs, and the need for pragmatic solutions in regulatory eco-systems

The power and breakneck speed of generative artificial intelligence (AI) hold enormous positive potential. Its ability to democratize access to information, as well as to the means to process and transform it, holds promise for facilitating science and innovation. However, it is also becoming increasingly clear that realizing this vast promise requires proactively addressing generative AI’s significant attendant risks, such as the effects of biases in training data,[1] environmental and compute costs, and the tendency towards hallucinations and other inaccuracies. In addition, we will have to consider and address the multifaceted philosophical, ethical, legal, and policy questions concerning intellectual property rights, authors’ rights, and the attribution of credit, blame, and responsibility for harms and benefits stemming from outputs produced by generative AI,[2] alone or in combination with humans.[3]


In medical contexts, these issues intersect with sensitive personal health data, high-risk medical devices, and life-or-death decision-making.[4] Resolving the resulting trade-offs between safety, efficacy, caution, ambition, competition, and innovation is at the core of many regulatory efforts around the world.[5] As generative AI capabilities grow, regulatory frameworks must evolve responsibly to ensure the technology flourishes in medically impactful ways while protecting patients from unintended harms. Writing in 1983, soon-to-be International Court of Justice Judge Christopher Weeramantry described international human rights scholars, lawyers, and legislators as humanity’s watchmen, charged with keeping reactions and adaptations to ever-more-powerful technologies up to date and fit for purpose. He feared that, if caught unaware by the rapidity of developments in medical and information technology, these ‘slumbering sentinels’ were in danger of dereliction of duty.


While today’s regulators are certainly not sleeping on the job, the pace of generative AI’s evolution threatens to outstrip even the most agile oversight. Avoiding hazards will require regulators, ethicists, and developers to cooperate in establishing adaptive governance tailored to the technology’s breakneck growth. And, indeed, efforts are underway. In 2021, the EU proposed its Artificial Intelligence Act (AIA) to harmonize AI regulation across member states.[6] Interacting with a rapidly evolving and complex ecosystem of adjacent legislation, such as the GDPR,[7] the MDR,[8] the EHDS,[9] and the Data Act,[10] the AIA takes a risk-based approach, reserving the most stringent oversight for high-risk systems that significantly impact safety or fundamental rights. AI-enabled medical devices will likely often fall under the “high-risk” classification, facing key AIA requirements around data governance, documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.


As negotiations continue, many of the AIA’s strongest provisions, such as an outright ban on social scoring systems, have been or look likely to be watered down. This tracks a similar accommodation of industry interests during the drafting of the Chinese interim regulations on generative AI, where an early requirement that model outputs be ‘truthful and accurate’ was removed. For better or for worse, these shifting policy perspectives illustrate the enormous stakes of the task at hand. Given the impact not only on businesses, industry, innovation, and the knowledge commons but also on individuals’ access to, and protection from, powerful technology, the philosophies and interests colliding in this arena make it one of the most consequential policy battles of our time. Calls for banning or (over-)regulating specific applications in the EU will have to be balanced against the competitive disadvantages, and the health risks of missed opportunities, that high regulatory thresholds can entail. The significance of so-called regulatory sandboxes can therefore be expected to grow, although the concept is in need of further clarification.


One important but underappreciated source of inspiration for resolving some issues can also be found in the principles of international human rights law. In 2021, UNESCO released two policy documents of direct relevance to the regulation of generative AI: the Recommendation on the Ethics of Artificial Intelligence[11] and the Recommendation on Open Science.[12] Building, in both cases, on general human rights norms, these documents emphasize the need for transparency, oversight, and accountability to uphold human dignity and call for AI systems to be robust, safe, and accurate while protecting privacy. While these are familiar calls, they are complemented in international human rights law by less prominent, but equally important, rights surrounding access to and participation in these technologies, the sharing of the resulting benefits, and the prevention of the further dilution of epistemic standards in the knowledge commons.[13] These rights and others are guaranteed by the little-known and much-misunderstood Article 15 of the International Covenant on Economic, Social and Cultural Rights, which imposes obligations on States parties inter alia to “conserve, develop, and diffuse” and to “enjoy the benefits of the progress” of science and its applications.[14]


In addition, the WHO has collaborated with a select group of internal and external experts to develop new guidance on the ethics, governance, and regulation of medical AI. Some of these documents have already been published and include sections of guidance directed to, and crafted for, specific stakeholders in the health sector, such as medical practitioners, AI developers, and policy-makers.[15] Forthcoming guidance will also address the use of generative AI in health applications.


Though these principles may, with time, help adjudicate disputes of interest and provide high-level guidance for the regulation and implementation of (generative) AI, many of the most pressing issues will continue to operate at much lower levels of abstraction: in the concrete day-to-day practice of health practitioners and in the responses of individual companies to generative AI and its attendant regulations. The principles called for in emerging regulations, human rights documents, memoranda, and WHO guidance present important guideposts for the responsible development of (generative) AI.


However, in light of an increasingly complex and overlapping regulatory ecosystem in the health sector,[16] translating these aspirations into practical and feasible implementations remains profoundly challenging. This translation requires thoughtful collaboration between diverse stakeholders: regulators, developers, ethicists, and users alike. Bridging this gap is the urgently needed work that startups like Daiki aim to undertake.


Though regulation sets expectations, it is companies themselves that must ultimately operationalize responsible generative AI through a steadfast commitment to beneficence. Part of this task lies in ensuring compliance with existing regulations, an objective that can be difficult to navigate given the complexity of the landscape. The bigger challenge, however, lies in identifying and striving for the practical fulfillment of normative goals beyond simple compliance. This task is crucial if society is to realize the vast potential of (generative) AI while upholding human values, and it will require concerted efforts from society’s many constituent parts, including regulators, academics, companies, and the public. Startups like Daiki, and the development of interoperable methods and new models for the feasible implementation of laws and ethics, will play an increasingly significant role in daily practice and competition.



[1] Timo Minssen, Sara Gerke, Mateo Aboy, Nicholson Price, Glenn Cohen, Regulatory responses to medical machine learning, Journal of Law and the Biosciences, Volume 7, Issue 1, January-June 2020, lsaa002, https://doi.org/10.1093/jlb/lsaa002

[2] Porsdam Mann, S., Earp, B.D., Nyholm, S. et al. Generative AI entails a credit–blame asymmetry. Nat Mach Intell 5, 472–475 (2023). https://doi.org/10.1038/s42256-023-00653-1

[3] Porsdam Mann, S., Earp, B.D., Møller, N. et al. AUTOGEN: A personalized large language model for academic enhancement—ethics and proof of principle. AJOB 2023, 23(10), 28–41. https://doi.org/10.1080/15265161.2023.2233356

[4] See e.g. Minssen T, Vayena E, Cohen IG. The Challenges for Regulating Medical Use of ChatGPT and Other Large Language Models. JAMA. 2023 Jul 25;330(4):315-316. doi: 10.1001/jama.2023.9651. Erratum in: JAMA. 2023 Sep 12;330(10):974. PMID: 37410482.

[5] Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and Legal Challenges of Artificial Intelligence-Driven Health Care. In A. Bohr, & K. Memarzadeh (Eds.), Artificial Intelligence in Healthcare (1 ed., pp. 295-336), Elsevier. https://doi.org/10.1016/B978-0-12-818438-7.00012-5 .

[6] European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ COM (2021) 206 final.

[7] GDPR: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, pp. 1–88).

[8] MDR: European Parliament and Council, ‘Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on Medical Devices, Amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and Repealing Council Directives 90/385/EEC and 93/42/EEC’ OJ L 117/1.

[9] EHDS: European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on the European Health Data Space’ COM (2022) 197 final.

[10] Data Act: European Parliament and Council, ‘Regulation of the European Parliament and of the Council on Harmonised Rules on Fair Access to and Use of Data and Amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act)’ 2022/0047 (COD) PE-CONS 49/23 (15 November 2023).

[11] United Nations Educational, Scientific, and Cultural Organization (UNESCO), ‘Recommendation on the Ethics of Artificial Intelligence’ SHS/BIO/PI/2021/1 (2021).

[12] UNESCO, ‘Recommendation on Open Science’ SC-PCB-SPP/2021/OS/UROS (2021).

[13] See generally Porsdam, H. and Porsdam Mann, S. The Right to Science: Then and Now, Cambridge University Press (2021), Porsdam Mann, S, Porsdam, H., Schmid, M.M. et al. Scientific Freedom: The Heart of the Right to Science (in press, Rowman & Littlefield).

[14] International Covenant on Economic, Social and Cultural Rights (adopted 16 December 1966, entered into force 3 January 1976) UNGA Res 2200A (XXI) (ICESCR) Article 15.

[15] See e.g. https://www.who.int/publications/i/item/9789240029200 and https://www.who.int/news/item/19-10-2023-who-outlines-considerations-for-regulation-of-artificial-intelligence-for-health

[16] Minssen, T, Solaiman, B, Wested, J, Köttering, L & Malik, A 2023, Governing AI in the European Union: Emerging Infrastructures and Regulatory Ecosystems in Health, forthcoming in 2024 in: B Solaim
