Eline de Jong
PhD candidate, AI Ethics Expert

Responsible AI – sure, but how?

Responsible AI demands a dynamic and thoughtful approach, guided by moral reasoning and continuous ethical discourse. But is that achievable?

The phrase “With great power comes great responsibility” holds particular relevance in the realm of Artificial Intelligence (AI). As AI technology continues to evolve and influence various aspects of human life, developers and users must approach its design and deployment with careful consideration. Unlike other technologies, AI’s capacity to participate in decision-making processes with some degree of autonomy necessitates a heightened sense of responsibility. This post delves into the notion of responsible AI, emphasizing the dynamic and thoughtful approach required to ensure its ethical use. Sure – but how?

A dynamic approach to AI ethics

There is a growing awareness that something is at stake when we adopt AI. Whether morally or commercially motivated – or somewhere in between – the idea of ‘responsible AI’ is gaining traction. However, defining responsible AI is a complex endeavour, as it lacks a singular interpretation. Hence the numerous attempts to put flesh on the bones of this thin concept. Often, these attempts result in lists of values, such as autonomy, justice, and privacy, coupled with guidelines on how to uphold them. Ensuring human oversight over AI decisions is one example of such a guiding principle, yet it may not always lead to the most responsible outcome. For instance, if an AI system outperforms human experts in detecting medical conditions, a debate arises about whether the AI should be granted the final say.

This is open for discussion – and that is exactly the point. Ethics is not a checklist; it is a mode of inquiry – one that uses moral values as a lens to assess a situation or practice. That is what makes it such a challenging task to think about what responsible AI amounts to: it necessitates critical thinking and reasoned deliberation. There is no algorithm for establishing responsible AI.

Drawing inspiration from Aristotelian ethics, a valuable perspective emerges for comprehending responsible AI. Aristotle emphasized the contextual nature of ethics, where the same action may have different moral implications based on the circumstances. He advocated for developing virtuous characters capable of discerning the right course of action in specific situations, rather than rigidly adhering to predefined rules.

Likewise, responsible AI requires a dynamic approach, acknowledging that what is considered right is not fixed. Moreover, the general-purpose character of AI defies any attempt to establish a fixed, rules-based ethics. Instead, responsible AI calls for a commitment to do good and a dedicated effort to reason about what exactly good is.

Five commandments of a responsible approach to AI

Often, ethical discussions about AI revolve around moral dilemmas that AI systems can be confronted with. For example, suppose an autonomous vehicle whose brakes suddenly fail approaches a crossing and must either run through a green traffic light, hitting a mother and child, or turn left through a red traffic light, hitting an elderly person – what should it (be programmed to) do?

This moral dilemma, a variant of the famous “trolley problem”, is a valuable thinking tool for making moral considerations explicit. But its point is not so much the solution as the problem itself. In fact, we are harsher on technology than on humans when we expect AI to navigate such dilemmas perfectly. If the trolley problem demonstrates one thing, it is – again – that ethics is about reasoning. Instead of taking responsible AI to be the result of solving moral dilemmas, we could work on responsible AI more structurally by improving our capacity to reason about such cases.

When I worked at the Netherlands Scientific Council for Government Policy, we identified five historical lessons about embedding new technologies like AI within society: We should work on our understanding of the technology (Demystification), create a facilitating socio-technical ecosystem (Contextualisation), stimulate the involvement of societal actors (Engagement), develop guiding frameworks (Regulation), and strategically think about our place within the international network (Positioning). These lessons can be rephrased as five commandments that guide ethical reasoning and foster responsible AI:

  1. Make sure you know what you are talking about: A foundational grasp of the technology is indispensable for recognising morally significant facets. Meaningful ethical discussion thus necessitates a fundamental comprehension of how the technology works and what it is capable of.
  2. Consider human-machine interaction: Ethical quandaries frequently manifest in the interplay between technology and users. To adeptly identify and address these dilemmas, it is imperative to avoid focusing narrowly on the technology itself and instead adopt a holistic perspective that encompasses the dynamics of human-machine interaction.
  3. Involve the Radical Other: Acknowledging our inherent biases, we must actively seek perspectives beyond our own. Actively soliciting input from a multitude of viewpoints – for example through upstream stakeholder engagement – is pivotal for nurturing the ethical dialogue.
  4. Navigate trends: Beyond AI’s immediate effects, its deployment triggers indirect consequences. Ethical assessment of AI therefore demands widening the focus from direct implications to the sweeping trends it catalyses.
  5. Take a network perspective: AI’s deployment is inherently an interconnected and international affair. Engaging in ethical discussions about AI therefore demands considering the broader network within which we operate.

Responsible AI demands a thoughtful and dynamic approach to ethical decision-making. Acknowledging the significance of AI’s impact on society and individual lives, developers and users must strive to reason from a moral perspective and navigate the complexities of AI ethics. By adhering to the five commandments and continuously engaging in ethical discussions, humankind can develop a virtuous character that guides AI towards positive and socially beneficial outcomes. Only by embracing this responsibility can we harness the full potential of AI for good.
