Mark Coeckelbergh
AI Ethics Global Lead

AI Ethics: From Principles to Practice

Artificial intelligence raises many ethical and societal issues. (For an overview, see my books AI Ethics and The Political Philosophy of AI.) Consider, for example, questions about responsibility when AI is used in health care, challenges related to the future of work when AI is deployed in industry, or bias in texts written by chatbots. It is important to ensure human oversight and accountability, safety, privacy, transparency, fairness, and sustainability.

In response to calls for AI ethics from experts across sectors, there is now more discussion of these issues – in the private sector, in the media, and in policy. For example, the European Commission has published ethics guidelines for trustworthy AI, prepared by its High-Level Expert Group on AI, and has since proposed regulation laying down rules on artificial intelligence.

But given the speed of technological development in this area, legislation often comes too late, and talking about principles and ethics at a general level is not enough. Now is the time to implement ethics in the technology itself. Since the technology is still in development, we have a unique chance to shape it in ways that are ethically and politically responsible and trustworthy.

Regulation can help with this project, but it also requires more research, private initiatives, and investment. Alongside projects by the big tech firms themselves, there is a need for a broad spectrum of players and for new, innovative approaches that reach all the places and entities where the technology is developed – not just the Googles and OpenAIs of this world.

Moreover, for this to succeed, we need properly interdisciplinary teams that include not only AI developers and designers but also ethical and legal experts. We also need to involve users and other stakeholders.

Daiki responds to these needs with software products, consultancy services, and opportunities for community building that embed ethical principles in the development process of AI systems. We are starting with the medical sector but will expand to other areas as well.

AI ethics is much needed. Without it, things can not only go terribly wrong from a financial and business perspective; we also miss an opportunity to contribute to more ethically sustainable technology and, ultimately, to a better world.

