Daiki: Ethical AI by design

AI is a fascinating technology. We at Daiki are helping to develop and use it responsibly.

Artificial Intelligence is fundamentally changing the way we work and live. This process is in full swing and will only increase in speed and intensity. We have been working in machine learning for many years. Yet only a short time has passed since AI went from a pure research field to a conversation starter and door opener – and now to a new “AI summer”.

To be sustainably successful and help people, AI and AI development must be seen as more than just technology. Responsible AI is an enabler for lasting success.

Disregarding AI ethics can be very costly – figuratively and literally.

“Alphabet Inc lost $100 billion in market value on Wednesday after its new chatbot shared inaccurate information in a promotional video and a company event failed to dazzle, feeding worries that the Google parent is losing ground to rival Microsoft Corp.” (Reuters)

Reputational risks are real and not always easy to deal with. And they are not the only risks of a technology that is clearly being released unfinished and rushed – even by one of the world’s biggest tech companies.

AI can harm people. And it can help people. The same company that put such a serious dent in its stock value with the rushed promotional video for its new chatbot only recently made machine learning models freely available that will, among other things, help medical research tremendously.

AI on the rise

The new AI summer is here to stay. The recent large language and image models are about to transform workflows that are primarily about content creation. (The only reason my kids don’t already do their written homework with ChatGPT and DeepL is their limited access to devices – thanks to the iOS family settings: buggy, but still a transparent fence.)

Not so long ago, potential customers had to be convinced of the usefulness of AI systems. Today, to actually use AI in a meaningful way, the persuasion effort focuses primarily on its limitations.

Current models can solve specific tasks very well. The generation of artificial images and videos is already so good that game developers or filmmakers, for example, will soon be able to largely do without handmade art and cameras. The current generation of these generative models can already be used successfully wherever quantity matters more than quality. Welcome, AI-generated (fake) news, press articles, comedy shows and soap operas (in the latter case probably with an increase in quality).

Yet these models, as great as they are, are based on relatively simple mathematical and algorithmic concepts. Their impressive capabilities rest largely on statistical modeling that, simply put, uses vast amounts of data to predict which new data (images, text) most likely continue or match a given input.
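To make this concrete, here is a deliberately simplified toy sketch in Python: a bigram model that counts which word follows which in a tiny corpus and then greedily predicts the most likely continuation. The corpus and names are purely illustrative; production models replace the counting with deep neural networks and billions of parameters, but the objective – predict the most likely next token – is the same in spirit.

```python
# Toy illustration of the statistical idea behind generative language models:
# count which word tends to follow which, then predict the most likely
# continuation. Illustrative only – real models use deep neural networks.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigram frequencies: how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely word to follow `word`."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a short continuation, always taking the most likely next word.
word, generated = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # -> "the cat sat on the"
```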

They are far from universal applicability or human-like intelligence (however one understands this term exactly). What is impressive is not so much their technological progress (which is undoubtedly tremendous), but the realization of how far-reaching their applications already are. One can only imagine what is to come in the near future.

All of this raises important questions about the responsible use of AI systems. The companies behind many of the current large language and diffusion models already have their own responsible AI teams. (Admittedly, what happens to the outputs of these teams largely remains a mystery in light of current events.) Many startups are emerging to address this issue, offering tools for explainable AI, for example.

These are good and important developments. But it’s not enough. Ethical AI can only succeed when developers of AI systems collaborate with experts in ethics, law, and design, and with the users and stakeholders of their AI solutions.

If AI is developed without embedded values, risks to users and society remain high. They range from reputational issues to business challenges (only trusted AI systems that are accepted by users will actually be used) to threats to human dignity and societal values. Responsible and trustworthy AI systems developed and maintained by multidisciplinary teams are important and carry clear competitive advantages. Trustworthy AI by design protects human rights and complies with the upcoming EU AI Act, helping to prevent harm and enable sustainable use of this fascinating technology.

Daiki: Holistic and interdisciplinary

We founded Daiki to help teams develop and deploy responsible AI. We believe AI is an exciting technology, and we want to encourage its use, not prevent it. We are a team and growing community of machine learning developers, legal and philosophical experts, designers, and software developers that helps other teams build and deploy trustworthy AI in concrete, practical ways.

The idea for Daiki arose two years ago in a joint research project with the Faculty of Philosophy at the University of Vienna. Along with law, technology and design, philosophy is one of the four pillars of Daiki.

Ethics is often considered subjective: whether something is good or bad is treated as an individual, ultimately arbitrary judgment. This is wrong. Ethics, as a sub-discipline of philosophy, is a science and offers objectively comprehensible methods for investigating ethical questions and arriving at ethical principles and statements.

The best possible application and (further) development of these methods requires professional expertise. If you are sick, you should not go to a faith healer; if you have a legal problem, you should not search Reddit; and if you want to develop ethical AI, it is not a good idea to open a bottle of wine with only your data science colleagues.

Getting to the bottom of colleagues’ opinions can be interesting (or scary), but to deal with issues of AI ethics in a serious way, you need people who are trained in the field of AI ethics.

The methods and principles that philosophers propose – in conversation and in their texts – are often quite abstract. Therefore, in our research and practical work at Daiki, we try to make these principles concretely applicable by embedding them in the development or integration process of AI systems.

Medical ethics

Healthcare and medicine can provide useful blueprints as a starting point for implementing practical AI ethics: medical ethics has been applied in practice there for a long time. Legal scholars and lawyers, together with medical professionals and ethicists, have done great work translating principles from medical ethics into clear and applicable regulations.

For one of our first customers, we at Daiki have, among other things, completed ISO 13485 certification for the application of a management system for the development of software as a medical device. Many requirements for development and operations processes in this domain apply in the same way, or similarly, to AI.

Compliance is becoming an important requirement for the development and operation of AI systems, and not just because of the upcoming EU AI regulation.

There is a growing body of academic literature that attempts to bridge the gap between principles and concrete applications. In their framework for trustworthy AI, Floridi and Cowls, for example, take medical ethics as a starting point and extend its four existing principles with a fifth:

- Beneficence: doing good, promoting well-being, preserving dignity, sustaining the planet
- Non-maleficence: doing no harm, protecting privacy and security
- Autonomy: balancing human and artificial autonomy and decision-making
- Justice: non-discrimination, preserving solidarity, avoiding unfairness
- Explicability: how does it work, and who is responsible?

Medical ethics and medical device regulation are therefore a good starting point, but they are not sufficient to meet the unique challenges of AI. Even explicability is initially only an umbrella term and abstract principle that must be implemented concretely and embedded in processes.
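What might it look like to embed such a principle in a development process? As a deliberately simplified, hypothetical sketch in Python (the review step names and the `ready_for_deployment` helper are our illustrative assumptions, not a description of real tooling), consider a pre-deployment gate that refuses to ship a model until documented reviews mapped to the principles above are complete:

```python
# Hypothetical sketch: a pre-deployment gate that blocks a model release
# until documented ethics reviews are complete. The step names are
# illustrative mappings of abstract principles to process checkpoints.

REQUIRED_REVIEWS = {
    "model_card_written": "explicability: document how it works, who is responsible",
    "bias_audit_completed": "justice: check for discriminatory behavior",
    "privacy_impact_assessed": "non-maleficence: privacy and security review",
    "human_oversight_defined": "autonomy: specify who can override the system",
}

def ready_for_deployment(review_log: dict) -> bool:
    """Return True only if every required review is marked as done."""
    missing = [step for step in REQUIRED_REVIEWS if not review_log.get(step)]
    for step in missing:
        print(f"Blocked: {step} ({REQUIRED_REVIEWS[step]})")
    return not missing

# A release pipeline would call this check before promoting a model.
log = {"model_card_written": True, "bias_audit_completed": True}
assert not ready_for_deployment(log)  # still blocked: two reviews missing
```

The point is not the specific checklist but the pattern: an abstract principle becomes a named, verifiable step that the development process cannot skip.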

Daiki’s approach

For good AI to succeed, it needs at least four pillars: laws and regulatory requirements, ethical analysis, responsible AI (RAI) in the technical sense, and user-centric design involving AI users. We are fortunate to represent these areas in our Daiki founding team. And since AI systems must always be considered in the context of their application – in fact, they resemble a service more than a static product – it is important to keep applying this interdisciplinary work in a concrete way and to keep developing it.

Therefore, we are looking forward to communicating our approach, building a community, and continuously supporting our customers on their way to AI for good with trustworthy AI by design.
