On June 29, Daiki, in collaboration with the University of Vienna and EUTEMA, held its second EKIP workshop, focused on the ethics of AI and its application to real-world projects.
The workshop opened with an introduction to AI ethics by Mark Coeckelbergh, who emphasized issues of automation and responsibility alongside political challenges and the problem that regulation lags behind technological development. Erich Prem then presented a review of ethical AI frameworks and offered an interdisciplinary overview of challenges in the sector, for example how to deal with the possibility of predicting individual risk in insurance.
Thomas Doms from TÜV Austria then discussed the opportunities and challenges of certifying AI. TÜV Austria already offers AI certification with functional inspection of AI systems, including a definition of the application's technical distribution, risk-based minimum performance requirements, and statistically valid testing based on independent random samples.
In the discussion, participants agreed that current generative AI is difficult to certify, since its functionality is relatively open-ended compared with, say, a car.
Peter Melicharek contributed a legal perspective, for example on the current AI Act at the European level, highlighting the importance of setting minimum standards to protect EU citizens.
Finally, Stephen Michael Impink presented insights into the entrepreneurial ecosystem, pointing to the dominance of big tech companies, which also tend to take the lead when it comes to AI ethics. He noted interesting issues arising from cultural differences as well, for example between the U.S. and Europe.
Concluding the seminar, we discussed possible future directions for the practical integration of AI ethics into today's processes. As the format of the EKIP seminar itself demonstrates, an interdisciplinary approach is of great value.
Daiki has its origins in EKIP, an FFG-funded project in which Gradient Zero collaborated with Professor Mark Coeckelbergh from the University of Vienna and the Austrian Research Institute for Artificial Intelligence to put ethical issues front and center in AI development.
The first EKIP seminar, held in 2022, focused on the practical application of ethical AI using a case study in medical diagnostics. The workshop explored the concepts of explainability and responsibility in relation to AI systems and emphasized the need for AI systems to provide explanations that can be understood by doctors, patients, specialists, and other stakeholders.
Other key takeaways:
- The development and deployment of AI systems should incorporate ethical requirements and involve ongoing dialogue and collaboration among philosophers, computer scientists, designers, psychologists, users, and patients.
- Trustworthy AI should be seen as an ongoing service that requires continuous support, monitoring, adjustment, explanation, and further development throughout the ML-Ops cycle.
Overall, the workshop highlighted the importance of explainability, continuous improvement, and stakeholder collaboration in the development and application of ethical AI systems.