Wolfgang Groß
Chief AI Officer

Working on AI Safety Today

By counteracting present challenges, we can pave the way for tackling future existential threats.

Following the remarkable breakthroughs in diffusion models and large language models, the discussion around AI safety and existential threats has recently received a lot of attention. Leading industry experts have called for governmental regulation, and some of the most renowned researchers have voiced their concerns as well.

The discussion around AI safety often focuses on the existential risks of future superintelligent AI. Although it is important to talk about and work on long-term AI risks, we should also focus on today's AI problems. For many practitioners and users of AI, these problems are far more tangible, and measures against them are far more feasible to integrate into current projects.

We strongly support this movement towards better and safer AI. Daiki's products and services are built to address today's AI challenges and risks. We share the concerns about long-term AI safety, and we work on that problem by focusing on today's AI risks.

One argument, however, is often missing from this discussion: it is not obvious why working on today's AI risks should help prevent future AI risks.

Threat scenarios for future AI are built, for example, around the control problem and power-seeking AI systems. These threats, however, assume superintelligent capabilities and thus do not apply to current AI technologies.

If what we want to prevent is an AI catastrophe, why should we bother with today's AI risks? To close this gap, we argue here that working on current AI risks is beneficial – but not sufficient – for future AI safety.

We will discuss this from three perspectives: AI developer buy-in, AI system quality, and regulatory participation.

AI Developer Buy-In

Even with the best safety tools available, we still need practitioners to use them. Today, more AI transparency and safety tools exist than people actually use. Possible reasons include a lack of awareness, unclear benefits, and missing regulation.

If practitioners make AI safety part of their identity and set up the corresponding structures now, it becomes more likely that critical safety measures will be accepted in the future.

Safe and responsible AI development is not only a set of tools but also a way of working on AI. We want to create an environment that brings as many actors as possible on board early; this eases the acceptance of more consequential safety measures later on.

AI System Quality

But why should practitioners and users of AI want safety in the first place? There are some settings where customers are willing to pay a premium for safety. For example, many parents are willing to pay extra for more safety features in car seats for their children. 

In other domains, like sports cars, people are more willing to pay extra for more horsepower than for better brakes (even though better brakes would probably let you go faster). In this analogy, AI tools are more similar to a sports car than to a car seat.

Customers want more capabilities rather than safety, and they want them quickly. This way of thinking is short-sighted, however, because AI is special in this regard: safety is a common confounder for many of the capabilities we are looking for.

For example, a model that generalizes better is not only safer but also more capable. A model that is more robust to out-of-distribution samples is safer and also more user-friendly, as its behavior is more in line with users' expectations. Likewise, a more rigorous process for developing, documenting, and deploying AI models is not only safer but also more efficient.
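To make this overlap concrete, here is a minimal sketch of a confidence gate. The model interface (an sklearn-style predict_proba), the threshold, and the fallback message are illustrative assumptions, not part of any specific product. Abstaining on inputs the model is unsure about is a safety measure, yet it also improves the user experience, because users get an honest deferral instead of a confident mistake.

```python
import numpy as np


def predict_with_abstention(model, x, threshold=0.8):
    """Return a prediction only when the model is confident enough; otherwise abstain.

    Abstaining on low-confidence (often out-of-distribution) inputs is a safety
    measure, but it also makes the system more user-friendly: the user gets an
    honest "I can't answer this" instead of a confident mistake.
    """
    # Assumes an sklearn-style classifier; predict_proba expects a 2D input.
    probs = model.predict_proba([x])[0]
    confidence = float(np.max(probs))
    if confidence < threshold:
        return {"label": None, "note": "Low confidence, deferring to a human reviewer."}
    return {"label": int(np.argmax(probs)), "confidence": confidence}
```

The 0.8 threshold is arbitrary here; in practice it would be calibrated on held-out data so that the system abstains exactly where it tends to be wrong.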

Promoting AI safety to practitioners is useful and practical: it secures their buy-in, and it enables organizations to make AI safety part of their identity, which furthers AI safety in the long run. It also helps researchers and tool developers build tools that practitioners actually need and want. Safety first cannot mean settling for weaker models, and fortunately, it doesn't have to.

Regulatory Participation

Gary Marcus recently wrote a blog post on the importance of regulation for AI. He also points out, however, how regulation could go wrong if only a small group of people promotes its interests. As regulatory frameworks are being written right now, this is a topic for today's AI practitioners as well, and it will have a huge effect on the future of AI.

In the early days of the internet economy, many large market participants missed their chance to shape the rules because they did not understand the impact the technology would have on their business. We ended up with an advertisement-driven internet economy, and many voices were left out.

Taking a position on AI safety today, for your business and your sector, benefits both sides: the AI safety discussion gains a broader perspective, and market participants gain a voice. The role of those promoting AI safety, as we do at Daiki, is to ensure that AI practitioners know how proposed regulations will affect them and how they can voice their concerns.

Regardless of how seriously you take the existential threat of AI, we think there are clear arguments for investing in AI safety today, as these efforts will ultimately contribute to the cause of future AI safety.
