Navigating AI Governance: A Comprehensive Look at Existing and New EU and US AI Regulations

An overview of EU, US, and China AI regulations and governance initiatives, including key action points to help organizations with AI compliance.

Amid the rise of generative Artificial Intelligence (AI) models and growing awareness of the potential risks AI poses to individuals, organizations, and ecosystems, debates on AI governance challenges are intensifying. With a new sense of urgency, countries worldwide are stepping up their efforts to regulate AI, while organizations are adapting their internal governance structures and processes to comply with existing regulatory guardrails on the use of AI and to prepare for an array of new requirements on the horizon.

In this blog post, we aim to offer pointers to both existing and new EU and US AI regulations and governance initiatives. Additionally, we will provide some key action points to help organizations effectively prepare for regulatory changes and navigate the complexities of AI compliance.

Starting point: International soft-law approaches

Over the past few years, a multitude of non-binding guidelines for the ethical, trustworthy, and responsible use of AI have been adopted.

International bodies, industry organizations, governments, and expert groups promote the ethical development and use of AI technologies. Prominent examples include the OECD AI Principles, UNESCO’s Recommendation on the Ethics of AI, and the White House Blueprint for an AI Bill of Rights.

Standardization bodies, including ISO/IEC, IEEE, and the U.S. NIST, offer valuable guidance as well. Notably, the U.S. NIST published its AI Risk Management Framework in January 2023, which is expected to become a state-of-the-art AI governance tool. Another recent example of international initiatives is the World Economic Forum’s launch of the AI Governance Alliance, dedicated to responsible generative AI, this month.

These initiatives and non-binding guidelines for responsible AI governance are usually based on a common set of principles: privacy and data governance, accountability and auditability, robustness and security, transparency and explainability, fairness and non-discrimination, human oversight, and promotion of human values. They offer a framework that organizations using or developing AI can adopt independent from legal compliance obligations, to encourage the responsible and human-centric use of AI.

EU Prepares for Tougher Legal Obligations on AI

Having initially adopted the soft-law approach of ethical and responsible AI principles, the European Commission (EC) pivoted towards a comprehensive legislative strategy with the introduction of the draft AI Act in April 2021.

While a variety of existing EU law already applies to AI applications, e.g., the General Data Protection Regulation (GDPR) to personal data protection, the NIS Directive to AI systems in critical infrastructure sectors, or the Medical Devices Regulation (MDR) to AI-based medical devices, the proposed AI Act aims to be the EU’s first comprehensive, horizontal, cross-sectoral regulation focusing specifically on AI.

The Act will address fundamental rights and safety risks stemming from the development, deployment, and utilization of AI systems within the EU, with extraterritorial effect similar to the GDPR. AI systems will be classified into four risk categories: AI systems posing unacceptable risks, which will be banned; high-risk systems, subject to specific requirements; limited-risk systems, subject to specific transparency obligations; and low-risk systems, subject to minimal transparency obligations.
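To make the tiered structure concrete, here is a minimal sketch in Python that models the four risk categories and attaches an illustrative obligation summary to each; the class, names, and obligation wording are simplified assumptions made for this example, not the Act’s legal text.

```python
from enum import Enum


class AIActRiskTier(Enum):
    """Simplified paraphrase of the draft EU AI Act's four risk categories."""
    UNACCEPTABLE = "unacceptable"  # banned practices
    HIGH = "high"                  # specific requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # minimal obligations


# Illustrative mapping from tier to headline obligations; not a legal checklist.
TIER_OBLIGATIONS = {
    AIActRiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    AIActRiskTier.HIGH: "Specific requirements, e.g. risk management, documentation, human oversight.",
    AIActRiskTier.LIMITED: "Transparency obligations, e.g. disclosing that users interact with AI.",
    AIActRiskTier.MINIMAL: "Minimal transparency obligations.",
}


def obligations_for(tier: AIActRiskTier) -> str:
    """Return the illustrative obligation summary for a given risk tier."""
    return TIER_OBLIGATIONS[tier]


print(obligations_for(AIActRiskTier.HIGH))
```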

In case of persistent non-compliance with the Act, Member States (MS) will need to take appropriate actions to restrict or withdraw high-risk AI systems from the EU market. Fines are planned to reach up to 30 million euros or 6% of worldwide annual turnover.

The Act is currently in the negotiation phase, with draft versions being reviewed by the EC, the Council, and the European Parliament through trilogues. Substantial negotiations are anticipated, particularly regarding the definition of AI systems, the expansion of the list of prohibited AI systems, and the obligations for general-purpose AI and generative AI models like ChatGPT.

Next to the EU AI Act, two other initiatives are part of the EU Strategy for AI. The AI Liability Directive will establish legal and financial accountability for harms resulting from AI systems. Additionally, a revision of sectoral safety legislation, including for machinery and general product safety, is underway.

Uncertain Future: AI in the U.S. Governed by a Patchwork of Sectoral Laws

In the US, the absence of a prominent legislative AI initiative like the EU’s AI Act might create the impression of a lack of AI regulation and regulatory initiatives. However, the US Senate and House have actively engaged in numerous hearings to understand the technology’s nuances, risks, benefits, and potential regulatory approaches. Additionally, lawmakers have introduced several proposals to address AI-related concerns.

In general, the current U.S. approach to AI governance is tailored to specific sectors and widely dispersed across various federal agencies.

The sectoral approach takes center stage with the active involvement of the Federal Trade Commission (FTC), wielding its authority to protect against “unfair and deceptive” practices related to AI systems.

The FTC’s strong commitment to enforcing the fair use of AI was demonstrated when it opened an investigation into OpenAI’s chatbot ChatGPT earlier this month. This scrutiny becomes even more significant in light of the first evaluation of foundation model providers for their compliance with the proposed EU AI Act, conducted by Stanford University’s Center for Research on Foundation Models in June. The study highlighted critical compliance challenges, ranging from copyright issues to risk mitigation and model evaluation and testing.

Besides the FTC, the Equal Employment Opportunity Commission (EEOC) is another example of a very active sectoral regulator for AI. The EEOC can impose transparency requirements for AI, demand a non-AI alternative for individuals with disabilities, and enforce non-discrimination in AI hiring. Furthermore, the Consumer Financial Protection Bureau (CFPB) mandates explanations for credit denials from AI systems and has the potential to enforce non-discrimination requirements.

Apart from these sectoral approaches on a federal level, states’ interest in regulating AI services and products is on the rise. For example, California has introduced AB 331, a bill specifically targeting automated decision tools, including AI, that would mandate developers and users to submit annual impact assessments.

In light of these dynamics, public debate over new AI regulation has intensified, involving key stakeholders such as Big Tech, civil society, expert bodies, and federal and state entities.

While some industry voices call for the creation of a new federal agency dedicated to safeguarding individuals from the risks associated with AI, others support bolstering the sectoral approach, arguing that it would allow for tailoring the key principles of responsible AI to specific sectors.

For example, advising the US President and the White House National AI Initiative Office (NAIIO), the US National Artificial Intelligence Advisory Committee (NAIAC) has recently emphasized the need to elevate AI leadership within federal agencies and to improve coordination between them. By further embracing a sectoral approach, the U.S. would be following the example of the UK strategy, which plans to empower existing oversight bodies instead of creating a new single AI regulator.

Other industry voices see an opportunity for a broader approach: incorporating requirements for AI into a comprehensive national privacy law, like the proposed American Data Privacy and Protection Act. Such legislation could oblige large data holders to assess their algorithms annually and submit annual algorithmic impact assessments to the FTC, further strengthening its consumer-protection mandate.

In addition to actively governing algorithms, China recently introduced regulations targeting generative AI that prohibit the generation of fake news, including deepfakes, and require synthetically generated content to be labeled.

Tips for Navigating the Evolving Regulatory Landscape

As AI technologies continue to evolve, current and upcoming regulatory requirements for AI are set to establish a well-regulated and transparent AI landscape with wide-ranging global implications. For companies utilizing AI across various industries, this brings substantial compliance challenges that need to be addressed proactively. With some AI regulations already in force and more in progress, it is not a matter of if, but when, new requirements will apply.

To prepare, there are several steps organizations should consider, starting with conducting a comprehensive survey of their business operations to identify AI usage in decision-making processes and assessing associated risks.
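A minimal sketch, assuming a hypothetical AIUseCase record and a simple triage rule, of how the results of such an internal survey could be captured; all field names and risk labels are illustrative, not drawn from any specific regulation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class AIUseCase:
    """One entry in a hypothetical inventory of AI used in decision-making."""
    name: str
    business_unit: str
    purpose: str
    affects_individuals: bool      # e.g. hiring, credit, or medical decisions
    uses_personal_data: bool
    risk_tier: str = "unassessed"  # e.g. "high", "limited", "minimal"
    last_assessed: Optional[date] = None


def needs_review(use_case: AIUseCase) -> bool:
    """Rough triage rule: unassessed systems touching individuals or
    personal data should go to the governance working group first."""
    return (use_case.affects_individuals or use_case.uses_personal_data) \
        and use_case.risk_tier == "unassessed"


inventory = [
    AIUseCase("CV screening model", "HR", "Rank job applicants",
              affects_individuals=True, uses_personal_data=True),
    AIUseCase("Warehouse demand forecast", "Logistics", "Weekly stock planning",
              affects_individuals=False, uses_personal_data=False),
]

for uc in inventory:
    if needs_review(uc):
        print(f"Needs risk assessment: {uc.name} ({uc.business_unit})")
```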

To prepare for future compliance requirements, it is helpful to build AI governance programs throughout the organization, focusing on policies and the operationalization of transparency, accountability, fairness, data integrity, accuracy, and social impact. This can be achieved by designating a responsible individual, such as a chief privacy officer or chief AI officer, to oversee AI-related policies, and by establishing an active AI governance working group or AI ethics board that encompasses all stakeholders in a multi-actor responsible AI ecosystem.

Those AI governance policies should be aligned with existing risk management practices and awareness programs across various business units, ideally operationalized with software-supported processes and governance systems.
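As one way to operationalize such policies in a software-supported process, the sketch below tracks a small set of illustrative controls per governance principle; the principle names follow the list above, while every control listed is an assumption made for the example, not a requirement taken from any regulation or standard.

```python
# Illustrative controls per governance principle; the controls listed here
# are example assumptions, not requirements from any regulation or standard.
GOVERNANCE_CONTROLS = {
    "transparency": ["model documentation maintained", "user-facing AI disclosure"],
    "accountability": ["named owner assigned", "sign-off by AI governance board"],
    "fairness": ["bias evaluation on relevant subgroups"],
    "data integrity": ["training data lineage recorded"],
    "accuracy": ["performance monitored against agreed thresholds"],
    "social impact": ["impact assessment reviewed annually"],
}


def open_controls(completed: set) -> list:
    """List the controls not yet completed for a single AI use case."""
    return [control
            for controls in GOVERNANCE_CONTROLS.values()
            for control in controls
            if control not in completed]


# Example: a use case that has completed only two controls so far.
done = {"named owner assigned", "training data lineage recorded"}
for item in open_controls(done):
    print("Open control:", item)
```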

These are just a few examples of the steps companies can take to ensure readiness for upcoming regulatory changes and foster a culture of responsible AI usage throughout their organizations. By embracing these measures, organizations can confidently stride into the future of AI, where responsible and compliant practices pave the way for innovation and societal benefit.
