The EU AI Act: A Comprehensive Framework for Ethical AI

Hanna Karbowski
Sep 1, 2024

Recognizing the need to regulate AI, the European Union (EU) has proposed groundbreaking legislation known as the EU AI Act. This pioneering law, the first of its kind by a major regulatory body, aims to establish a comprehensive framework for the ethical and responsible use of AI.

Classifying AI Applications: Three Risk Categories

The EU AI Act classifies AI applications into three distinct risk categories:

  1. The Act outright bans applications that pose an unacceptable risk, such as the government-run social scoring systems seen in certain regions. These systems, notorious for their invasive nature, will have no place on the European market under the new law.
  2. High-risk applications, such as CV-scanning tools used to rank job applicants, are subject to specific legal requirements. The legislation seeks to ensure that these systems adhere to stringent standards that protect individuals from potential bias and discrimination.
  3. Applications that are neither banned nor classified as high-risk are left relatively unregulated, allowing for innovation and growth in the AI industry.

The Influence of AI on Our Lives: Why You Should Care

Why should you, as an individual, care about the EU AI Act? The answer lies in the far-reaching influence of AI applications on our lives. AI algorithms determine the information we encounter online, shape law enforcement practices through the capture and analysis of facial recognition data, and even personalize advertisements to cater to our preferences. In essence, AI has a profound impact on many aspects of our daily lives.

Who Does the EU AI Act Apply To?

Similar to the EU's groundbreaking General Data Protection Regulation (GDPR) of 2018, the EU AI Act has the potential to become a global standard. Its provisions will help determine the extent to which AI technologies have a positive rather than a negative impact on individuals, regardless of their geographical location. Already making waves internationally, the EU's AI regulation has prompted countries to pass bills establishing legal frameworks for artificial intelligence, paving the way for responsible AI practices across the globe.

But who exactly does the EU AI Act apply to? The legal framework encompasses both public and private actors, regardless of their location, as long as their AI systems are placed on the Union market or affect individuals within the EU. This means that both providers, such as developers of AI systems, and users of high-risk AI systems, like a bank utilizing a CV-screening tool, are subject to the regulations. However, it is important to note that the Act does not apply to private, non-professional uses of AI.

Roles Defined in the AI Act

To better understand the roles defined in the AI Act, let's take a closer look at each of them.

  1. Providers are individuals, public authorities, agencies, or other entities that develop AI systems, or have them developed, in order to place them on the market or put them into service under their own name or trademark.
  2. Importers are entities established in the EU that place on the Union market AI systems bearing the name or trademark of a natural or legal person established outside the Union.
  3. Distributors, distinct from providers and importers, make AI systems available on the Union market without altering their properties.

Implications for Companies Domiciled in Third Countries

Swiss companies and organizations operating in third countries should also pay close attention to the implications of the AI Act. Despite not having a legal presence in the EU, these entities will likely face significant impacts. Similar to the extraterritorial effect of the GDPR, the AI Act extends its reach beyond EU borders. It applies to providers placing AI systems on the EU market, users of AI systems located in the EU, and providers and users in third countries if the output produced by the AI system is used within the EU.

Consequently, Swiss companies utilizing AI systems for activities like credit checks on individuals within the EU will fall under the scope of the AI Act.

Penalties for Non-Conformance and Enforcement

One of the key concerns for businesses is the penalty regime for non-conformance with the AI Act.

The penalties outlined in the legislation closely mirror those established in the GDPR. They aim to be effective, proportionate, and dissuasive. The severity of the sanctions varies depending on the violation. Non-compliance with prohibited AI practices or data and data governance obligations for high-risk AI systems can result in penalties of up to €30 million or 6% of the total worldwide turnover in the preceding financial year, whichever is higher.

Non-compliance with other requirements under the AI Act carries penalties of up to €20 million or 4% of total worldwide turnover. Supplying incomplete, incorrect, or false information to regulatory bodies can lead to penalties of up to €10 million or 2% of total worldwide turnover.
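
To make the "whichever is higher" rule concrete, here is a minimal illustrative sketch in Python (not legal advice): the turnover figure is hypothetical, and the tier amounts are simply the ones cited above.

```python
# Illustrative sketch only: each penalty tier described above is capped at
# "up to EUR X or Y% of total worldwide annual turnover, whichever is higher".

def max_penalty(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Return the maximum possible fine for a tier: the higher of the fixed
    cap and the given share of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

# Hypothetical company with EUR 1 billion in worldwide annual turnover.
turnover = 1_000_000_000

print(max_penalty(turnover, 30_000_000, 0.06))  # prohibited practices / data governance: EUR 60 million
print(max_penalty(turnover, 20_000_000, 0.04))  # other AI Act requirements: EUR 40 million
print(max_penalty(turnover, 10_000_000, 0.02))  # incomplete or false information: EUR 20 million
```

For smaller companies, the fixed amount dominates: with EUR 100 million in turnover, 6% is only EUR 6 million, so the EUR 30 million ceiling is the relevant maximum.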

It is crucial to note that enforcement of the AI Act falls under the jurisdiction of competent national authorities. Additionally, individuals adversely affected by AI systems may have the right to seek redress for issues such as privacy violations or discrimination.

Timeline and Outlook for the Implementation of the EU AI Act

When can we expect the EU AI Act to come into force? Although there is no concrete deadline, industry experts anticipate its passage in the near future. The Act is currently under discussion among parliamentarians; once consensus is reached, a trilogue will be held involving representatives of the European Parliament, the Council of the European Union, and the European Commission. Once the terms are finalized, affected companies and individuals will have a grace period of approximately two years to review the new rules and bring themselves into compliance.

Shaping the Future of AI: A Global Standard

The EU AI Act represents a significant step forward in shaping the future of AI technology. With its comprehensive approach to risk classification, broad applicability, and stringent penalties for non-conformance, it aims to foster responsible AI practices while safeguarding the rights and well-being of individuals. As the world eagerly awaits the implementation of this groundbreaking legislation, it is evident that the EU is setting the stage for a global standard in AI regulation.