
EU AI ACT

WHAT IS THE EU AI ACT?

On 13 March 2024, the European Parliament passed the Artificial Intelligence Act, also known as the EU AI Act, marking the establishment of the world’s first comprehensive horizontal legal framework for AI. The Act introduces EU-wide regulations on data quality, transparency, human oversight, and accountability. It is set to significantly affect many companies operating within the European Union and beyond, particularly those developing and deploying AI models. Those infringing the Act face penalties of up to €35 million or 7% of global annual revenue, whichever is higher. Mark Child, a co-founder and Director at Cyber London, says:

 

“This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies. The AI Act classifies different types of AI systems based on risk levels, imposes transparency obligations, and prohibits certain practices. It also establishes governance architecture to enforce common rules across the EU. Overall, it sets a global standard for AI regulation whilst emphasising fundamental rights and emphasises the importance of trust, transparency and accountability when dealing with new technologies, while at the same time ensuring this fast-paced technology can flourish and boost innovation.”

 

BACKGROUND TO THE EU AI ACT

 

The European Commission first proposed an EU AI Act in April 2021. Following extensive negotiations with the European Parliament and the Council of the European Union, a political agreement was reached in December 2023. The legislative process is nearly complete following the European Parliament's vote in March 2024. The AI Act will take effect 20 days after its publication in the Official Journal, expected this month. Most of its provisions will become applicable 24 months after the Act comes into force. However, provisions relating to prohibited AI systems will take effect after six months, and those covering general-purpose AI, such as ChatGPT, will apply after 12 months. While these timelines may seem generous, many affected entities will need to redesign their products and services significantly and should begin that work as soon as possible. This also applies to companies whose core business is not AI, which still need to understand this rapidly evolving technology and establish their own risk thresholds to ensure effective compliance.

 

HOW DOES THE EU DEFINE AI?

 

The Czech Presidency of the Council of the EU proposed a revised definition of artificial intelligence, focusing on systems developed through machine learning (ML) techniques and logic- and knowledge-based approaches. Under this compromise text, the EU definition of AI reads:

 

“‘Artificial intelligence system’ (AI system) means a system that is designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.”

 

AI LEVELS OF RISK

 

The EU AI Act adopts a risk-based approach, categorising AI systems into four risk levels: unacceptable, high, limited, and minimal or no risk. Each category carries regulatory requirements proportionate to the risk posed (a short illustrative sketch of these tiers follows the list below).

  • Unacceptable risk – these are AI systems that pose significant threats to fundamental rights, democratic processes, and societal values, potentially compromising critical infrastructure and causing serious incidents. The EU AI Act prohibits the use of these systems outright.

  • High risk – the EU AI Act focuses on these systems, particularly those used in critical sectors like healthcare, transportation, and law enforcement. These systems undergo strict assessments to ensure accuracy, robustness, and cybersecurity, with heavy regulation to mitigate potential risks. The Act mandates human oversight to ensure accountability and adds a layer of safety and security.

  • Limited risk – AI systems categorised as limited risk face fewer regulatory constraints than high-risk systems. However, they must still meet specific transparency obligations to ensure accountability and trustworthiness in their deployment.

  • Minimal risk – these are applications such as AI-powered video games and spam filters. For these systems, the Act aims to reduce regulatory burdens and encourage innovation and development in low-risk AI technologies.
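For organisations taking stock of their AI portfolio, these four tiers map naturally onto a simple data structure. The sketch below is purely illustrative: the tier names follow the Act, but the example systems and one-line obligation summaries are simplified assumptions, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers used by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict assessment and oversight
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no new mandatory requirements

# Illustrative assignments only -- real classification requires legal
# analysis of each system's intended purpose against the Act.
EXAMPLE_SYSTEMS = {
    "social scoring platform": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

OBLIGATION_SUMMARY = {
    RiskTier.UNACCEPTABLE: "Banned from the EU market.",
    RiskTier.HIGH: "Conformity assessment, risk management, human oversight.",
    RiskTier.LIMITED: "Disclose to users that they are interacting with AI.",
    RiskTier.MINIMAL: "No mandatory obligations; voluntary codes encouraged.",
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} risk -> {OBLIGATION_SUMMARY[tier]}")
```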

 

Mark added:

“The world’s first binding law on AI will help to reduce risks, create opportunities, and combat discrimination. Unacceptable AI practices will be banned in Europe, and the rights of workers and citizens will be protected.”

 

WHAT ARE THE OBLIGATIONS?

 

The EU AI Act has slightly different implications for micro, small and medium-sized enterprises and for large businesses. However, there is common ground in terms of obligations. Under the EU AI Act, you must ensure that high-risk AI systems meet mandatory requirements for:

 

  • Risk management

  • Data governance

  • Technical documentation

  • Transparency

 

You must assess AI systems to prevent unacceptable risks and protect fundamental rights, and maintain comprehensive documentation and records of AI system assessments, risk-management measures, and compliance efforts. If your organisation operates within the EU but is based outside it, you must also appoint an authorised representative within the EU to ensure compliance with the regulation. Keep abreast of the latest developments, guidelines, and templates provided by the AI Office and other relevant EU bodies to ensure ongoing compliance, and consider the EU's recommendation to participate in awareness-raising and training activities related to the application of the Act.
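As a purely illustrative aid, a compliance team might track these requirement areas per high-risk system with a record like the following sketch. The field names and layout are hypothetical assumptions; the Act mandates the requirement areas, not any particular record format.

```python
from dataclasses import dataclass

@dataclass
class HighRiskSystemRecord:
    """Hypothetical compliance record for one high-risk AI system."""
    system_name: str
    risk_management_in_place: bool = False        # documented risk-management process
    data_governance_documented: bool = False      # quality of training/testing data
    technical_documentation_complete: bool = False
    transparency_notice_published: bool = False   # information for users/deployers
    eu_authorised_representative: str | None = None  # needed if based outside the EU

    def outstanding_items(self) -> list[str]:
        """Return the requirement areas that still lack evidence."""
        gaps = []
        if not self.risk_management_in_place:
            gaps.append("risk management")
        if not self.data_governance_documented:
            gaps.append("data governance")
        if not self.technical_documentation_complete:
            gaps.append("technical documentation")
        if not self.transparency_notice_published:
            gaps.append("transparency")
        if self.eu_authorised_representative is None:
            gaps.append("EU authorised representative")
        return gaps

# Example: a freshly registered system has every area outstanding.
record = HighRiskSystemRecord(system_name="credit-scoring model")
print(record.outstanding_items())
```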

 

WHAT IS THE IMPACT OF THE ACT ON THE UK?

 

According to City A.M., the EU AI Act will significantly impact UK businesses, requiring compliance from those engaged in international trade, much as it does for companies in the United States and Asia. Any UK business selling AI systems into the European market or deploying AI systems within the EU will be affected, and such businesses must establish and maintain robust AI governance programmes to ensure compliance. Enza Iannopollo, principal analyst at Forrester, says:

“Over time, at least some of the work UK firms undertake to be compliant with the EU AI Act will become part of their overall AI governance strategy, regardless of UK-specific requirements – or lack thereof.”

 

WHAT HAPPENS NEXT?

 

The EU AI Act's phased implementation timeline is as follows (a short sketch showing how these dates derive from fixed offsets appears after the list):

 

  • Late 2024 – Ban on prohibited AI systems takes effect. This includes uses such as subliminal techniques, exploiting vulnerabilities, biometric categorisation, social scoring, predictive policing, and certain facial or emotion recognition systems.

  • Mid-2025 – Obligations for general-purpose AI (GPAI) governance begin. GPAI systems must meet requirements for technical documentation, copyright compliance, dataset summaries, and labelling AI-generated content. Systems posing "systemic risk" face additional requirements.

  • Late 2026 – The AI Act becomes fully applicable, including obligations for high-risk AI systems, pre-market conformity assessments, and post-market monitoring.

  • Late 2027 – The Act applies to products already requiring third-party conformity assessments, such as medical devices and toys, with existing sector-specific regulators enforcing compliance.
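Because each milestone is a fixed offset from the Act's entry into force (6, 12, 24, and 36 months respectively), the dates above can be derived mechanically. The sketch below uses a hypothetical entry-into-force date purely for illustration; the definitive dates follow from publication in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of calendar months."""
    idx = d.month - 1 + months
    # Clamp the day to 28 to avoid invalid month-end dates.
    return date(d.year + idx // 12, idx % 12 + 1, min(d.day, 28))

# Offsets from entry into force, in months, per the timeline above.
PHASE_IN = [
    (6,  "prohibitions on unacceptable-risk AI apply"),
    (12, "general-purpose AI (GPAI) obligations apply"),
    (24, "the Act becomes fully applicable, including high-risk rules"),
    (36, "third-party-assessed products (e.g. medical devices) in scope"),
]

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)

for months, milestone in PHASE_IN:
    print(f"{add_months(entry_into_force, months):%d %b %Y}: {milestone}")
```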

 

CYBER LONDON AND AI

 

Mark Child reports that generative AI has made phishing easier than ever for cybercriminals. Phishing involves tricking people into revealing sensitive information that can then be used maliciously. ChatGPT has been integrated into spam-generating services, allowing criminals to translate or polish the messages they send to victims, and its release has coincided with a significant increase in phishing emails.

Moreover, AI tools reduce the need for human involvement across many parts of a cybercriminal organisation: software development, scams, and extortion can be automated, reducing the need to recruit new members and lowering operational costs, and making attacks more efficient, sophisticated, and scalable while evading detection and attribution. While AI offers immense potential for positive impact, it also presents risks when misused by criminals, so striking a balance between innovation and security remains crucial in the age of AI. Cyber London offers resources, discussions, and research to help you navigate the intersection of AI and cybersecurity and remain compliant with the EU AI Act.

