How businesses can prepare for EU AI regulation

European Union plans to regulate artificial intelligence are nothing to be feared. But they cannot be ignored.

The EU has approved the draft text of legislation to enact the world’s first comprehensive law on the use and abuse of artificial intelligence (AI). The EU Artificial Intelligence Act is designed to protect EU citizens from abuse by taking a “risk-based approach” to AI, explicitly prohibiting systems designed to engage in controversial potential practices such as social scoring and subliminal or other manipulative techniques.

While few would object to this, concerns have been raised that the Act will stifle innovation, holding back EU-based companies, as well as opening them up to endless legal challenges. This need not be the case.

The case for regulation

Although AI is not as new a technology as is often claimed, there is no question that moves by OpenAI and Microsoft since November 2022 have raised the prospect of its deeper integration into business and, with it, everyday life. And with ever more decisions being driven by data, it is hardly a surprise that legislators want to ensure that AI remains firmly under the rule of law.

AI clearly touches on a number of hot-button issues, including privacy and creators’ rights. In addition, the EU was never going to give free rein to organisations wishing to deploy technologies such as facial recognition or emotion recognition, nor was its potential use in contexts like employment, welfare or migration ever going to be anything other than controversial.

None of this should be understood as a proposed ban on AI, though.

Indeed, according to the EU Commissioner for the Internal Market, Thierry Breton, the AI Act aims to: “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use”.

The AI Act should be understood as a continuation of the kind of governance the bloc has already developed in the form of the Digital Operational Resilience Act (DORA) and, in particular, the General Data Protection Regulation (GDPR).

Given this, threats by AI technology providers to pull out of the EU if the bloc pushed ahead with plans to regulate AI were always overblown. With a population of 450 million and a GDP of around €15 trillion, the EU is simply too important a market to ignore. Indeed, had US-based AI companies pulled out, the result would simply have been the creation of EU-based challengers.

However, the fact is that AI exists in what is already contested legal territory and, as a result, the EU AI Act will give certainty to both developers and users of AI. Indeed, AIs trained on works of art have run up against multiple lawsuits claiming copyright infringement, while a significant component of ongoing industrial action by film and television actors and writers relates to the potential uses of AI.

In the United States, existing laws already govern the use of AI, but the absence of a federal law on the matter means businesses could face adopting different policies if they operate across state lines or, at the very least, find themselves at risk of unexpected fines. This kind of hodge-podge of conflicting regulations is arguably a greater barrier to the widespread adoption of AI than any reasonable EU AI Act.

Understanding the EU’s AI Act

First proposed by the European Commission in 2021, the EU AI Act is expected to become law later this year. On June 14, 2023, the European Parliament approved its version of the draft Act, opening the door for other EU institutions to now enter negotiations to reach an agreement on the final text.

As with the GDPR, the EU Artificial Intelligence Act will be implemented universally in each of the 27 Member States without the need for each to transpose it locally into national laws and, therefore, without the variations this transposition creates.

Make no mistake: this is far-reaching legislation and will have an impact on how companies develop AI systems as well as how they deploy them.

The Act’s cornerstone is a classification system that will be used to determine the level of risk an AI technology could pose. As it currently stands, the Act will classify AI systems according to four levels of risk, ranging from minimal to unacceptable:

  • Unacceptable risk AI: Any application of AI ranked as unacceptable will be banned. Its use in ‘social scoring’ or ‘social credit’ systems, for example, will be outright forbidden, and no one other than law enforcement agencies will be allowed to use remote biometric surveillance techniques.

Applications of AI that receive other classifications, from high-risk to limited-risk to minimal-risk, will be permitted but subject to regulation according to their ranking.

  • High-risk AI: AI systems used in high-risk situations, such as transport, welfare, employment and education, where decision-making could affect a person’s life, will be required to guarantee transparency and accurate data. Companies working in high-risk areas will be required to conduct a prior “conformity assessment”. Included in this category are facial recognition and automated recruitment decision-making.

In an echo of the GDPR, any violations will result in fines of up to 6 per cent of a company’s annual global revenue.

In addition, the European Commission will create a publicly accessible database to which AI providers will be obliged to provide information about their high-risk AI systems.

  • Limited-risk AI: AI systems in this category, such as chatbots and image manipulation tools, will be regulated largely in relation to transparency. For instance, chatbots will have to tell users that they are interacting with a machine and, in cases such as customer service, offer the option of escalating to a human. 
  • Minimal-risk AI: All other AI systems, such as spam filters and video games, can be developed and used without additional legal obligations outside of existing legislation.
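
To make the tiering concrete, here is a minimal, purely illustrative sketch in Python of how an organisation might model the four risk levels and the obligations attached to each when triaging its own AI use cases. The tier names follow the draft Act, but the mapping of systems to tiers and the obligation wording are our own simplified paraphrase, not an official taxonomy or API.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # permitted, subject to conformity assessment
    LIMITED = "limited"            # permitted, subject to transparency duties
    MINIMAL = "minimal"            # no obligations beyond existing law

# Simplified paraphrase of the draft Act's obligations per tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited - do not deploy"],
    RiskTier.HIGH: [
        "prior conformity assessment",
        "transparency and accurate data",
        "registration in the EU public database",
    ],
    RiskTier.LIMITED: [
        "tell users they are interacting with a machine",
        "offer escalation to a human where appropriate",
    ],
    RiskTier.MINIMAL: ["no additional obligations"],
}

# Illustrative triage of the use cases mentioned above.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "automated recruitment decision-making": RiskTier.HIGH,
    "facial recognition": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> list[str]:
    """Return the (paraphrased) obligations attached to a known use case."""
    return OBLIGATIONS[USE_CASE_TIERS[use_case]]

for case, tier in USE_CASE_TIERS.items():
    print(f"{case}: {tier.value}")
    for duty in obligations_for(case):
        print(f"  - {duty}")
```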

In short, while the Act bans AI systems that could pose an unacceptable risk to human rights, democracy or fundamental freedoms, it will not stop organisations from deploying AI. In addition, while the Act will apply to commercial applications of AI it will not restrict scientific research.

Strict regulations will be placed on high-risk AI systems, such as those used in facial recognition, in legal settings and by law enforcement agencies. The Act also requires developers and users of AI systems to comply with ethical principles such as transparency, fairness and accountability.

Under the Act’s rules for so-called “foundation models”, today’s generative AI systems will face mandatory labelling of AI-generated content and mandatory disclosure of any copyrighted material used in their training data.
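
As a simple illustration of what the labelling duty could look like in practice, the sketch below attaches a machine-readable disclosure record to a piece of generated content. The field names and the label_output helper are hypothetical, chosen for illustration; the Act does not prescribe a specific format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIContentLabel:
    """Hypothetical machine-readable disclosure for AI-generated content."""
    model_name: str
    generated_at: str
    ai_generated: bool = True
    # Summary of copyrighted material in the training data, as the
    # draft Act would require providers to disclose.
    training_data_copyright_notice: str = ""

def label_output(text: str, model_name: str, copyright_notice: str) -> dict:
    """Bundle generated text with its disclosure label (illustrative only)."""
    label = AIContentLabel(
        model_name=model_name,
        generated_at=datetime.now(timezone.utc).isoformat(),
        training_data_copyright_notice=copyright_notice,
    )
    return {"content": text, "label": label}

result = label_output(
    "Draft product description...",
    model_name="example-generative-model",
    copyright_notice="Trained in part on licensed text archives.",
)
print(result["label"].ai_generated)  # True - the content is flagged as AI-generated
```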

International divergence

One issue the Act does bring up is the potential for international divergence. While the EU will move as a single bloc, other major countries, blocs and markets will not. For example, the United Kingdom is expected to pursue looser regulation than that proposed by the EU.

British Prime Minister Rishi Sunak has said he wants the UK to be a global hub for AI, including for its regulation. “I want to make the UK not just the intellectual home but the geographical home of global AI safety regulation,” he said in June.

Mujtaba Rahman, managing director at the risk analysis firm Eurasia Group, told Time magazine: “Fundamentally, the [British] government is trying to articulate a middle way forward between the very robust regulatory approach the EU is converging upon and the more light-touch approach in Washington”.

However, in March, the US government’s International Trade Administration described UK plans as being a “‘light touch’ and ‘pro-innovation approach’ to AI regulation, which is reassuring to US companies active in the UK AI market”. Indeed, the UK government’s published analysis on AI governance certainly points to lighter regulation than the EU plans. Given that the EU market cannot be ignored, it is therefore likely that, as with the GDPR, the EU is hoping its regulations will be seen as the ‘gold standard’ and influence behaviour outside the bloc.

For its part, the EU has already sought to align its definition of AI with that provided by the Organisation for Economic Co-operation and Development (OECD) in order to support international cooperation.

At present, the EU’s working definition is: “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.” Notably, this does not cover simple automation technologies, though the definition may be subject to change as the Act progresses.

Taming “Wild AI”

What the AI Act recognises is something that businesses also need to get to grips with: AI is not the future – it’s the present. A Reuters/Ipsos poll published this month found that “some 28 per cent of respondents to the online poll on artificial intelligence between July 11 and 17 said they regularly use ChatGPT at work, while only 22 per cent said their employers explicitly allowed such external tools”.

It’s no surprise: AI has already proved its usefulness to many workers, notably coders but also those handling administrative tasks, and it is often being used in an unauthorised and uncontrolled manner. This is “Wild AI”, and it is a serious risk for businesses.

The reality of Wild AI is that businesses of all sizes now face serious risks, including leakage of corporate information, inaccurate results and serious breaches of compliance legislation.

Like an extension of “shadow IT”, and arguably even more serious, Wild AI is the uncontrolled proliferation of AI-based solutions configured, built and deployed in an organisation outside the purview of the IT department. Just like Access databases, Office macros and Robotic Process Automation (RPA), AI is now in the hands of non-IT staff and could easily create significant liability for any business.

It’s not just words, either. The likes of ChatGPT have had the lion’s share of attention, but chatbots are really only the tip of the AI iceberg. Hundreds of AI-based applications appear every few months, offering tantalising capabilities ranging from video production to process automation as easily consumed services, and many can even be installed locally.

TEKenable protecting its customers

At TEKenable, as a business based in Ireland, the heart of the EU’s technology sector, with a multinational client roster and workforce as well as operations in the UK, we have been closely following the development of the AI Act.

From our point of view, AI can only be a success if it is accepted both by businesses and consumers. In order to achieve acceptance, it will have to be trustworthy, both in terms of the results it produces and in how it operates. To do this, we have to ensure that AI is deployed with real transparency and in full compliance with legislation that protects privacy as well as keeping business data confidential.

Businesses agree with us: one need only look at the widespread blocking of public versions of ChatGPT from corporate networks to see their concerns at work.
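
As a trivial illustration of that kind of control, the sketch below shows how an outbound filter might refuse requests to public generative-AI endpoints. The domain list and the is_request_allowed helper are hypothetical; in practice, most organisations would use their existing proxy or firewall tooling rather than custom code.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of public generative-AI services; illustrative only,
# not a recommendation of specific domains or of blocking as a policy.
BLOCKED_AI_DOMAINS = {"chat.openai.com", "bard.google.com"}

def is_request_allowed(url: str) -> bool:
    """Return False for outbound requests to blocked public AI services."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

print(is_request_allowed("https://chat.openai.com/chat"))  # False - blocked
print(is_request_allowed("https://example.com/docs"))      # True - allowed
```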

But we can also support businesses that have deeper and more difficult use cases for AI. For example, many of the additional requirements placed on high-risk AI developments can be accommodated in our Secure Software Development Lifecycle and ISO 13485 Software as a Medical Device processes. Given our experience working in areas such as healthcare, we already take a risk-informed approach to software delivery.

The EU AI Act is a challenge, that’s for sure, but it is one we are ready to rise to.

