
Regulation of AI: A close look at the EU's AI regulation (AI Act)

In mid-June 2023, the EU Parliament agreed on a system for regulating AI. Read on to find out exactly what the AI regulation says, how companies are affected when developing and using AI systems, who is in favour of it, and what points of criticism have been raised.

For over two years, the European Union has been endeavouring to regulate artificial intelligence (AI). With the approval of the "AI Act" in the European Parliament, this goal is now within reach. Among other things, the MEPs are calling for an extension of the list of bans proposed by the EU Commission, including biometric classification systems based on sensitive characteristics and predictive policing systems. Once an agreement has been reached, talks can be initiated with the EU countries.

 

AI regulation: when AI falls into the wrong hands

It is high time for an EU AI regulation: "AI is already omnipresent in our everyday lives," says Carina Zehetmaier, President of Woman in AI Austria, at the Deep Dive meet-ups of INDUSTRIEMAGAZIN. She sees huge potential in the application of AI and believes it will generate considerable added value.

However, she also points to possible risks of abuse, which regulation should curb. A recent case from China shows what can happen when AI falls into the wrong hands: a company there applied for a patent to specifically identify members of the Uyghur minority using AI and facial recognition. "We can only speculate what the goal was," says Zehetmaier.

She adds that problematic uses of AI are now occurring more frequently, citing the example of Amazon, which used AI for recruitment. Based on the available data, the system concluded that the company's disproportionately male workforce was a positive signal. As a result, not a single woman was invited to a job interview.

Zehetmaier therefore sees AI as a reflection of society: "AI is not objective or neutral. We tend to be more tolerant of machines than humans. AI ultimately reflects our beliefs and views."

The AI Regulation (AI Act) is due to come into force by 2026. The law has cleared the first major hurdle in the European Parliament in Brussels, and the details are now being discussed further in the trilogue. Zehetmaier emphasises how the law will affect the economy: "The law not only affects companies that produce AI systems, but every company that uses AI."

The EU aims to regulate the application, not the technology. A further aim is to set global ethical standards, as was already achieved with the General Data Protection Regulation. However, even strict regulation of AI will not be able to completely prevent misuse.


Carina Zehetmaier, President of Woman in AI Austria

 

The AI Regulation Act in detail

What key points does the new law on the regulation of AI include? The proposed legislation follows a risk-based model in which regulation depends on the risk posed by the AI. It bans AI systems with unacceptable risks, such as those that could be used for social scoring. All AI applications are to be categorised according to their risk; depending on the classification, providers must meet certain safety and transparency standards.

What risk classifications are there?

  1. Unacceptable risk: particularly harmful AI applications that violate EU values, for example because they infringe fundamental rights. This applies in particular to social scoring (the assessment of social behaviour by authorities), the exploitation of children's vulnerability or the use of technology to exert subliminal influence.
  2. High risk: AI systems that adversely affect people's safety or their fundamental rights (protected by the Charter of Fundamental Rights of the European Union).
  3. Low risk: Special transparency obligations are imposed on certain AI systems, for example if there is a clear risk of manipulation, such as through the use of chatbots. Users should be aware that they are dealing with a machine.
  4. Minimal risk: All other AI systems can be developed and used in compliance with generally applicable law.

MEPs also want AI systems that can influence elections or election results or pose a significant threat to human health, safety, fundamental rights or the environment to be added to the high-risk list. The Commission's draft provides for specially adapted rules for the reasonable use of generative AI, such as ChatGPT.

To promote innovation in AI, the parliamentarians are proposing exemptions for research activities and for so-called AI real-world laboratories (regulatory sandboxes). Florian Tursky (ÖVP), State Secretary for Digitalisation, expressed his satisfaction with the result and emphasised the urgency of rapid regulation of AI.

 

The roadmap of the AI Act

The AI Act is intended to establish uniform Europe-wide standards that will significantly influence the future development of AI systems. Once the EU countries and the European Parliament have finalised their positions, negotiations on the final legislative text can begin. The start of these talks is still undecided.

According to a press release, an AI expert group and a European AI alliance were already established in March 2018, and the first meeting of these two groups took place in June 2019. A corresponding AI package from the Commission, however, has been on the table since April 2021.

 

Global relevance for AI regulation

The new EU AI Regulation has worldwide relevance, as it extends to companies that provide or use AI systems within the EU. Even if providers are based outside the EU, such as in the UK or the US, the law would apply if their services are used in the EU.

A key point of contention in the discussions was determining which AI applications should be banned due to their unacceptable risk. Initially, a ban on AI-supported tools for monitoring interpersonal communication was considered, but this proposal was rejected. Instead, the use of software for biometric identification was restricted.

This identification software, which was originally only prohibited for real-time applications, may now only be used for serious criminal offences and with prior judicial approval. In addition, the use of AI-supported emotion recognition software in the areas of law enforcement, border management, the workplace and educational institutions has been prohibited.

The current draft law does not yet contain any requirements for developers of generative AI such as GPAIS (general-purpose AI systems); whether these should be categorised as high-risk is currently under political discussion. Violations of the law could result in fines of up to 30 million euros or six per cent of global annual turnover, whichever is higher.

 

Criticism of the EU's AI regulation

Representatives of the think tank AI Austria express concerns about Europe's competitiveness and emphasise "that the AI Act must also promote Europe's innovative capacity in the field of artificial intelligence". In a study among European AI companies, 50 per cent of AI start-ups said they believe the upcoming law will hinder AI innovation in Europe. A further 16 per cent are considering stopping AI development or relocating it to countries outside the EU. These concerns also stem from the fact that Europe is already lagging far behind in terms of investment: according to the studies cited by the think tank, around 53 per cent of global private investment in AI goes to the USA and 23 per cent to China, while Europe trails with only six per cent.


Hartmut Rauen, Deputy Managing Director of the VDMA

The VDMA is in favour of the intention to strengthen the use of data in the internal market. "However, we are concerned about the actual implementation of the Data Act, as we see many potential risks for the data-driven business models of companies in the mechanical and plant engineering sector. Although the version adopted by the EU Parliament shows minor improvements, the concerns of the industrial sector are still not sufficiently taken into account," says Hartmut Rauen, Deputy Managing Director of the VDMA.

Rauen criticises that the Data Act does not take sufficient account of the differences between business relationships between companies and consumers (B2C) and between industrial companies (B2B). In B2B relationships, the companies involved could optimise the conditions for both sides.

"We need this flexibility in industry in order to represent and balance the diverse situations in our value chains. The Data Act limits this freedom and makes customisation more difficult - both in individual contractual relationships and in industrial data initiatives such as Manufacturing-X," continues Rauen.

For the trilogue negotiations, the VDMA is calling for the Data Act to take into account the necessary contractual leeway for B2B data exchange and not unnecessarily interfere with business relationships. In addition, the protection of trade and business secrets must continue to be effectively ensured. In addition to content-related issues, practical transition periods must also be ensured.

For many companies, the Data Act not only represents a bureaucratic challenge but also requires a reassessment of business models and the revision of contracts and products. If the Data Act is implemented correctly, it can lay the foundations for a leading European ecosystem of intelligently networked production. Mechanical engineering in particular could play a key role here. It would be regrettable if this opportunity were to remain unutilised.
