The AI Act is a matter of learning in practice

Europe has a first: it is the first region with its own comprehensive AI legislation. With the AI Act, European countries aim to regulate and classify AI systems based on risk levels. Because while these technologies create a world of opportunities, care is needed. What does the AI Act entail, and what does it mean for us in practice?

Technology keeps catching up with us. Sometimes new products and services are simply there, "just like that". These rapid developments mean that we haven't always figured out how to deal with the technology: what we think of it, and what the do's and don'ts are. It quickly became clear that AI has great potential, but also carries certain risks. Due to the lack of laws and regulations, there is a large gray area in which it is unclear whether there are risks to security, privacy, or performance. For example, it is often unknown on what basis systems make predictions or decisions, or what happens to the data entered.

AI use by the central government

In October 2024, the Court of Audit published the report Focus on AI in the Central Government. It surveyed 70 government organizations on their use of AI systems and found 433 of them. Most are used for internal processes with no direct impact on citizens and businesses, for example to search through large amounts of information or to optimize processes. For more than half of these systems, the opportunities and risks are unknown, and there is even a tendency to underestimate risks. Moreover, only 5% of the systems are listed in the public algorithm register. In the absence of clear objectives and evaluations, AI systems are used without insight into their performance or potential risks.

The AI Act: a single European regulation for trusted AI

The AI Regulation, formally Regulation (EU) 2024/1689, is the first comprehensive legal framework for AI worldwide. With the regulation, the European Union wants to ensure that the AI applications we use are trustworthy. It does this by classifying systems into four levels of risk:

  1. Unacceptable risk: These are AI systems that pose a clear threat to people's safety, livelihood, and rights. The law prohibits eight practices, such as harmful AI-based manipulation, exploitation of vulnerabilities, and real-time remote biometric identification for law enforcement in publicly accessible areas.
  2. High risk: This concerns AI applications that may pose serious risks to health, safety, or fundamental rights, including AI in critical infrastructure, educational institutions, and law enforcement. These systems must meet strict requirements and may not simply be placed on the market.
  3. Transparency risk: This refers to AI systems where it must be clear to users that they are interacting with a machine, such as a chatbot. AI-generated content must also be clearly identifiable.
  4. Minimal or no risk: Most AI systems fall into this category and pose no or negligible risk. They are therefore not regulated.
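The four tiers above can be sketched as a simple lookup. This is a minimal illustration, not legal logic: the example systems and one-line obligations below are simplified from the list above, and real classification requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act (simplified one-line obligations)."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "strict requirements before market entry"
    TRANSPARENCY = "disclosure obligations toward users"
    MINIMAL = "no additional regulation"

# Illustrative examples only, drawn from the list above.
EXAMPLES = {
    "real-time remote biometric identification (law enforcement)": RiskTier.UNACCEPTABLE,
    "AI in critical infrastructure": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

def obligation(system: str) -> str:
    """Return the simplified obligation for a known example system."""
    tier = EXAMPLES[system]
    return f"{tier.name}: {tier.value}"
```

For instance, `obligation("customer-service chatbot")` returns `"TRANSPARENCY: disclosure obligations toward users"`, reflecting that a chatbot must reveal that the user is talking to a machine.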

The AI Act in practice

The AI Regulation requires AI systems to be assessed and classified by risk; supervision of this is the task of the European AI Office. In addition to the risk classification, there are also requirements for users (AI literacy) and for general-purpose AI models. The New Wave IT keeps a close eye on all developments surrounding AI. We delve into the risks around security, privacy, and performance, so that we can support our customers in using AI safely and think along at a strategic level about deploying AI in the current IT landscape. In doing so, we apply European legislation to our AI solutions, such as our chatbot Novi, and ensure that we comply with the requirements.

Want to know how to safely deploy AI within your organization? Let's talk!