#4 Who's the Boss Now? The AI Act Sets the Rules

Artificial Intelligence is no longer the wild west of technology – at least not in Europe. With the introduction of the AI Act, the European Union is stepping up to regulate AI just as it did with data protection through the GDPR. But what does this mean for AI companies, businesses using AI, and everyday users? Let's break it down in a way that even your AI-powered chatbot would understand.

The Origin of the AI Act: Why, When and Where?

The AI Act is the first-ever comprehensive legal framework for artificial intelligence worldwide. It was introduced by the European Commission in April 2021 and has gone through extensive debates, amendments and negotiations since then. The final version is expected to take full effect by 2025-2026 after a transition period.

The AI Act applies to all AI systems that are developed, marketed or used within the EU, even if the company behind them is elsewhere (yes, that includes Silicon Valley giants). Much like the GDPR, if your AI touches EU soil, you must play by EU rules.

But why the need for such an Act? Well, AI is powerful – but power unchecked can be dangerous. Facial recognition, automated hiring decisions, deepfake scams – AI is shaping our lives in ways regulators never anticipated. The AI Act aims to prevent high-risk AI from causing harm while fostering innovation and trust.

New Laws: The AI Act's Core Rules and What They Mean

1. A Risk-Based Approach: AI Categories

Not all AI is created equal, and the AI Act knows that. The AI Act categorizes AI systems based on their level of risk, with stricter regulations for riskier AI. Let's break down the categories in more detail (a short code sketch after the breakdown pulls the four tiers together):

Unacceptable Risk: Banned AI Systems

These AI systems are deemed too dangerous to be allowed. They pose a clear threat to democracy, privacy or human rights. If you're an AI developer, don't even think about using these in the EU.

Examples of Banned AI Systems:

  • Social Scoring – No Chinese-style AI ranking citizens based on behavior
  • Emotion Recognition in Workplaces & Schools – Your boss or teacher can't use AI to track your emotions
  • AI Manipulating Human Behavior – AI systems that psychologically manipulate people (e.g. AI-powered scams that exploit vulnerabilities) are prohibited
  • Real-Time Facial Recognition in Public (with exceptions) – Law enforcement is mostly banned from using real-time biometric surveillance, except in extreme cases like terrorism.

High-Risk AI: Heavily Regulated

AI that significantly impacts health, safety, fundamental rights or critical infrastructure falls into this category. These AI systems can be used, but only under strict conditions.

What counts as high-risk AI?

  • AI for Hiring & HR Decisions – Companies must prove that AI used in recruitment is non-discriminatory and explainable
  • AI in Credit Scoring & Banking – AI assessing creditworthiness must be fair, transparent and audited regularly
  • AI in Healthcare & Medical Devices – AI diagnosing diseases or assisting in surgery must be extensively tested for accuracy and safety
  • AI in Law Enforcement & Justice – Predictive policing, AI-driven crime analysis and sentencing assistance must have human oversight and transparency.
  • AI in Transport & Infrastructure – Self-driving cars, air traffic control and industrial automation must meet strict safety and cybersecurity requirements

Obligations for High-Risk AI Providers (see the sketch after this list):

  • Transparency and Documentation
  • Human Oversight
  • Bias Testing and Risk Assessments
  • Compliance Checks Before Market Entry
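
To make that checklist concrete, here's a minimal Python sketch of how a provider might track those four obligations internally – the class and field names are illustrative, not anything the AI Act itself prescribes:

```python
from dataclasses import dataclass

@dataclass
class HighRiskComplianceFile:
    """Illustrative tracker for the four provider obligations above."""
    technical_documentation: bool = False   # transparency and documentation
    human_oversight_plan: bool = False      # human oversight
    bias_and_risk_assessment: bool = False  # bias testing and risk assessments
    conformity_assessment: bool = False     # compliance check before market entry

    def ready_for_eu_market(self) -> bool:
        # Every box must be ticked before the system launches in the EU.
        return all([
            self.technical_documentation,
            self.human_oversight_plan,
            self.bias_and_risk_assessment,
            self.conformity_assessment,
        ])
```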

Limited-Risk AI: Transparency Obligations

Some AI applications don't pose significant risks but still require transparency so users know what they're dealing with.

Examples of limited-risk AI systems:

  • Chatbots and AI Assistants – Companies must disclose that users are talking to AI, not a human
  • Deepfakes & AI-Generated Content – Synthetic media (like AI-generated news anchors or realistic deepfakes) must be clearly labeled
  • AI Recommendation Algorithms – Platforms like Netflix, Spotify and YouTube must provide more transparency on how their recommendation AI works

Unlike high-risk AI, these systems don't require pre-market approval, but companies must ensure users understand when AI is influencing their decisions.
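
As a toy illustration of that disclosure duty, here's what a minimal implementation could look like – the wording and function name are invented for this example, not text mandated by the Act:

```python
AI_DISCLOSURE = "Heads up: you're chatting with an AI assistant, not a human."

def send_chatbot_reply(reply: str, first_message: bool = False) -> str:
    # Prepend the disclosure at the start of a conversation so users
    # know from message one that they're talking to AI.
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_message else reply

print(send_chatbot_reply("Hi! How can I help you today?", first_message=True))
```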

Minimal/No-Risk AI: Free to Use

AI with little to no societal impact faces no major restrictions under the AI Act.

Examples of minimal-risk AI systems:

  • AI-powered spam filters
  • AI in video games
  • AI for email auto-suggestions

These AI systems are not subject to special compliance measures but still need to follow general consumer protection laws.
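
Pulling the four tiers together, here's a purely illustrative Python sketch of the risk pyramid. Real classification under the AI Act depends on legal analysis of the specific use case, not a dictionary lookup – the mapping below just mirrors the examples from this post:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "allowed only under strict conditions"
    LIMITED = "allowed with transparency duties"
    MINIMAL = "no special obligations"

# Toy mapping of example use cases from this post to their tier.
USE_CASE_TIER = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in USE_CASE_TIER.items():
    print(f"{use_case}: {tier.value}")
```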

2. AI Developers, Listen Up! Compliance is No Joke

If you develop AI, the EU now expects full accountability. High-risk AI providers must:

  • Conduct conformity assessments before launching their AI
  • Keep detailed technical documentation (goodbye, mystery AI models)
  • Ensure AI doesn't discriminate (no biased hiring or credit scoring)
  • Allow human oversight (no “computer says no” decisions without appeal)

Non-compliance? Fines go up to €35 million or 7% of global annual turnover, whichever is higher – steeper than even the GDPR!
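
For a feel for the numbers, here's that penalty ceiling as a one-liner – the €2 billion turnover figure is hypothetical:

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    # Heaviest AI Act penalty tier: EUR 35 million or 7% of global
    # annual turnover, whichever is higher.
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A hypothetical company with EUR 2 billion in annual turnover:
print(f"Maximum exposure: EUR {max_ai_act_fine(2_000_000_000):,.0f}")
# -> Maximum exposure: EUR 140,000,000
```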

3. Companies Using AI: What You Need to Know

If your business relies on AI, you're not off the hook. Whether you use AI for hiring, customer service or fraud detection, you must:

  • Verify AI compliance before using it
  • Inform customers if AI influences their decisions (e.g. AI screening loan applications)
  • Ensure human oversight in critical decisions

Example: If a bank rejects a loan because AI flagged the applicant as high risk, the bank must explain why and allow an appeal.
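
A minimal sketch of what such an auditable decision record might look like – the class and field names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LoanDecision:
    applicant_id: str
    approved: bool
    # Plain-language reasons the applicant can actually understand.
    reasons: list[str] = field(default_factory=list)
    appeal_requested: bool = False

    def request_appeal(self) -> None:
        # Flag the case for review by a human, keeping a person in the
        # loop on a consequential automated decision.
        self.appeal_requested = True

decision = LoanDecision(
    applicant_id="A-1024",
    approved=False,
    reasons=["Debt-to-income ratio above internal threshold"],
)
decision.request_appeal()  # escalate to a human reviewer
```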

4. AI Users: What's in It for You?

For regular AI users, the AI Act brings transparency and protection. Expect:

  • Clear labeling of AI-generated content (bye-bye undetectable deepfakes)
  • The right to know when AI makes decisions about you
  • Bans on manipulative AI that tricks you into bad choices

Bottom line? AI in the EU will be safer, fairer and more transparent.

The AI Act vs. GDPR: How They Work Together

The GDPR protects personal data, while the AI Act regulates AI systems. They complement each other but have different focuses: the GDPR governs how personal data is collected and processed, while the AI Act governs how AI systems are designed, deployed and overseen.

Together, they create one of the strictest tech regulatory frameworks in the world. If the GDPR forced businesses to rethink data practices, the AI Act will force them to rethink AI governance.

Final Thoughts

The AI Act is a game-changer, setting the gold standard for ethical and safe AI. For AI companies, compliance is no longer optional – it's the cost of doing business in Europe. For companies using AI, transparency and fairness are now legal must-haves. And for users? AI will be safer, more transparent and (hopefully) less creepy.

While some worry that the law could slow down innovation, the EU is betting that trustworthy AI will win in the long run. After all, a world where AI is accountable, fair and non-manipulative sounds like one we all want to live in – whether you're a tech CEO or just someone hoping for a fair shake on a loan application. Europe is setting the rules of the AI game. The question is: Who's ready to play?

Stay curious, stay informed, and let's keep exploring the fascinating world of AI together.

This post was written with the help of different AI tools.

Disclaimer: The links provided on this blog lead to external websites that are not under my control. I do not guarantee the accuracy or content of those sites. Visiting these sites is at your own discretion and risk.

Check out previous posts for more exciting insights!