
EU AI Act: the deadline most Danish companies have forgotten

5 min. read
ai-strategy · eu-ai-act · regulation

On 2 August 2026, the majority of the EU AI Act's obligations become applicable. That is five months away. And most Danish companies have done nothing about it.

It is not because the regulation is new. It entered into force on 1 August 2024. It just has not felt urgent, because the first rules only prohibited the most obvious abuses: social scoring, manipulation, biometric surveillance in public spaces. Things your company was probably not doing anyway.

But August 2026 is a different category.

Who are you in the eyes of the regulation?

The central concept you need to understand is “deployer” – the word Article 3 of the AI Act uses for “a natural or legal person using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”

In short: if your company uses AI in a professional context, you are a deployer. That is not only software developers and tech companies. It is everyone using AI tools in workflows that affect other people.

The interesting thing is what qualifies as high risk. Risk levels for AI systems are determined not by the technology but by what it is used for. CV sorting and recruitment tools are explicitly listed as high-risk. So is credit scoring. So are educational assessments that affect access to courses and training programmes. From August 2026, all of these systems will require risk assessments, documentation, human oversight, and logging.

Transparency requirements apply to most companies

Even if your company does not touch any of the high-risk categories, the transparency requirements apply to you. From August 2026, AI-generated content must be labelled. Chatbots must inform users that they are talking to a machine. Deepfakes and AI-generated text published as journalism must be clearly marked.

That may sound like a small thing. But consider how many companies today use AI for customer service, newsletters, product descriptions and press releases without any labelling whatsoever. That will not be legal from August 2026.
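In practice, labelling can start as something as mechanical as attaching a disclosure notice to outbound content before it is published. Here is a minimal Python sketch of that idea; the label wording, the field names, and the `render` helper are illustrative assumptions, not anything the regulation prescribes:

```python
from dataclasses import dataclass

# Hypothetical disclosure label -- the AI Act requires that users are
# informed, but does not prescribe exact wording or a wire format.
AI_DISCLOSURE = "This text was generated with the assistance of AI."

@dataclass
class OutboundContent:
    body: str
    ai_generated: bool  # set by whoever produced the content

def render(content: OutboundContent) -> str:
    """Append a disclosure notice to AI-generated content before publishing."""
    if content.ai_generated:
        return f"{content.body}\n\n[{AI_DISCLOSURE}]"
    return content.body
```

The point is not the code but the process: somewhere in the publishing pipeline, a human or a system has to know whether content is AI-generated and act on it.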

The European Commission describes this as “specific disclosure obligations to ensure that humans are informed when necessary to preserve trust.” That is a gentle formulation for something that comes with fines of up to 15 million euros or 3 percent of global annual turnover – and up to 35 million euros or 7 percent for the outright prohibited practices.

This is not about technology

The problem with the EU AI Act is the same as with GDPR: most companies waited until it was too late, then spent a fortune on panic implementation.

The difference is that the AI Act does not require you to build anything new. It requires you to understand what you already have. Which AI systems are you using? For what purposes? Who is the supplier, and what does the documentation say about intended use and risk level? Do you have processes for human oversight where AI affects employees or customers?

These are questions of AI strategy, not technology. And they take time to answer properly – not because they are complex, but because they require you to actually map what is happening inside your organisation.
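That mapping can begin as a plain inventory: one record per AI system, answering the questions above. A minimal Python sketch of what such a record might capture (the system names, supplier, and field choices are hypothetical examples, not a prescribed schema):

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"          # e.g. recruitment, credit scoring
    LIMITED = "limited"    # transparency obligations apply
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    supplier: str
    purpose: str
    risk_level: RiskLevel
    human_oversight: bool  # does a person review decisions the system affects?

# Hypothetical inventory for illustration.
inventory = [
    AISystemRecord("CV screening tool", "ExampleVendor", "recruitment",
                   RiskLevel.HIGH, human_oversight=True),
    AISystemRecord("Support chatbot", "ExampleVendor", "customer service",
                   RiskLevel.LIMITED, human_oversight=False),
]

# The systems that need the most work before August 2026.
high_risk = [r for r in inventory if r.risk_level is RiskLevel.HIGH]
```

A spreadsheet does the same job; what matters is that the inventory exists and that someone owns it.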

Five months is enough, if you start now

I am not saying you need to engage an army of legal advisors. I am saying that over the coming months you should know: which AI systems are we using, what category do they fall into under the regulation, and what does that require of us as a deployer?

For many companies the answer will be that they primarily use AI in low-risk contexts, and that the transparency requirements are what actually needs action. That is manageable. But it requires someone to actually look at it.

Get in touch to talk through what the AI Act means for your company, or read more about how we help with AI strategy and advisory.


Let's talk about how AI can elevate your business

Contact me