EU AI Act – Designing the 'CE Mark' for High-Risk AI Compliance

The EU AI Act, the world’s first comprehensive artificial intelligence regulation, has extraterritorial effect: it applies to companies everywhere, from New York to Geneva, that want to place AI systems or products on the EU market. The Act introduces a hierarchy of risk, with the greatest obligations placed on High-Risk AI systems (e.g., in healthcare, finance, and employment).

For these systems, companies must actively prove that the AI system is transparent, robust, unbiased, and under adequate human oversight. It is precisely in this complex documentation process that Legal Design Thinking (LDT) becomes essential.

The risk lies not only in creating an ethical AI system, but in proving it.

  • Legal Fog: The Act’s requirements are written in legal language, not operational instructions. Engineers and lawyers often don’t understand each other’s obligations.
  • Auditability: Regulators demand quick and clear compliance verification. Long, textual documents only slow down the audit and increase the risk of penalties (which can reach up to €35 million or 7% of annual global turnover).
  • Human Oversight: How can you visually prove that a human has truly taken responsibility for an algorithm’s decision — and not just formally?

LDT is used here to transform bureaucratic obligations into functional and visually verifiable working tools.

The ultimate goal is to obtain the CE conformity mark for the AI system. The CE mark certifies that a product (whether a physical toy or a complex AI algorithm) meets the minimum European standards before it enters the EU market.

LDT achieves this by designing a visual and transparent Compliance Management System:

Visual AI Risk Map (The Risk Classification Map):

  • LDT designs an interactive map that visually, step by step, guides the team through risk classification (unacceptable, high, limited).
  • The map clearly shows, through color coding, which regulatory article of the EU AI Act applies, allowing engineers to understand the legal context of their work.
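The classification step behind such a map can be sketched in code. The tiers below follow the Act's hierarchy (prohibited practices under Article 5, high-risk use cases under Annex III, limited-risk transparency duties); the use-case labels and keyword sets are hypothetical placeholders, not an official taxonomy:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices (Art. 5)
    HIGH = "high"                   # Annex III use cases
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # everything else

# Hypothetical, illustrative label sets -- a real system would map
# documented use cases to the Act's actual provisions.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
ANNEX_III_DOMAINS = {"employment", "credit_scoring", "medical_triage", "education"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

def classify(use_case: str) -> RiskTier:
    """Map an AI use case to its risk tier (illustrative only)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in ANNEX_III_DOMAINS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

An interactive map would walk the team through these same branches visually, with each branch color-coded to the article it implements.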
Human Oversight Dashboard:

  • For high-risk systems, LDT creates a control panel that visually shows the level of autonomy of the AI system.

The dashboard uses icons to alert the operator when the AI suggests a decision that falls outside the usual tolerance, forcing a human to input their decision and document the reason — thereby creating undeniable legal proof of human oversight.
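The audit-trail logic the dashboard enforces can be sketched as a minimal data structure: when a suggestion falls outside a tolerance band, a human decision with a documented reason is required before a record is written. The field names, the numeric baseline/tolerance model, and the `OversightLog` class are all assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OversightRecord:
    ai_suggestion: float   # value the AI proposed
    human_decision: float  # value the operator actually approved
    reason: str            # mandatory documented justification
    timestamp: str         # UTC time of the human decision

@dataclass
class OversightLog:
    tolerance: float                         # allowed deviation from baseline
    baseline: float                          # expected value for this decision
    records: list = field(default_factory=list)

    def needs_review(self, suggestion: float) -> bool:
        """Flag suggestions outside the tolerance band for human review."""
        return abs(suggestion - self.baseline) > self.tolerance

    def record(self, suggestion: float, decision: float, reason: str) -> OversightRecord:
        """Write an audit entry; refuse to log a decision without a reason."""
        if not reason:
            raise ValueError("A documented reason is required for oversight evidence")
        entry = OversightRecord(suggestion, decision, reason,
                                datetime.now(timezone.utc).isoformat())
        self.records.append(entry)
        return entry
```

The key design choice is that the log cannot be written without a reason: the documented justification, not the click itself, is what turns human oversight into evidence.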

LDT converts hundreds of pages of technical specifications (evidence of accuracy, robustness, cybersecurity) into visually organized, labeled, and searchable modules. This visually simplified documentation allows regulators to conduct audits in record time, directly reducing regulatory risk.

The EU AI Act imposes a global obligation of "AI by Design." LDT is the methodology that ensures the AI system is not only technically sound but also legally and ethically designed to be trustworthy. By designing a verifiable compliance system, companies protect their global ambitions and avoid massive fines.

Is your AI system waiting for the EU to stop it, or is LDT designing it for global success?
