EU AI Act – Designing the 'CE Mark' for High-Risk AI Compliance

The EU AI Act, the world’s first comprehensive artificial intelligence regulation, has extraterritorial effect – it applies to any company, whether based in New York, Geneva, or anywhere else, that wants to place AI systems or products on the EU market. The Act introduces a hierarchy of risk, with the greatest obligations placed on high-risk AI systems (e.g., in healthcare, finance, and employment).
For these systems, companies must actively prove that the AI system is transparent, robust, unbiased, and under adequate human oversight. It is precisely in this complex documentation process that Legal Design Thinking (LDT) becomes essential.
The challenge lies not only in building an ethical AI system, but in proving that it is one.
- Legal Fog: The Act’s requirements are written in legal language, not operational instructions. Engineers and lawyers often don’t understand each other’s obligations.
- Auditability: Regulators demand quick and clear compliance verification. Long, textual documents only slow down the audit and increase the risk of penalties (which can reach up to €35 million or 7% of annual global turnover).
- Human Oversight: How can you visually prove that a human has truly taken responsibility for an algorithm’s decision — and not just formally?
LDT is used here to transform bureaucratic obligations into functional and visually verifiable working tools.
The ultimate goal is to obtain the CE conformity marking for the AI system. The CE mark is the guarantee that a product (whether a physical toy or a complex AI algorithm) meets the minimum European standards before entering the EU market.
LDT achieves this by designing a visual and transparent Compliance Management System:
Visual AI Risk Map (The Risk Classification Map):
- LDT designs an interactive map that visually, step by step, guides the team through risk classification (unacceptable, high, limited, minimal).
- The map clearly shows, through color coding, which regulatory article of the EU AI Act applies, allowing engineers to understand the legal context of their work.
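The classification-and-color-coding flow described above can be sketched as a simple lookup. This is only an illustration: the use-case names and the tier-to-color mapping are hypothetical, and a real tool would follow the Act's own classification rules rather than a hard-coded table.

```python
# Illustrative sketch of a risk-classification lookup.
# Use-case names and color codes are hypothetical examples,
# not an official taxonomy from the EU AI Act.
RISK_TIERS = {
    "social_scoring": ("unacceptable", "red"),
    "cv_screening": ("high", "orange"),
    "chatbot": ("limited", "yellow"),
}

def classify(use_case: str) -> tuple[str, str]:
    """Return (risk tier, color code) for a use case, defaulting to minimal risk."""
    return RISK_TIERS.get(use_case, ("minimal", "green"))
```

An interactive map would layer questions and the relevant Act provisions on top of this kind of mapping, so that an engineer selecting a use case immediately sees both the tier and its legal context.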
Human Oversight Dashboard:
- For high-risk systems, LDT creates a control panel that visually shows the AI system's level of autonomy.
- The dashboard uses icons to alert the operator when the AI suggests a decision that falls outside the usual tolerance, requiring a human to input their decision and document the reason — thereby creating undeniable legal proof of human oversight.
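The oversight mechanism just described — flag a decision outside tolerance, then require and record the operator's decision and reason — can be sketched as a minimal audit-trail class. The tolerance value, field names, and log structure here are all illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

TOLERANCE = 0.15  # hypothetical deviation threshold that triggers human review


@dataclass
class OversightLog:
    """Illustrative audit trail for human-oversight decisions (sketch only)."""
    entries: list = field(default_factory=list)

    def review(self, ai_score: float, baseline: float,
               operator: str, decision: str, reason: str) -> bool:
        """Flag the AI's score if it deviates from the baseline beyond
        tolerance; flagged decisions require a documented human reason,
        which is recorded as a timestamped audit entry."""
        flagged = abs(ai_score - baseline) > TOLERANCE
        if flagged:
            if not reason:
                raise ValueError("A documented reason is mandatory for flagged decisions")
            self.entries.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "operator": operator,
                "decision": decision,
                "reason": reason,
            })
        return flagged
```

The key design choice is that the log entry cannot be created without a non-empty reason — the record itself becomes the proof that a human genuinely took responsibility, rather than rubber-stamping the algorithm.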
LDT converts hundreds of pages of technical specifications (evidence of accuracy, robustness, cybersecurity) into visually organized, labeled, and searchable modules. This visually simplified documentation allows regulators to conduct audits in record time, directly reducing regulatory risk.
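The idea of labeled, searchable documentation modules can be sketched as a tiny index. The module labels and placeholder bodies below are hypothetical; the point is simply that an auditor queries by label or keyword instead of paging through a monolithic document.

```python
# Minimal sketch of a labeled, searchable documentation index.
# Labels and contents are hypothetical placeholders.
modules = {
    "accuracy": "Validation protocol, metrics, and test evidence ...",
    "robustness": "Adversarial and stress-test evidence ...",
    "cybersecurity": "Threat model and penetration-test summary ...",
}

def search(query: str) -> list[str]:
    """Return labels of modules whose label or body mentions the query."""
    q = query.lower()
    return [label for label, body in modules.items()
            if q in label or q in body.lower()]
```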
The EU AI Act imposes a global obligation of "AI by Design." LDT is the methodology that ensures the AI system is not only technically sound but also legally and ethically designed to be trustworthy. By designing a verifiable compliance system, companies protect their global ambitions and avoid massive fines.
Is your AI system waiting for the EU to stop it, or is LDT designing it for global success?