A category in the EU AI Act for AI systems that pose limited risk and therefore face lighter compliance obligations, primarily transparency requirements. Limited-risk systems include chatbots (users must be told they are interacting with AI), AI that generates synthetic content (output must be labeled as AI-generated), and AI that manipulates images or video (deepfakes must be disclosed). Most commercial AI applications, such as customer service bots, AI writing tools, and AI image generators, fall into this category, and its obligations are achievable without significant compliance overhead.
Why this matters for your team
Most commercial AI applications fall into this category. The core obligation is simple: tell users when they are interacting with AI, and label AI-generated content. Both are achievable with straightforward UI changes, and meeting them spares you the far heavier compliance obligations that apply to high-risk systems.
A company's AI customer service chatbot is a limited-risk AI system. The EU AI Act requires that users be clearly informed they are talking to an AI — a disclosure notice at the start of the conversation satisfies this obligation.
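The disclosure described above can be sketched in code. This is a minimal, hypothetical example (the function and message-field names are illustrative, not from any real chatbot framework): the disclosure is injected as the first message of every new session, so the user sees it before any AI reply.

```python
# Hypothetical sketch: satisfying the EU AI Act's chatbot transparency
# obligation by showing an AI disclosure at the start of every conversation.
# The data shape (list of {"role", "text"} dicts) is an assumption for
# illustration, not a real framework's API.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def start_conversation(greeting: str) -> list[dict]:
    """Open a new chat session with the AI disclosure shown first."""
    return [
        {"role": "system", "text": AI_DISCLOSURE},  # disclosure precedes any reply
        {"role": "assistant", "text": greeting},
    ]

messages = start_conversation("Hi! How can I help you today?")
print(messages[0]["text"])  # the disclosure is always the first message
```

Keeping the disclosure in session-initialization logic, rather than scattered across UI templates, makes it easy to verify the notice appears in every entry point to the chatbot.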