Deloitte Internet Regulation Updater
Under the EU AI Act’s risk-based approach, the rules that apply to an AI system depend on the risks it poses to health, safety and fundamental rights. The first compliance deadlines under the EU AI Act fell on 2 February 2025 and included the rules on Prohibited AI practices. Using AI systems for any of the prohibited practices is deemed so risky that it is now banned, and any organisation found to be placing on the market, putting into service or using AI systems for such practices could face fines of up to 7% of its total worldwide annual turnover once the penalty provisions apply from 2 August 2025.
Impact: To comply with the Prohibited AI rules (and to identify and inventory AI models and systems in scope of further AI Act obligations), organisations need to carry out two key tasks: (1) identify AI systems that are in scope of the EU AI Act; (2) of those systems, identify any being used for prohibited practices and either remediate or decommission them.
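As a purely illustrative sketch of these two tasks (not something prescribed by the Act or the guidance), the triage could be modelled as a simple filter over an AI inventory; the names `SystemRecord`, `is_ai_system` and `uses_prohibited_practice` are hypothetical placeholders for an organisation’s own inventory fields:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    OUT_OF_SCOPE = auto()               # not an AI system under the EU AI Act definition
    REMEDIATE_OR_DECOMMISSION = auto()  # in scope and used for a prohibited practice
    ASSESS_FURTHER = auto()             # in scope; assess against remaining obligations


@dataclass
class SystemRecord:
    name: str
    is_ai_system: bool              # task 1: meets the Article 3(1) definition?
    uses_prohibited_practice: bool  # task 2: matches an Article 5 practice?


def triage(record: SystemRecord) -> Action:
    """Two-step triage: establish scope first, then screen for prohibited use."""
    if not record.is_ai_system:
        return Action.OUT_OF_SCOPE
    if record.uses_prohibited_practice:
        return Action.REMEDIATE_OR_DECOMMISSION
    return Action.ASSESS_FURTHER
```

The point of the sketch is ordering: the prohibited-practice screen only applies once a system has been determined to be in scope, which is why building the inventory comes first.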
Guidance: To support organisations with their compliance, the EU AI Office has published additional guidance on the definition of AI systems and on Prohibited AI practices.
Key takeaways
Definition of AI systems under the EU AI Act
It is generally accepted that the definition of AI systems used in the EU AI Act is broad. The EU AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
In its guidance, published on 6 February, the AI Office seeks to provide further clarity to organisations to help identify AI systems in scope of the EU AI Act. The guidance highlights the seven main elements of the definition: (1) a machine-based system; (2) that is designed to operate with varying levels of autonomy; (3) that may exhibit adaptiveness after deployment; (4) and that, for explicit or implicit objectives; (5) infers, from the input it receives, how to generate outputs; (6) such as predictions, content, recommendations, or decisions; (7) that can influence physical or virtual environments.
The guidance provides further detail on each of the seven elements. We would encourage organisations to use this guidance to support the process of identifying AI systems in scope of the EU AI Act within their inventories. In particular, the guidance highlights that a system’s ability to infer is a key, indispensable condition that distinguishes AI systems from “simpler traditional software systems or programming approaches”, and that systems based on rules defined solely by natural persons to automatically execute operations are not AI systems for the purposes of EU AI Act compliance.
Crucially, the guidance also seeks to define systems that are not in scope of the EU AI Act. In particular, it identifies systems which, whilst having a narrow capacity to infer, may nevertheless fall outside the scope of the AI system definition because of their limited capacity to analyse patterns and autonomously adjust their output. Such systems include: systems for improving mathematical optimisation; basic data processing; systems based on classical heuristics; and simple prediction systems.
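As a hedged illustration only, the seven definitional elements and the exclusions above could be captured as a screening checklist; the field names below are our own shorthand, not terms from the guidance, and the screen is a triage aid rather than a legal determination:

```python
from dataclasses import dataclass


@dataclass
class ScopingChecklist:
    # The seven elements of the Article 3(1) definition, per the AI Office guidance.
    machine_based: bool
    varying_autonomy: bool
    may_adapt_after_deployment: bool  # "may exhibit": adaptiveness is not required
    explicit_or_implicit_objectives: bool
    infers_outputs_from_input: bool   # the key, indispensable condition
    generates_predictions_content_recommendations_or_decisions: bool
    influences_physical_or_virtual_environments: bool
    # Exclusion flagged by the guidance: rules defined solely by natural persons
    # to automatically execute operations.
    rules_defined_solely_by_humans: bool = False


def likely_in_scope(c: ScopingChecklist) -> bool:
    """Rough screen: inference is treated as indispensable, adaptiveness as optional."""
    if c.rules_defined_solely_by_humans or not c.infers_outputs_from_input:
        return False
    return all([
        c.machine_based,
        c.varying_autonomy,
        c.explicit_or_implicit_objectives,
        c.generates_predictions_content_recommendations_or_decisions,
        c.influences_physical_or_virtual_environments,
    ])
```

Note that `may_adapt_after_deployment` is recorded but not required: the definition says an AI system “may exhibit” adaptiveness, so its absence does not take a system out of scope.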
Definitions of Prohibited AI practices
The AI Office also published, on 4 February, 140 pages of guidance to support organisations in identifying Prohibited AI practices as defined in the EU AI Act. This guidance clarifies the meaning of ‘placing on the market’, ‘putting into service’ or ‘use’ of an AI system in the context of Prohibited AI practices. It also highlights the interplay between Prohibited AI uses and high-risk AI uses (for example, emotion recognition systems which do not fulfil the conditions for being prohibited will be classified as high-risk AI systems). Finally, it provides additional clarity on each of the elements of the Prohibited AI Article. For example, the guidelines clarify that social scoring may be permitted where it is lawful, uses only data collected for a specific purpose, and the response is justified and proportionate. The guidelines also highlight that the harm to an individual or a group of individuals must be “significant” for a practice that subliminally manipulates, or that targets vulnerable groups, to be prohibited.
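To make the conditional logic in the previous paragraph concrete, here is a minimal, hedged sketch; the function names and the boolean framing are our own simplification of the guidelines, not text from them:

```python
from dataclasses import dataclass


@dataclass
class SocialScoringAssessment:
    # The three conditions under which, per the guidelines, social scoring
    # may be permitted.
    lawful: bool
    uses_only_data_collected_for_specific_purpose: bool
    response_justified_and_proportionate: bool


def social_scoring_may_be_permitted(a: SocialScoringAssessment) -> bool:
    """All three conditions must hold; failing any one points towards prohibition."""
    return (a.lawful
            and a.uses_only_data_collected_for_specific_purpose
            and a.response_justified_and_proportionate)


def manipulation_practice_prohibited(subliminal_or_targets_vulnerable: bool,
                                     harm_is_significant: bool) -> bool:
    """Harm threshold: prohibited only where harm to an individual or group
    is "significant"."""
    return subliminal_or_targets_vulnerable and harm_is_significant


def classify_emotion_recognition(meets_prohibition_conditions: bool) -> str:
    """Interplay noted in the guidelines: not prohibited implies high-risk."""
    return "prohibited" if meets_prohibition_conditions else "high-risk"
```

In practice, each boolean here hides a substantive legal judgement (what counts as “significant” harm, for instance), which is exactly the kind of interpretive decision the next section notes organisations will need to make for themselves.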
Next steps and how Deloitte can help
Organisations needing to comply with the EU AI Act should ensure that they have read and understood these pieces of guidance and considered the impact on their AI governance and processes. The guidelines are particularly helpful while there is no case law on where to draw the lines, although organisations will likely still need to define their own risk appetites and norms through an informed and thorough process. On the AI system definition, organisations will need to decide whether to adopt the guidance, including the exemptions, in full across their AI estate. On Prohibited AI, organisations will need to consider how they interpret terms such as “significant” when making a final determination of whether a practice is prohibited. Providers of general-purpose systems designed for downstream integration will need to consider the steps they can take to ensure that their systems are not used for a prohibited AI practice, for example technical guardrails and contractual clauses. A key challenge for all organisations is how to translate this guidance into something practical and understandable that can be rolled out across the organisation without requiring everyone to become an AI, or EU AI Act, expert.
Deloitte is supporting organisations across a range of sectors with their compliance with the EU AI Act, including helping to create inventories of AI systems in scope of the EU AI Act; agreeing definitions of AI systems; implementing easy-to-use questionnaires to help organisations identify Prohibited AI practices; building EU AI Act classification and approval processes into their AI governance; and decommissioning or remediating systems being used for Prohibited AI practices. In addition, we are helping organisations prepare for upcoming EU AI Act compliance deadlines relating to general-purpose AI models and putting in place processes and governance to identify and manage the risks of high-risk AI systems. If you want to find out more about how our multi-disciplinary digital regulation team can support you, please contact a member of the team below.
Your contacts
Joey Conway, Internet Regulation Partner, Legal Lead
Nick Seeber, Global Internet Regulation Lead Partner
Scott Bailey, Director, Global Internet Regulation Lead
Piyush Goraniya, Senior Associate, Global Internet Regulation Lead