Deloitte Internet Regulation Updater
The EU AI Act (Act) is entering the final phase of its legislative journey. The final text of this important piece of AI regulation is expected to be approved around the end of the year. However, even once in force, the Act provides for a transition period (currently 24 months in the draft text), meaning most of the rules will not actually apply to those in scope for a significant lead time. This creates a tension given the pace of AI development and adoption. As a result, the EU is considering how to control AI risk ahead of the Act applying. The EU Commissioner for Internal Market, Thierry Breton, is proposing a voluntary AI pact between companies and the European Commission to bring in tighter controls ahead of the Act coming into force.
In that context, it is important to consider the implications of the AI Act now, regardless of the lead time, and to keep abreast of developments and changes to the draft rules.
Below is a recap on the current position: the key features of the Act, where the Act is in the EU legislative process, when it might become law and the key changes that the European Parliament has introduced at its review stage.
Background
The Act is the first piece of comprehensive legislation that regulates the use of artificial intelligence (AI) in the EU.
The European Commission first proposed the Act in April 2021 to ensure AI systems are transparent, traceable, non-discriminatory and safe.
The Act creates a set of rules that establishes obligations for AI users and providers depending on the risk level of the AI.
- AI systems that represent an ‘unacceptable risk’, i.e. those considered a threat to people, such as social scoring systems that classify people based on behaviour, socio-economic status or personal characteristics, will be banned.
- ‘High risk’ AI systems, such as those that negatively impact safety or fundamental rights, will be subject to a conformity assessment before they are placed on the market. The assessment aims to verify that the AI system has adequate internal controls or other quality management systems in place; that appropriate compliance documentation is available; that results are traceable; that transparency and human oversight are ensured; and that the system meets the accuracy, robustness and security requirements. Any substantial modification to such an AI system must undergo a further conformity assessment. However, there is a carve-out for AI systems that continue to ‘learn’ after being placed on the market, provided the changes to the algorithm or its performance are pre-determined and included in the initial conformity assessment.
- ‘Limited risk’ AI systems will be subject to more limited transparency requirements. These include ensuring the user is made aware when they are interacting with AI.
- Generative AI will be subject to transparency requirements, including a requirement to design the AI model so that it is restricted from generating illegal content.
Latest developments
On 14 June 2023, the European Parliament approved its version of the Act. Key proposals in the Parliament’s version of the Act include:
- Expanding the list of banned AI systems to include intrusive and discriminatory uses of AI such as remote biometric identification systems in public spaces, emotion recognition and predictive policing systems;
- Expanding the ‘high risk’ category to include AI systems that pose significant harm to people’s health, safety, fundamental rights or the environment, used to influence voter behaviour and election outcomes or used by very large online platforms in recommender systems;
- New requirements on providers of foundation AI models (which include generative AI models), including to demonstrate ‘compliance-by-design’ throughout AI development to ensure appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity, and to register the AI model in an EU database; and
- Increasing the potential fines for non-compliance from the higher of EUR 30 million or 6% of the total worldwide annual turnover (in the Commission’s version) to the higher of EUR 40 million or 7% of the total worldwide annual turnover.
The European Parliament has commenced negotiations (trilogue) with the European Commission and the EU Council to agree and approve a final text of the Act. The main areas of dispute are expected to be the high-risk categories and fundamental rights.
Shortly after adoption of the final text, the Act would be directly applicable in EU member states (without the need for any national implementing legislation). It is anticipated that the EU will reach agreement on the Act by the end of Q4 2023 (or Q1 2024, at the latest). However, even if the Act is adopted within this timeframe, the earliest it will apply is 2025 (given that the transition period for most rules is currently 24 months). As noted above, the EU is considering how to manage AI risks in the meantime, including through voluntary adoption of the framework by those in scope.
Your contacts
If you would like to speak to the Deloitte team supporting clients on complying with fast-paced global internet regulation, please contact:
Joey Conway, Internet Regulation Partner, Legal Lead
Nick Seeber, Internet Regulation Lead Partner
Aurora Pack, Internet Regulation Associate Director, Legal
To stay up to date on any developments, please subscribe to our blog. In the meantime, please reach out to your usual contact in our team if you would like to discuss any of these proposals further.