Internet Regulation Updater
The European Parliament voted in favour of the Compromise Text of the EU AI Act on 11 May 2023. The plenary vote is expected to take place in mid-June, after which the EU’s trilogue process (i.e. the three-way negotiation between the European Parliament, Commission and Council) will commence. It is anticipated that this process will conclude, and the AI Act pass, around the end of 2023 or the beginning of 2024. Once passed, a transition period will begin.
In the meantime, the Compromise Text expressly adds the recommender systems of Very Large Online Platforms (VLOPs) to the High Risk systems category (Annex 3). A number of VLOPs have recently been designated under the Digital Services Act (DSA) and are required to produce a systemic risk assessment (which includes consideration of recommender systems) by 25 August 2023. VLOPs must implement risk mitigation measures after conducting that risk assessment and are also subject to an annual external audit requirement to assess their compliance with DSA obligations.
This addition to the AI Act underscores the EU’s focus on AI and algorithmic systems as key risks that need to be managed. The Compromise Text sits alongside the recent establishment of the European Centre for Algorithmic Transparency (ECAT) and the draft DSA audit rules, which also call out the need for specific audits of AI and algorithms. Indeed, AI and algorithms feature throughout a growing body of recent and in-flight EU internet and digital regulation, and are an increasingly significant focus for global policy makers. A holistic approach to the risk management of AI and algorithms, together with an understanding of the breadth of the implications across the developing regulatory landscape, is key to effectively mitigating risk, ensuring compliance and navigating this complex area at the intersection of emerging technology and legislative change.
What are the changes to the draft AI Act?
Rationale for inclusion:
- The Compromise Text states that, given the number of users of social media platforms, their use can strongly influence “safety online, the shaping of public opinion and discourse, election and democratic processes and societal concerns”. Accordingly, legislators have now brought AI recommender systems used by social media platforms designated as VLOPs under the DSA within the scope of the AI Act. Such VLOPs will therefore be required to comply with the AI Act’s obligations on data governance, technical documentation and traceability, transparency, human oversight, accuracy and robustness.
Overlap with DSA:
- The recitals of the AI Act indicate that compliance with the AI Act should enable VLOPs to comply with their broader risk assessment and risk mitigation obligations under Articles 34 and 35 of the DSA.
This presents an opportunity for regulators and courts to streamline their approach to assessing VLOPs’ risk and compliance under the AI Act and Articles 34 and 35 of the DSA, which in turn would likely simplify this area of regulatory compliance for VLOPs. However, the extent to which this prevents overlap and regulatory fragmentation in VLOPs’ obligations under the DSA remains to be seen in practice.
- The recitals of the AI Act also suggest that the authorities designated under the DSA should act as the enforcement authorities for the AI recommender systems provisions of the AI Act.
From a practical standpoint for VLOPs, this may mean that the DSA Compliance Officer has a role in regulatory engagement regarding the AI Act.
It’s all about risk:
- An important compromise to be aware of is that legislators have now added a further requirement before a system is considered “high-risk” under the AI Act (i.e. inclusion on the Annex 3 list is not per se sufficient). The system must also “pose a significant risk of harm to the health, safety or fundamental rights of natural persons” (Art 6(2) of the AI Act).
For VLOPs and AI recommender systems (and AI generally), this all therefore comes back to the risk assessment.
If you would like to speak to the Deloitte team supporting clients on complying with fast-paced global internet regulation, please contact:
Joey Conway, Internet Regulation Partner, Legal Lead
Nick Seeber, Global Internet Regulation Lead
Mark Cankett, Regulatory Assurance Partner, Global Lead for Algorithm and AI Assurance
Content from Deloitte's Internet Regulation blog can now be sent direct to your inbox. Choose the topic and frequency by subscribing here and selecting Internet Regulation.