AI is infiltrating the workplace, with businesses using it to make processes more efficient and cost-effective. Generative AI (GenAI) is a growing area of AI that uses machine learning to create new content based on the patterns and structure of input training data.
HR departments are using GenAI-enabled tools to help identify potential candidates for recruitment and promotions. However, there are legal and ethical impacts to consider.
Employment law considerations and risks
Using GenAI tools in recruitment carries the potential for discrimination and bias, which can lead to decisions that breach the Equality Act 2010. GenAI models use complex algorithms to generate outputs, and if a model is trained on a biased dataset, it may produce biased outputs. For example, if a GenAI tool is trained on the CVs of predominantly male employees, it may favour male candidates, leading to discrimination against female candidates. This bias can similarly affect decisions related to employee promotions.
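To illustrate, one simple and purely illustrative way to surface this kind of skew is to compare selection rates between groups in a screening tool's output. The Python sketch below uses hypothetical screening outcomes; the data, function names and the selection-rate metric are assumptions for illustration, not any statutory test under the Equality Act 2010:

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 suggest the tool favours one group."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening results from a CV-ranking tool
outcomes = ([("male", True)] * 40 + [("male", False)] * 60
            + [("female", True)] * 20 + [("female", False)] * 80)

rates = selection_rates(outcomes)
print(rates)                          # {'male': 0.4, 'female': 0.2}
print(disparate_impact_ratio(rates))  # 0.5
```

A ratio well below 1.0 would be a prompt to investigate the training data and the model's behaviour before relying on the tool's recommendations.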
In practical terms, this could result in employers missing out on the top candidates because they have been excluded before meeting a human interviewer, or making sub-optimal recruitment decisions overall (for example, hiring too many similar individuals instead of a workforce with diverse skill sets). To achieve a diverse and inclusive workforce (and meet DE&I targets), it is crucial for employers to minimise or eliminate biased decision-making.
In addition, employers are duty-bound to make decisions in good faith and with transparency. However, when decisions are made without human input, it can be difficult for employers to explain the reasoning behind their decisions or to demonstrate their objectivity. This could lead to situations where employers find it challenging to uphold their duty of trust and confidence, or to defend a discrimination claim.
Data protection considerations and risks
AI can automate parts of the recruitment process from initial candidate screening to assessing a candidate against a job specification. Candidates can also access platforms that analyse their skills and experience and match these to available opportunities. While this technology offers benefits, it also brings potential privacy and data protection risks, in addition to the possibility of bias leading to unfair outcomes for individuals.
Key requirements that must be met when utilising personal data for AI-enhanced e-recruitment include:
- adhering to the data protection principles (especially transparency, fairness and lawfulness, and also data minimisation and purpose limitation);
- appropriate pre-contract diligence and contract terms with processors;
- ensuring data subjects can exercise their rights, including as regards automated decision-making; and
- carrying out data protection impact assessments (DPIAs) for high-risk use cases.
Currently, where solely automated decision-making (including profiling) that has legal or similarly significant effects on individuals is deployed, the processing needs to:
- have the decision justified by a specific legal basis (namely, performance of contract, legal obligation or explicit consent);
- fulfil transparency and accountability requirements;
- be supported by easy ways to request human intervention or to challenge a decision; and
- have accuracy and bias checks, with feedback loops into design improvements.
Solely automated decision-making underpins, and is a core function of, GenAI systems. Where such processing uses special category data (e.g. health data), additional, stricter safeguards are required under the EU GDPR/UK GDPR.
Future regulatory developments
The EU and UK have taken different approaches to regulating AI. The EU has proposed detailed regulation through the EU AI Act, which places additional obligations on users of "high-risk" AI systems. These include any systems that involve automated processing of personal data to assess various aspects of a person's life. In an employment context, this could include using AI to place targeted job advertisements or evaluate candidates.
Employers using GenAI to assess candidates or employees will be subject to additional obligations, which include designing the system for human oversight and establishing a quality management system for compliance. The EU AI Act will require HR functions to take a more proactive approach to compliance and ensure that AI systems are used in a responsible and ethical manner.
In contrast, the previous UK government stopped short of proposing specific and detailed regulation for AI generally or focusing on the use of such technologies in the workplace.
However, the recent King’s Speech outlined how the new Government would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. The new Government has proposed introducing:
- the Cyber Security and Resilience Bill – this aims to give greater power to regulators to compel more organisations to deploy better cybersecurity safeguards and introduce more detailed reporting requirements for the purpose of enhanced threat identification; and
- the Digital Information and Smart Data Bill – this focuses on “innovative” uses of data to encourage economic growth and also establishes a statutory digital verification regime.
The second Bill will also bolster the UK Information Commissioner’s Office’s regulatory powers, together with clarifying data laws, so as to ensure that new technologies, including GenAI, can be safely developed and deployed.
Top 10 key considerations for employers
To effectively manage and mitigate the risks identified above, employers should:
- Carefully assess the use case for GenAI, ensuring that humans retain ultimate responsibility for and oversight of critical decisions.
- Evaluate the training data used to train the GenAI system, focusing on accuracy and the inclusion of diverse populations.
- Conduct testing and auditing of GenAI systems to identify and address bias, ensuring that any necessary corrections are integrated into the algorithm.
- Incorporate appropriate contractual protections and conduct AI-focused due diligence when engaging with GenAI suppliers.
- Adopt clear and comprehensive internal policies governing employee usage of GenAI tools.
- Address data protection compliance requirements, including providing clear privacy information, conducting lawful basis assessments, implementing DPIAs, and incorporating human review processes and other safeguards.
- Prepare for complaints and data subject requests, ensuring effective handling and review processes are in place to address concerns and respond to access requests.
- For global organisations, decide whether to follow the strictest standards set by the EU AI Act throughout the entire organisation or to take a more local approach and leverage more permissive regimes.
- Take steps to effectively train and upskill employees for successful adoption of GenAI, and seek their buy-in by engaging with them early in the adoption process.
- Stay informed about the evolving regulatory landscape, particularly the newly enacted EU AI Act, and anticipate further guidance from EU regulators. The UK's approach is still evolving, particularly with the recent change of Government, and so its alignment with European initiatives remains to be seen.
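The first consideration above (humans retaining ultimate responsibility for critical decisions) is often implemented as a review gate, in which the system may automate a positive outcome but routes everything else to a human reviewer rather than rejecting a candidate automatically. A minimal Python sketch, with hypothetical names and a hypothetical score threshold:

```python
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate_id: str
    ai_score: float   # hypothetical score from a GenAI screening tool
    decision: str     # "advance" or "human_review"; never an automated reject

def route_candidate(candidate_id: str, ai_score: float,
                    advance_threshold: float = 0.8) -> ScreeningResult:
    """Only the positive outcome is automated; all other candidates are
    routed to a human reviewer rather than rejected solely by the system."""
    if ai_score >= advance_threshold:
        decision = "advance"
    else:
        decision = "human_review"
    return ScreeningResult(candidate_id, ai_score, decision)

# A low-scoring candidate is referred to a person, not discarded
print(route_candidate("c-101", 0.55).decision)  # human_review
```

Designs like this also make it straightforward to offer the "easy ways to request human intervention or to challenge a decision" noted above, since a human reviewer is already part of the workflow.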
To find out more about how Deloitte Legal is helping our clients to harness the potential of AI, visit: Artificial Intelligence & Generative AI: The impact for legal departments.