UK jobseekers are being auto-rejected by AI – and then asked to do a video interview anyway
UK jobseekers are being auto-rejected by AI hiring tools, yet are still asked to complete a video interview as part of the application process.
Law firm Browne Jacobson has warned that some candidates are facing a recruitment process that is increasingly impersonal and, in many cases, unlawful.
In one case, a third-year university student described applying for over 100 jobs and receiving a rejection less than two minutes after submission, before any human could plausibly have read her application.
The typical automated journey involves AI CV screening, online testing, and AI video interviews in which candidates record responses with no human on the other end.
The introduction of AI to oversee hiring can also be an exercise in frustration for jobseekers, and can likewise lead to unlawful practices. A 'cost-efficiency first' approach, driven by AI or machine learning, has emerged in which candidates are algorithmically rejected at the testing stage and then invited to complete a video interview anyway.
This is likely unlawful on multiple grounds, Browne Jacobson said. It involves collecting personal data, including facial expressions and vocal tone, that is excessive when the rejection decision has already been made.
“Organisations that may be primarily fishing for candidate applications and video capture content, predominantly to train AI models or build talent insight databases, may be at risk of regulatory investigation, as well as data subject complaint,” the firm said.
“Such practices may also breach the fairness and transparency principle under Article 5(1)(a) of the UK GDPR, as the video stage is presented as genuine when the outcome is already determined at the testing stage. The immediate rejection that follows without human review further violates Article 22 of the UK GDPR.”
It added that when human review is cursory or inconsistent, the safeguards required by UK GDPR must apply.
Those safeguards include informing candidates that automated decision-making (ADM) is being used, offering the right to request human review, and ensuring candidates can contest the decision. Organisations should also consider whether ADM is appropriate at all, for example by carrying out a data privacy or AI risk or conformity assessment to identify and eliminate any bias or discrimination and to weigh the reputational risk associated with such tools. Few organisations appear to be meeting all three safeguards.
The firm noted that algorithmic tools trained on historical data can replicate and entrench existing patterns of discrimination.
Without active monitoring, employers may not know their tools are producing discriminatory outcomes until a complaint or claim arrives, by which point they may struggle to defend the claim successfully.
“With the Data (Use and Access) Act 2025 now in force and the ICO consulting on updated ADM guidance, the legal framework is shifting. Clients who act now will be better placed to respond to final guidance and avoid enforcement,” the firm said.