The UK’s AI minister and Chancellor don’t actually use AI – that’s a big problem


In a week when the UK government unveiled a £500 million Sovereign AI unit and declared its ambition to make Britain a “world leader in AI,” two of the most senior figures driving the agenda have made a surprising admission: they don’t actually use the technology at work.

Science, Innovation and Technology Secretary Liz Kendall, the cabinet minister responsible for the UK’s AI strategy, told BBC Radio 5 Live’s Matt Chorley that she uses AI tools “personally rather than at work.”

“I’m much more likely to use it in my personal life,” she said. Kendall went on to share a light-hearted example: after suffering an allergic reaction, she asked AI to analyse the ingredients in multiple skincare products, cross-referenced the suggestions with the National Eczema Society, and successfully treated her symptoms with a pharmacist-recommended cream. “I checked the sources…it worked,” she added.

Chancellor Rachel Reeves, who has repeatedly positioned AI as “the defining technology of our era” and the route to the UK achieving the fastest adoption rate in the G7, was similarly candid in a recent exchange.

When asked which AI tools she uses, whether ChatGPT, copilots, or large language models, Reeves replied: none. She joked that it “maybe (is) where I’m going wrong.”

The comments come as the pair jointly promoted major AI investments, including the new Sovereign AI unit designed to back British frontier AI companies and keep talent and infrastructure in the UK.

Reeves has emphasised in speeches, including her recent Mais Lecture, that the government must not “bury our heads in the sand” and leave AI development to others.

Kendall has echoed this, pointing to free AI skills training for up to 10 million workers to manage job transitions while comparing the shift to past industrial revolutions.

A glaring irony at the top

The disconnect is more than a personal quirk: it’s a leadership failure that could undermine the very cultural shift the government is trying to engineer.

The UK public sector has been experimenting with AI tools like the DWP’s “Humphrey” system (named after the Yes Minister character) for tasks such as summarising consultation responses and rewriting CVs for jobseekers.

Kendall’s predecessor, Peter Kyle, was vocal about efficiency gains: “No one should be wasting time on something AI can do quicker and better.” Yet Kendall herself rejected reports that AI is already drafting laws, insisting it has not touched her online safety legislation.

For the ministers at the heart of the AI push to steer clear of the technology at work sends a mixed signal to civil servants, businesses, and the public.

If the people championing £500 million+ in public funding and G7-beating ambitions aren’t integrating the tools into their own workflows, why should departments, small firms, or workers embrace the change?

Productivity gains from AI are widely projected to be massive, but realising them depends on widespread, confident adoption, starting from the top.

FOI requests could be to blame

One plausible explanation for the caution is Freedom of Information (FOI) requests.

Previous disclosures have shown how vulnerable ministerial AI use can be to public scrutiny. In early 2025, a journalist used FOI to obtain the ChatGPT prompts and interactions of then-technology secretary Peter Kyle, sparking debate about transparency versus operational sensitivity.

Similar requests have targeted government records of interactions with tools like ChatGPT, raising fears that detailed prompts could inadvertently reveal policy deliberations, negotiation strategies, or commercially sensitive thinking.

Government guidance on generative AI already exists, but officials may simply calculate that the risk of exposure outweighs the productivity boost – especially when personal use carries no such obligations.

Kendall’s distinction between “personal” and “work” use aligns with this logic: experiment privately, but keep official business off the record.

Whether driven by FOI wariness, habit, or genuine preference, the result is the same: the UK’s most senior economic and technology leaders are not modelling the AI-first mindset they are asking everyone else to adopt.
