Politics

MPs are being flooded with ChatGPT-written letters from UK constituents

Ryan Brothwell

Key Points

  • A House of Commons Library briefing published on 5 May 2026 warns MPs' offices are receiving a rising volume of constituent letters drafted by ChatGPT and similar AI tools.
  • Author Chris Rhodes says these letters often contain inaccurate explanations of law, misused legal terminology, and unrealistic requests that mask the constituent's real issue.
  • Indicators of AI-drafted text include nested bullet lists, American spellings in UK correspondence, vague fluency and unsupported claims.
  • Parliament advises staff against pasting sensitive or personal information into AI tools because it is unclear how providers store or reuse the data.
  • The Library recommends caseworkers reply to the themes of an enquiry and restate concerns in plain English rather than answering each technical question.

Constituents are increasingly using ChatGPT and similar AI tools to draft their letters to MPs, a new House of Commons Library briefing warns.

The briefing, published on Tuesday (5 May) by Commons Library researcher Chris Rhodes, notes that AI-drafted enquiries often arrive as long lists of technical questions framed in legal or procedural language.

Rhodes said the messages usually reflect genuine constituent concerns, but they take longer to interpret and respond to than ordinary correspondence.

The Commons Library, which provides impartial research for MPs and their staff, considers the trend significant enough to warrant formal guidance for parliamentary offices and a place in its wider ‘good information’ toolkit.

AI-drafted casework can include inaccurate but convincing explanations of law or policy, misused legal terminology, and unrealistic requests that would be difficult to address fully.

Rhodes warns that this style of message can mask the underlying issue a constituent actually faces, making it harder for an MP’s office to identify what support it can usefully provide.

He added that AI tools can produce information that sounds plausible but is incorrect, incomplete or misleading, a failure mode the industry calls ‘hallucination’.

How Parliament suggests spotting AI-written text

The briefing lists several indicators that text may come from an AI tool.

Rhodes said that none of these signals is definitive on its own, and that carefully edited AI output can disguise them.

  • Nested lists: frequent bullet points, often with sub-bullets. Common in ChatGPT output, but also normal in genuine briefings.
  • American spelling: “color” or “emphasize” appearing in UK correspondence. A strong signal when the sender otherwise writes in British English.
  • Vague fluency: smooth prose with few concrete details or examples, suggesting the author has little personal context for the issue.
  • Unsupported claims: confident assertions made without references; they often sound authoritative but cannot be verified.
  • Loose citations: references only tangentially connected to the point being made, a known feature of AI hallucinations.
  • Factual errors: wrong dates, names or institutional roles. Worth checking against primary sources before relying on the text.

The Library also advises caseworkers against treating AI detection tools as conclusive, as their accuracy is often unreliable.

It instead recommends focusing on whether claims rely on evidence and whether sources are reputable and verifiable.

What this means for constituents

Rhodes recommends that caseworkers respond to the themes of an enquiry rather than to each technical question in turn, and that they ask clarifying questions or restate the constituent’s concern in plain English to confirm what the person really wants.

This approach should help MPs’ offices reach the underlying issue more quickly, but it also means a letter drafted entirely by an AI tool may take longer to resolve than a clearly written personal account.

The briefing also reminds parliamentary staff that AI tools should not handle sensitive, personal or confidential information, because it is unclear how providers store or reuse data submitted in prompts.

The same warning applies to constituents who paste personal details into ChatGPT to draft a letter, since that information could feed into future versions of the model.

Rhodes said that vague prompts tend to produce vague results, and that asking the AI to provide sources and checking each link remains the strongest safeguard against hallucinated content.
