NHS England Withdraws Open-Source Code Amid AI Security Fears, Drawing Criticism from Experts


NHS England has issued urgent directives to withdraw all publicly available software code from open platforms like GitHub, citing heightened cybersecurity risks posed by advanced artificial intelligence. This sudden shift reverses long-standing government policy that encouraged transparency and reuse of public-sector software. However, security experts argue that the move is a reaction to media hype rather than a genuine threat, labeling it counterproductive and logically flawed.

The Shift from Transparency to Secrecy

For years, NHS England followed standard UK government guidelines that required software developed with public funds to be open-source. This approach allowed other organizations to inspect, reuse, and improve upon the code, reducing duplication of effort and fostering innovation.

In a significant policy reversal, new internal guidance mandates that all source code repositories must be private by default. Staff have been instructed to make existing public repositories private by May 11. The guidance states that public access will only be granted if there is an “explicit and exceptional need” that has been formally approved.
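For teams with many repositories, a change like this is typically scripted rather than done by hand. The sketch below uses GitHub's documented "Update a repository" REST endpoint to flip a single repository to private; the owner and repository names are placeholders, and a real rollout would first enumerate an organisation's public repositories and handle authentication and rate limits.

```python
# Sketch: setting a GitHub repository to private via the REST API.
# Uses only the standard library; owner/repo/token values are placeholders.
import json
import urllib.request

API = "https://api.github.com"

def make_privacy_request(owner: str, repo: str, token: str) -> urllib.request.Request:
    """Build a PATCH request that sets the given repository to private."""
    body = json.dumps({"private": True}).encode()
    return urllib.request.Request(
        url=f"{API}/repos/{owner}/{repo}",
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

# Example (not executed here): apply the change for one repository.
# urllib.request.urlopen(make_privacy_request("example-org", "example-repo", token))
```

Note that making a repository private does not remove forks or clones already made while it was public, which is part of the criticism discussed below.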

The directive explicitly cites Mythos, an AI model developed by Anthropic, as the primary catalyst for this change. NHS England’s internal memo warns that public repositories increase the risk of exposing architectural details and configuration data that could be exploited by AI models capable of large-scale code analysis and inference.

The memo states: “This red line establishes a default-closed posture for code while the organisation assesses the impact of these changes and ensures that any public publication of code is a deliberate, reviewed, and justified decision.”

The Mythos Controversy: Hype vs. Reality

The urgency stems from recent reports that Mythos could identify vulnerabilities in virtually any software. However, independent analysis suggests the threat may be overstated for robust systems.

The UK government-backed AI Security Institute (AISI) investigated the capabilities of Mythos and concluded that it is primarily effective against “small, weakly defended and vulnerable enterprise systems.” The institute found no evidence that secure, well-maintained software networks would be at significant risk from this specific AI tool.

Despite these findings, NHS England has opted for a precautionary lockdown, temporarily restricting access to strengthen cybersecurity while it assesses the broader impact of rapid AI advancements.

Why This Matters: The Cost of Closing Doors

Critics argue that withdrawing code now is not only unnecessary but also detrimental to the principles of open government and security best practices.

1. Contradiction with Government Standards
The new measures directly contradict the NHS Service Standard, which emphasizes that public services built with public money should have their code available for reuse. This standard aims to save taxpayer money by preventing teams from rebuilding existing solutions.

2. Transparency and Trust
Open-source software fosters public trust and accountability. A stark example of the value of transparency is the Post Office Horizon IT scandal. Had the Horizon system’s code been public, the flaws that led to the wrongful prosecution of hundreds of subpostmasters might have been identified and addressed years earlier, preventing a major miscarriage of justice.

3. The “Many Eyes” Security Principle
Terence Eden, a veteran of the UK Civil Service specializing in open data, argues that open-source software is inherently more secure because it can be reviewed by a global community of developers. He suggests that the current panic is driven by fear rather than fact.

“Is it possible that Mythos will scan a repository and find a bug? Yes, 100 per cent likely. Is that going to be a bug that causes a security issue in a live NHS service somewhere? Almost certainly not,” Eden says. “I think it’s someone in NHS England buying into the hype… and getting a bit panicked.”

The Futility of “Bolted Doors”

Security experts point out a critical logical flaw in the NHS’s strategy: the code is already out there. Since the software has been publicly available for years, copies exist in countless backups, downloads, and archives across the internet. Removing it from GitHub does not erase it from existence.

Eden describes the move as “bolting the stable door after the horse has gone.” He notes that most NHS software is not critically sensitive, and the effort to privatize it yields negligible security benefits while incurring significant administrative costs and damaging collaborative relationships with the tech community.

Conclusion

NHS England’s decision to privatize its software code appears to be a reactive measure driven by fear of emerging AI capabilities rather than a calculated security strategy. While the intention to protect patient data is clear, experts warn that this approach undermines transparency, contradicts government efficiency standards, and offers little practical protection against determined attackers. As the organization continues to assess the AI landscape, the debate over balancing security with open governance remains unresolved.