The National Cyber Security Centre (NCSC) – a part of GCHQ – has shared critical insights cautioning cyber security professionals against comparing prompt injection with more classical application vulnerabilities such as SQL injection.
A new blog advises that, contrary to first impressions, prompt injection attacks against generative artificial intelligence applications may never be totally mitigated in the way SQL injection attacks can be.
SQL injection mitigations hinge on enforcing a clear separation between data and instructions, as the sketch below illustrates; prompt injection, by contrast, exploits the inability of large language models (LLMs) to distinguish between the two.
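To see why the analogy is tempting, consider how SQL injection is typically defused. The minimal sketch below (in Python, using the standard sqlite3 module; the users table and the injection payload are purely illustrative and do not come from the NCSC blog) contrasts string concatenation with a parameterised query, where the SQL instructions and the user-supplied data travel in separate channels.

```python
import sqlite3

# Hypothetical in-memory database and table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: user data is concatenated into the instruction string,
# so the database cannot tell query logic from attacker-supplied text.
unsafe_query = f"SELECT name FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe_query).fetchall())  # returns every row

# Mitigated: a parameterised query keeps instructions (the SQL) and
# data (the bound value) separate, so the payload is treated as a
# literal string and matches nothing.
safe_query = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe_query, (user_input,)).fetchall())  # returns []
```

No equivalent separation exists for an LLM: system instructions and user-supplied text arrive in the same token stream, which is why the NCSC argues the two problems should not be treated as interchangeable.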
Without action to address this misconception, the NCSC warns, websites risk falling victim to data breaches exceeding those seen from SQL injection attacks in the 2010s, impacting UK businesses and citizens into the next decade.
Backing proactive adoption of cyber risk management standards, the NCSC challenges claims that prompt injections can be ‘stopped’.
Instead, it suggests efforts should turn to reducing the risk and impact of prompt injection and driving up resilience across AI supply chains.
As AI technologies become embedded in more UK business operations, the NCSC calls on AI system designers, builders and operators to take control of manageable variables, acknowledging that LLM systems are “inherently confusable” and that their risks must therefore be managed in different ways.
You can read the blog in full here.