Tenable, a leading risk-based vulnerability management specialist, has released advice to businesses and their employees on how to avoid data exposures when using DeepSeek, in response to the recent public exposure of private data linked to use of the GenAI tool. The incident underscores the pressing tension between rapid innovation in GenAI and the increasing risk of data breaches to organisations.
The release of DeepSeek-V3 and the more powerful DeepSeek-R1 as open-source large language models makes them accessible to anyone around the world. The challenge, however, is that unlike closed-source models, which operate with guardrails, locally run large language models are more susceptible to abuse, Tenable says.
Satnam Narang, Senior Staff Research Engineer at Tenable, comments: “DeepSeek has taken the entire tech industry by storm for a few key reasons: first, they have produced an open-source large language model that reportedly beats or is on par with closed-source models like OpenAI’s o1. Second, they appear to have achieved this using less intensive computing power due to limitations on the procurement of more powerful hardware through export controls.
“We don’t know yet how quickly DeepSeek’s models will be leveraged by cybercriminals. Still, if the past is prologue, we’ll likely see a rush to leverage these models for nefarious purposes.”
To help organisations protect themselves, Tenable outlines four essential steps to mitigate AI-related risks:
1. Establish a cautious approach when using DeepSeek
Organisations must proceed with caution when using DeepSeek in their environments or building AI applications based on it. This involves thoroughly assessing potential risks such as data leakage and taking the necessary precautions to prevent misuse.
2. Monitor for anomalies and unauthorised AI usage
Monitoring solutions can rapidly detect unauthorised use of DeepSeek’s website, enabling organisations to quickly close AI-related exposures and strengthen their overall security posture; a minimal detection sketch follows these steps.
3. Adopt a formal AI governance framework
Establish an AI governance board or council to create clear policies and procedures that guide the usage, development, deployment, and monitoring of AI. Utilise appropriate tools to support these objectives, including capabilities to identify AI services and applications, monitor unauthorised AI usage, conduct code reviews, audit models, and ensure compliance with ethical and regulatory standards. OWASP (for example, its Top 10 for LLM Applications) and NIST (through its AI Risk Management Framework) have developed guidelines that provide essential insights and best practices for managing AI-specific vulnerabilities during the development of AI and LLMs.
4. Educate employees on AI usage risks and company policies
Educate employees on AI usage guidelines, potential threats, and the organisation’s specific policies. This includes how to responsibly handle AI-generated output, recognise social engineering tactics, and securely manage data associated with AI models. Ensuring employees are well-informed empowers them to identify, prevent, and report any misuse of AI tools or vulnerabilities.
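As a rough illustration of the kind of monitoring described in step 2, the Python sketch below scans a web proxy log for DeepSeek-related domains and reports which internal hosts contacted them. The log format, the domain list, and the file path are assumptions made for illustration only; real deployments would rely on their existing proxy, DNS, or exposure-management tooling rather than a standalone script.

```python
# Minimal sketch (assumptions: a plain-text proxy log with one
# "timestamp client_ip method url" entry per line; the domain list
# and log path are illustrative, not an official blocklist).
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical list of domains associated with DeepSeek's hosted services.
DEEPSEEK_DOMAINS = {"deepseek.com", "chat.deepseek.com", "api.deepseek.com"}

def find_deepseek_usage(log_path: str) -> dict[str, int]:
    """Return a count of DeepSeek-related requests per client IP."""
    hits: dict[str, int] = defaultdict(int)
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 4:
                continue  # skip malformed entries
            client_ip, url = parts[1], parts[3]
            host = urlparse(url).hostname or ""
            # Flag exact matches and subdomains of the watched domains.
            if any(host == d or host.endswith("." + d) for d in DEEPSEEK_DOMAINS):
                hits[client_ip] += 1
    return hits

if __name__ == "__main__":
    for ip, count in sorted(find_deepseek_usage("proxy.log").items()):
        print(f"{ip}: {count} DeepSeek-related request(s)")
```

The output gives security teams a starting point for follow-up, such as confirming whether the usage was sanctioned and whether sensitive data was submitted to the service.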
AI Security Threats Only Going in One Direction
Tenable data highlights that in 2023, misconfigurations and unsecured databases were responsible for just 5.8% of breaches, yet these factors contributed to 74% of the 7.6 billion exposed records, roughly 5.6 billion records in total. These findings emphasise the devastating impact of misconfigurations on sensitive and personally identifiable information.
Large language models built with cybercrime in mind typically improve the text output used by scammers and cybercriminals seeking to steal from users through phishing schemes and financial fraud, or help them create malicious software.
There is also growing concern that these models could be used to develop entirely new, novel malware and to discover zero-day vulnerabilities. Cybercrime-themed tools such as WormGPT, WolfGPT, FraudGPT, EvilGPT and the newly discovered GhostGPT have been sold through cybercriminal forums.
“While it’s still early to say, I wouldn’t be surprised to see an influx in the development of DeepSeek wrappers, which are tools that build on DeepSeek with cybercrime as the primary function, or see cybercriminals utilise these existing models on their own to further expand their tools to best fit their needs,” concluded Narang.