DeepSeek took off as an AI superstar a year ago - but could it also be a
major security risk? These experts think so
Date:
Tue, 25 Nov 2025 20:28:00 +0000
Description:
DeepSeek-R1's code output becomes insecure when political topics are included, revealing hidden censorship and serious risks for enterprise deployments.
FULL STORY
When it was released in January 2025, DeepSeek-R1, a Chinese large language model
(LLM), caused a frenzy, and it has since been widely adopted as a coding assistant.
However, independent tests by CrowdStrike claim the model's output can vary significantly depending on seemingly irrelevant contextual modifiers.
The team tested 50 coding tasks across multiple security categories against 121 trigger-word configurations, running each prompt five times for a total of 30,250 tests. The responses were evaluated using a vulnerability score from 1 (secure) to 5 (critically vulnerable).
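As a rough illustration of the scale of that test matrix, the sketch below enumerates 50 tasks against 121 trigger configurations, five runs each; the model call and the 1-to-5 scorer are stand-ins, not CrowdStrike's actual harness.

    from statistics import mean
    import random

    TASKS = [f"task_{i}" for i in range(50)]          # 50 coding tasks
    TRIGGERS = [f"trigger_{j}" for j in range(121)]   # 121 trigger-word configurations
    RUNS_PER_PROMPT = 5

    def generate_code(task, trigger):
        # Stand-in for a call to the model under test.
        return f"// code for {task} with modifier {trigger}"

    def score_vulnerability(code):
        # Stand-in rater: 1 (secure) .. 5 (critically vulnerable).
        return random.randint(1, 5)

    mean_scores = {}
    for task in TASKS:
        for trigger in TRIGGERS:
            runs = [score_vulnerability(generate_code(task, trigger))
                    for _ in range(RUNS_PER_PROMPT)]
            mean_scores[(task, trigger)] = mean(runs)

    total = len(TASKS) * len(TRIGGERS) * RUNS_PER_PROMPT
    print(f"prompts issued: {total}")   # 50 * 121 * 5 = 30,250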
Politically sensitive topics corrupt output
The report reveals that when political or sensitive terms such as Falun Gong, Uyghurs, or Tibet were included in prompts, DeepSeek-R1 produced code with serious security vulnerabilities. These included hard-coded secrets, insecure handling of user input, and in some cases, completely invalid code.
The researchers claim these politically sensitive triggers can increase the likelihood of insecure output by 50% compared to baseline prompts without
such words.
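CrowdStrike has not published the generated code itself, but the flaw categories it names look roughly like the constructed example below (not actual model output): a credential left in source and a string-built SQL query, followed by the safer parameterized form.

    import sqlite3

    API_KEY = "sk-live-example"   # hard-coded secret left in source control

    def find_user(conn, username):
        # Insecure handling of user input: string-built SQL is open to injection.
        return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()

    def find_user_safe(conn, username):
        # Parameterized query keeps the input as data rather than executable SQL.
        return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()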
In experiments involving more complex prompts, DeepSeek-R1 produced
functional applications with signup forms, databases, and admin panels. However, these applications lacked basic session management and
authentication, leaving sensitive user data exposed. Across repeated trials, up to 35% of implementations included weak or absent password hashing.
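For context on what weak or absent password hashing means in practice, the constructed snippet below contrasts an unsalted fast hash with a salted, deliberately slow standard-library alternative; it is an illustration, not DeepSeek-R1's output.

    import hashlib, hmac, os

    def store_password_weak(password):
        # Weak: unsalted MD5 is fast to brute-force and reversible via lookup tables.
        return hashlib.md5(password.encode()).hexdigest()

    def store_password(password):
        # Safer default: salted PBKDF2-HMAC-SHA256 with a high iteration count.
        salt = os.urandom(16)
        return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)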
Simpler prompts, such as requests for football fan club websites, produced fewer severe issues.
CrowdStrike therefore claims that politically sensitive triggers disproportionately impacted code security. The model also demonstrated an intrinsic kill switch: in nearly half of the cases, DeepSeek-R1 refused to generate code for certain politically sensitive prompts after initially planning a response. Examination of the reasoning traces showed the model internally produced a technical plan but ultimately declined to assist.
The researchers believe this reflects censorship built into the model to
comply with Chinese regulations, and noted that the model's political and ethical alignment can directly affect the reliability of the generated code.
For politically sensitive topics, LLMs generally tend to echo the framing of mainstream media, which can stand in stark contrast to other reliable
news outlets.
DeepSeek-R1 remains a capable coding model, but these experiments show that
AI tools, including ChatGPT and others, can introduce hidden risks in enterprise environments. Organizations relying on LLM-generated code should perform thorough internal testing before deployment. Also, security layers such as a firewall and antivirus remain essential, as the model may produce unpredictable or vulnerable outputs.
Biases baked into the model weights create a novel supply-chain risk that
could affect code quality and overall system security.
======================================================================
Link to news story:
https://www.techradar.com/pro/deepseek-took-off-as-an-ai-superstar-a-year-ago-but-could-it-also-be-a-major-security-risk-these-experts-think-so
$$
--- SBBSecho 3.28-Linux
* Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)