DeepSeek-R1 AI Model Generates Insecure Code for Politically Sensitive Topics
Summary
Research by CrowdStrike reveals that DeepSeek's AI model, DeepSeek-R1, produces significantly more insecure code when prompted with topics the Chinese government deems politically sensitive, such as Tibet or the Uyghurs. In such cases, the likelihood of generating vulnerable code rises by nearly 50% (from a 19% baseline to 27.2% when Tibet is mentioned). The model, previously banned in several countries over national security concerns, also exhibits censorship behaviors around sensitive topics. CrowdStrike's analysis highlights that while the model is generally capable, its code quality deteriorates once geopolitical modifiers are introduced. The researchers also discovered an 'intrinsic kill switch' that prevents the model from generating code related to banned topics such as Falun Gong.
Timeline
- 24.11.2025 13:07 · 1 article
DeepSeek-R1 AI Model Generates Insecure Code for Politically Sensitive Topics
- Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs — thehackernews.com — 24.11.2025 13:07
Information Snippets
- DeepSeek-R1 generates insecure code nearly 50% more often when prompted with politically sensitive topics such as Tibet or the Uyghurs.
First reported: 24.11.2025 13:07 · 1 source, 1 article
- Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs — thehackernews.com — 24.11.2025 13:07
- The model refuses to generate code related to banned topics like Falun Gong, exhibiting what CrowdStrike describes as an 'intrinsic kill switch'.
First reported: 24.11.2025 13:07 · 1 source, 1 article
- Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs — thehackernews.com — 24.11.2025 13:07
- CrowdStrike found that DeepSeek-R1 produces vulnerable code in 19% of cases even without additional trigger words.
First reported: 24.11.2025 13:07 · 1 source, 1 article
- Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs — thehackernews.com — 24.11.2025 13:07
- The likelihood of generating vulnerable code rises to 27.2% when prompts mention Tibet, a nearly 50% relative increase over the 19% baseline.
First reported: 24.11.2025 13:07 · 1 source, 1 article
- Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs — thehackernews.com — 24.11.2025 13:07
- Code DeepSeek-R1 generated for a PayPal webhook handler in a Tibet-related prompt contained hard-coded secrets and insecure data-extraction methods.
First reported: 24.11.2025 13:07 · 1 source, 1 article
- Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs — thehackernews.com — 24.11.2025 13:07
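The report summarized above does not reproduce the generated code itself. As a hypothetical sketch of the two vulnerability classes it names (hard-coded secrets and unsafe handling of webhook data), the snippet below contrasts an insecure signature check of the kind described with a safer variant; all names and the secret value are illustrative, not taken from DeepSeek-R1's output:

```python
import hashlib
import hmac
import os

# Insecure pattern of the kind CrowdStrike describes: a webhook secret
# embedded directly in source code, readable by anyone with repo access.
HARDCODED_SECRET = "sk_live_example_do_not_do_this"  # hypothetical value

def verify_signature_insecure(payload: bytes, signature: str) -> bool:
    digest = hmac.new(HARDCODED_SECRET.encode(), payload, hashlib.sha256).hexdigest()
    return digest == signature  # plain == is also open to timing attacks

# Safer pattern: load the secret from the environment and compare in constant time.
def verify_signature_safer(payload: bytes, signature: str) -> bool:
    secret = os.environ["WEBHOOK_SECRET"]  # fails loudly if unset
    digest = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(digest, signature)
```

Moving the secret out of source code and using `hmac.compare_digest` addresses both the credential-exposure and timing-leak issues in one step.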
- Code generated for a Uyghur community app lacked session management and authentication, exposing user data.
First reported: 24.11.2025 13:07 · 1 source, 1 article
- Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs — thehackernews.com — 24.11.2025 13:07
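To make the "missing session management and authentication" finding concrete, here is a minimal hypothetical sketch of the controls the generated app reportedly lacked: unguessable session tokens issued at login and checked before any user data is served. Every function and name here is an illustrative assumption, not code from the report:

```python
import secrets

# In-memory session store mapping random tokens to usernames.
# A real app would use an expiring, server-side session backend.
_sessions: dict = {}

def log_in(username: str) -> str:
    # Issue a cryptographically random, unguessable session token.
    token = secrets.token_urlsafe(32)
    _sessions[token] = username
    return token

def get_profile(token: str) -> str:
    # Reject requests without a valid session instead of serving data to anyone.
    user = _sessions.get(token)
    if user is None:
        raise PermissionError("authentication required")
    return f"profile:{user}"
```

The key design point is that data access is gated on a token only the authenticated user holds; the reported vulnerable code skipped this gate entirely, so any request could read user data.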
- The same prompt for a football fan-club website did not exhibit the same severe vulnerabilities.
First reported: 24.11.2025 13:07 · 1 source, 1 article
- Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Tibet or Uyghurs — thehackernews.com — 24.11.2025 13:07