
Taiwan Bans DeepSeek AI Over National Security Concerns, Citing Data Leakage Risks


Taiwan has become the latest country to ban government agencies from using Chinese startup DeepSeek’s Artificial Intelligence (AI) platform, citing security risks.

“Government agencies and critical infrastructure should not use DeepSeek, because it endangers national information security,” according to a statement released by Taiwan’s Ministry of Digital Affairs, per Radio Free Asia.

“DeepSeek AI service is a Chinese product. Its operation involves cross-border transmission, and information leakage and other information security concerns.”

DeepSeek’s Chinese origins have prompted authorities in various countries to scrutinize the service’s use of personal data. Last week, Italy blocked the service, citing a lack of information about its data handling practices. Several companies have also prohibited access to the chatbot over similar risks.

The chatbot has captured much of the mainstream attention over the past few weeks because it is open source and roughly as capable as other current leading models, despite being built at a fraction of the cost of its peers.

But the large language models (LLMs) powering the platform have also been found to be susceptible to various jailbreak techniques, a persistent concern in such products, not to mention drawing attention for censoring responses to topics deemed sensitive by the Chinese government.

The popularity of DeepSeek has also led to it being targeted by “large-scale malicious attacks,” with NSFOCUS revealing that it detected three waves of distributed denial-of-service (DDoS) attacks aimed at its API interface between January 25 and 27, 2025.

“The average attack duration was 35 minutes,” it said. “Attack methods mainly include NTP reflection attack and memcached reflection attack.”

It further said the DeepSeek chatbot system was targeted by DDoS attacks twice, on January 20, the day it launched its reasoning model DeepSeek-R1, and on January 25, with the attacks averaging around one hour and relying on methods like NTP reflection and SSDP reflection.

The sustained activity primarily originated from the United States, the United Kingdom, and Australia, the threat intelligence firm added, describing it as a “well-planned and organized attack.”
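Reflection attacks of the kind NSFOCUS describes abuse UDP services that answer small queries with much larger responses, so the flood arrives from the well-known ports of those services. The sketch below is only a rough illustration of that idea, not NSFOCUS’s detection logic; the flow-record format and the port list are assumptions.

```python
# Minimal sketch: flag flow records whose UDP source port matches services
# commonly abused for reflection/amplification (NTP, memcached, SSDP).
# The record format and the heuristic itself are illustrative assumptions.

AMPLIFICATION_PORTS = {
    123: "NTP",          # NTP reflection
    11211: "memcached",  # memcached reflection
    1900: "SSDP",        # SSDP reflection
}

def flag_reflection_candidates(flows):
    """flows: iterable of dicts like {"src_ip": str, "src_port": int, "proto": str}."""
    for flow in flows:
        if flow["proto"] == "udp" and flow["src_port"] in AMPLIFICATION_PORTS:
            yield flow["src_ip"], AMPLIFICATION_PORTS[flow["src_port"]]

if __name__ == "__main__":
    sample = [
        {"src_ip": "203.0.113.7", "src_port": 123, "proto": "udp"},
        {"src_ip": "198.51.100.2", "src_port": 443, "proto": "tcp"},
    ]
    for ip, service in flag_reflection_candidates(sample):
        print(f"possible {service} reflection traffic from {ip}")
```

In practice such filtering would be done at the network edge or by a DDoS mitigation provider rather than in application code, but the principle is the same: amplification traffic is recognizable by the service port it reflects off.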

Malicious actors have also capitalized on the buzz surrounding DeepSeek to publish bogus packages on the Python Package Index (PyPI) repository that are designed to steal sensitive information from developer systems. In an ironic twist, there are indications that the Python script was written with the help of an AI assistant.

The packages, named deepseeek and deepseekai, masqueraded as Python API clients for DeepSeek and were downloaded at least 222 times before being taken down on January 29, 2025. A majority of the downloads came from the U.S., China, Russia, Hong Kong, and Germany.

“Functions used in these packages are designed to collect user and computer data and steal environment variables,” Russian cybersecurity company Positive Technologies said. “The author of the two packages used Pipedream, an integration platform for developers, as the command-and-control server that receives stolen data.”
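Because the malicious packages relied on names one keystroke away from the real thing, a simple defensive measure is to compare declared dependencies against the package names a project actually intends to use. The sketch below is a hedged illustration of that check; the name lists and similarity threshold are assumptions, not Positive Technologies’ tooling.

```python
# Minimal sketch: warn about dependency names that closely resemble a legitimate
# package, as with the "deepseeek" and "deepseekai" lookalikes described above.
# The LEGITIMATE/KNOWN_BAD lists and the 0.8 threshold are illustrative assumptions.
import difflib

LEGITIMATE = {"deepseek"}                 # names developers actually intend to install
KNOWN_BAD = {"deepseeek", "deepseekai"}   # lookalikes reported by Positive Technologies

def check_requirements(names, threshold=0.8):
    warnings = []
    for name in names:
        lowered = name.lower()
        if lowered in KNOWN_BAD:
            warnings.append(f"{name}: known malicious lookalike")
            continue
        for real in LEGITIMATE:
            ratio = difflib.SequenceMatcher(None, lowered, real).ratio()
            if lowered != real and ratio >= threshold:
                warnings.append(f"{name}: suspiciously similar to '{real}' ({ratio:.2f})")
    return warnings

if __name__ == "__main__":
    for warning in check_requirements(["requests", "deepseeek", "deepseekai"]):
        print(warning)
```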

The development comes as the Artificial Intelligence Act went into effect in the European Union starting February 2, 2025, banning AI applications and systems that pose an unacceptable risk and subjecting high-risk applications to specific legal requirements.

In a related move, the U.K. government has announced a new AI Code of Practice that aims to secure AI systems against hacking and sabotage, addressing risks such as data poisoning, model obfuscation, and indirect prompt injection, as well as ensuring they are developed in a secure manner.

Meta, for its part, has outlined its Frontier AI Framework, noting that it will stop the development of AI models that are assessed to have reached a critical risk threshold and cannot be mitigated. Some of the cybersecurity-related scenarios highlighted include the following:

  • Automated end-to-end compromise of a best-practice-protected corporate-scale environment (e.g., fully patched, MFA-protected)
  • Automated discovery and reliable exploitation of critical zero-day vulnerabilities in currently popular, security-best-practices software before defenders can find and patch them
  • Automated end-to-end scam flows (e.g., romance baiting aka pig butchering) that could result in widespread economic damage to individuals or corporations

The risk that AI systems could be weaponized for malicious ends is not theoretical. Last week, Google’s Threat Intelligence Group (GTIG) disclosed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia have attempted to use Gemini to enable and scale their operations.

Threat actors have also been observed attempting to jailbreak AI models in an effort to bypass their safety and ethical controls. A kind of adversarial attack, it’s designed to induce a model to produce output that it has been explicitly trained not to, such as creating malware or spelling out instructions for making a bomb.

The ongoing concerns posed by jailbreak attacks have led AI company Anthropic to devise a new line of defense called Constitutional Classifiers that it says can safeguard models against universal jailbreaks.

“These Constitutional Classifiers are input and output classifiers trained on synthetically generated data that filter the overwhelming majority of jailbreaks with minimal over-refusals and without incurring a large compute overhead,” the company said Monday.
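The basic shape of that defense, one classifier screening the prompt and another screening the completion, can be sketched as below. The heuristics and function names are hypothetical stand-ins for illustration only; Anthropic’s actual Constitutional Classifiers are trained models, not keyword checks.

```python
# Minimal sketch of the input/output classifier pattern described above.
# input_classifier, output_classifier, and generate are placeholder stand-ins,
# not Anthropic's Constitutional Classifiers or API.

def input_classifier(prompt: str) -> bool:
    """Return True if the prompt looks like a jailbreak attempt (placeholder heuristic)."""
    return "ignore previous instructions" in prompt.lower()

def output_classifier(completion: str) -> bool:
    """Return True if the completion appears to contain disallowed content (placeholder heuristic)."""
    return "disallowed" in completion.lower()

def generate(prompt: str) -> str:
    """Stand-in for a call to the underlying language model."""
    return f"model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Screen the input before the model runs, and the output before it is returned.
    if input_classifier(prompt):
        return "Request refused by input classifier."
    completion = generate(prompt)
    if output_classifier(completion):
        return "Response withheld by output classifier."
    return completion

if __name__ == "__main__":
    print(guarded_generate("Summarize today's AI security news."))
```

The point of wrapping both sides of the model call is that a jailbreak which slips past the input check can still be caught before the harmful completion reaches the user.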
