http://ipkitten.blogspot.com/2025/02/guest-post-in-too-deep-considering.html

Even by the standards of the dynamic world of AI development and innovation, the recent shockwaves caused by the surge in popularity of Chinese generative AI platform DeepSeek’s R1 model in January 2025 were unprecedented. The R1 model has emerged as a cost-efficient competitor to established offerings such as OpenAI’s ChatGPT, and models based on R1 are now available on multiple mainstream platforms, including AWS, Nvidia and GitHub. However, data security concerns have seen certain government agencies ban its use, with European data protection authorities also taking steps to scrutinise DeepSeek and/or block access.

Richard Stebbing, Senior Associate at Withers LLP in its IPT team, has considered some of the potential legal risks for businesses and individual users when using DeepSeek.

Over to Richard:

“Data privacy concerns when using DeepSeek

DeepSeek is a Chinese AI software company. Direct use of its R1 model – whether via the DeepSeek mobile app, its web chat interface, or (likely) an integration of the model directly via a DeepSeek API – means that any information shared with DeepSeek (such as conversations, prompts, media inputs, or account information) will be stored on servers located in China. DeepSeek confirms this in its privacy policy. According to the latest version of its English-language privacy policy (at the time of writing, dated 5 December 2024): “We store the information we collect in secure servers located in the People’s Republic of China”.
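To make the data flow concrete, the sketch below shows what a direct API integration might look like. This is illustrative only: DeepSeek’s API is reported to be OpenAI-compatible, and the base URL and model name used here are assumptions rather than confirmed details. The point is that every prompt sent this way is transmitted to DeepSeek’s own servers.

```python
# Hypothetical sketch of a direct DeepSeek API integration.
# Assumes the `openai` client library and DeepSeek's reported
# OpenAI-compatible endpoint; the base URL and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # key issued by DeepSeek
    base_url="https://api.deepseek.com",  # requests go to DeepSeek's servers
)

# The full prompt text is sent to DeepSeek and, per its privacy policy,
# stored on servers located in the People's Republic of China.
response = client.chat.completions.create(
    model="deepseek-reasoner",  # reported model name for R1
    messages=[{"role": "user", "content": "Summarise our draft contract terms."}],
)
print(response.choices[0].message.content)
```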

The privacy policy also states that DeepSeek retains personal data for “as long as necessary to provide our Services and for the other purposes set out in this Privacy Policy”, and that the purposes for which DeepSeek can use any data are wide-ranging, including the “training and improving” of its technologies, which may include developing new AI models. Potentially, this could mean data is retained in perpetuity, beyond any actual use of the services by the relevant user. As a Chinese corporate entity, DeepSeek also has legal obligations under relevant Chinese cybersecurity laws, which could require it to grant state authorities access to data it holds.

Whilst it is possible to delete DeepSeek chat histories, thereby limiting the extent of data being retained, privacy experts have cautioned users to limit the amount of business-sensitive or personal information shared with DeepSeek, given these far-reaching data retention and data use parameters.

In this context, several European data protection authorities have already commenced investigations into DeepSeek’s use of the personal data of EU data subjects. The Italian data protection authority, the Garante, known as one of the more proactive data protection authorities, has gone further, blocking the DeepSeek app in Italy. This action was taken due to concerns that areas of DeepSeek’s privacy policy do not demonstrate adequate compliance with the GDPR, such as the length of data retention, the enforceability of data subject rights, and insufficient information regarding the legal basis for international transfers of EU data subjects’ personal data to servers located in China. It is highly likely that DeepSeek will need to comply with the results of any investigations by European data privacy authorities in order for its services to remain available in the EU market; however, it remains to be seen what those authorities will require, and the extent to which DeepSeek will effectively be required to overhaul its data privacy practices.

There are, however, potential mitigations to these data privacy concerns: either the R1 model is used in a way that processes data only on-device, or, because the R1 model is open source, a third party deploys a version of the model hosted on suitably located servers, such as in the EU (meaning the model cannot send information beyond the geographical extent of those servers), rather than the model being accessed directly via DeepSeek’s own APIs, which connect to servers located in China. This mitigation is reportedly already being used by European startups, and is offered by certain AI platform service providers, who are effectively offering “indirect” use of the R1 model to their customers; a minimal sketch of the self-hosted approach follows below.
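For illustration, here is a minimal sketch of that self-hosted approach, assuming the openly published distilled R1 checkpoint on Hugging Face (the model ID below is an assumption) and the `transformers` library. Because the weights are downloaded once and all inference runs locally, the data stays wherever this code runs – for example, on EU-hosted infrastructure – and nothing is sent to DeepSeek’s API.

```python
# Hypothetical sketch: running an open-source DeepSeek-R1 distilled model
# locally, so prompts never leave the machine (or EU-hosted server) running it.
# Assumes the `transformers` and `torch` packages and the model ID below.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# The prompt is processed entirely by the locally loaded weights;
# no call is made to DeepSeek's own API or to China-based servers.
prompt = "Summarise the GDPR rules on international data transfers."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```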

EU AI Act compliance and cybersecurity risks

Deployment of DeepSeek within businesses in Europe should also be considered in the context of cybersecurity and compliance with the EU AI Act, which is coming into force in stages over the next 18 months. Leading AI governance analysts LatticeFlow recently published an evaluation of the R1 model which flagged several cybersecurity concerns that are also areas of potential non-compliance with the EU AI Act. These include goal hacking and prompt leakage – that is, where the model can be manipulated or prompted into divulging sensitive information – as well as bias in the results the model provides. Meanwhile, Cisco has raised its own concerns regarding the model’s susceptibility to algorithmic jailbreaking and potential misuse.

Practically, these vulnerabilities create an operational concern for businesses deploying AI solutions, particularly for businesses, or parts of businesses, that process sensitive data, such as HR or financial data.

There are also EU AI Act compliance concerns for businesses developing solutions based on the R1 model. Such businesses would likely be classified as “deployers” of those solutions under the EU AI Act and will face compliance requirements which may be incompatible with use of the R1 model – particularly if the solution in development is classified as a “high-risk AI system”, in which case the deployer would face a number of onerous compliance requirements.

It will also be interesting to see whether AI regulators seek additional compliance measures from DeepSeek itself, particularly given that specific rules under the EU AI Act regarding the provision of General-Purpose AI Models are due to come into force from 2 August 2025. These will introduce additional requirements on providers of foundation models to mitigate risks relating to issues such as bias and jailbreaking.

Final thoughts

The AI landscape is incredibly fast-moving, and this can mean there is a lag between the availability of new AI technology solutions and the application of existing regulatory frameworks and/or the introduction of new legislation governing their use. In this light, businesses and users should remain pragmatic when exploring new AI technology solutions, both to avoid potential regulatory or legal compliance barriers (now and in the future) and to avoid taking unnecessary risks with sensitive information that outweigh the benefits of using these new solutions.

Please note this article (and any information accessed through links in this article) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this article.”

Content reproduced from The IPKat as permitted under the Creative Commons Licence (UK).