1. Legal Characterisation of AI
In China, AI is governed through a layered, risk-based regulatory framework rather than as a single consolidated legal category. This regime is anchored in generally applicable foundation laws — the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law (PIPL) — which impose baseline obligations on data handling, system security, and cross-border transfers.
Building on this foundation, China has developed a suite of targeted administrative rules. The Generative AI Measures establish the primary framework for content safety and security assessments, supported by the Deep Synthesis Provisions, which govern synthetic media that could be mistaken for authentic content, and the AI Content Labeling Measures, which introduce differentiated explicit and implicit labeling obligations.
Regulatory attention has shifted toward advanced interactive risks. The draft Interim Measures for Anthropomorphic AI Interactive Services (December 2025) target AI systems that simulate human emotions, personalities, or social interactions, aiming to mitigate user manipulation and emotional dependency and to protect cognitive autonomy.
Beyond these, sector-specific regulators in education, healthcare, and automotive continue to publish specialised guidelines for sensitive domains. A unified Artificial Intelligence Law remains under legislative discussion but its introduction timeline is uncertain.
Regarding liability attribution, China has no specialised statutory regime. Liability is determined under general tort law and the Civil Code. Courts and regulators increasingly evaluate whether providers have fulfilled their duty of care and implemented reasonable technical safeguards, rather than relying on boilerplate disclaimers. Under Article 69 of the PIPL, a fault-presumption standard applies if AI-related harm involves personal information infringement.
2. Relationship to Higher-Level Law
Higher-level statutes — the PIPL, Cybersecurity Law, and Data Security Law — stipulate overarching obligations extending beyond AI-specific rules. These mandate strict protection of personal data, cybersecurity, and data security throughout the AI lifecycle. Traditional frameworks governing intellectual property and fair competition under the Civil Code, Copyright Law, and Anti-Unfair Competition Law apply equally to AI operations.
AI-specific regulations operationalise the principles of these laws by setting concrete rules for the industry. Requirements for algorithm transparency and content labeling protect users' right to know, while the duty to detect and block illegal content extends China's long-standing content-safety regime to AI-generated material.
Under the Generative AI Measures and the PIPL, any generative AI service offered to the public within China must comply with Chinese law. If an AI system is accessible and usable in China without restrictions (such as IP blocks or phone-number verification), it may be deemed to target the Chinese market and become subject to mandatory regulatory requirements.
Due to this "targeting" standard, cross-border AI deployments must adhere to Chinese regulations whenever their service scope involves China — even if training data, computing resources, and model infrastructure are located across multiple foreign jurisdictions.
4. Legal Bases Relied on in Practice
The Cyberspace Administration of China (CAC) serves as the primary regulator for both personal information protection and AI supervision. Local CAC branches have pursued several AI-related enforcement actions grounded in AI-specific legislation. The November 2024 "Qinglang" campaign targeted: (i) generative AI products providing public services without the required filing or registration; (ii) dissemination or sale of tutorials and tools for unauthorised GenAI development; (iii) inadequate management of training data; and (iv) propagation of AI-generated illegal content such as rumours.
AI-specific legislation does not itself prescribe penalties for particular illegal activities. Instead, these regulations cross-refer to other relevant laws such as the Cybersecurity Law, Data Security Law, and PIPL, with criminal liability pursued where a violation constitutes a crime.
Chinese courts have adjudicated landmark AI cases focusing on intellectual property rights (copyrightability of AI outputs, protection of personality/voice rights), personal information infringements, and "AI-assisted cheating" involving computer system interference. Unlike administrative enforcement, civil rulings typically rest on general statutes rather than AI-specific rules.
5. Liability Allocation Across the AI Chain
There are no laws specifically addressing liability allocation for harm caused by AI systems. Responsibility is determined under general civil tort law and the specific circumstances of each case. The party responsible for damage is identified by reference to the fault that caused the harm — attributed to the developer, deployer, or user. If multiple parties contribute, each bears liability in proportion to its share of fault; where individual responsibility cannot be determined, joint and several liability may be imposed.
The Civil Code permits parties to limit or exclude liability by contract, but clauses excluding liability for personal injury, or for property loss caused by wilful misconduct or gross negligence, are invalid. In AI services, the service terms will likely constitute standard terms — if a provider fails to draw the user's attention to a liability-limitation clause, the clause may be deemed not incorporated into the contract.
There is ongoing debate regarding defective AI systems. One view holds product liability under the Product Quality Law should apply on a no-fault basis. Others argue this may lead to unlimited expansion of liability and advocate a fault-based regime with strict liability only in limited circumstances expressly provided by law.
6. Regulatory Guidance and Soft Law
AI-related requirements are further clarified through national standards and technical guidelines. The mandatory standard GB 45438-2025 specifies detailed methods for labeling AI-generated content, including explicit labels and implicit labels embedded in metadata. The TC260-003 guidelines serve as an important reference in the LLM launch-filing security assessment process. The recommended standard GB/T 45654-2025 refines these requirements with additional provisions for on-device large models and refusal mechanisms.
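To illustrate the explicit/implicit distinction, the following is a minimal sketch of an implicit label carried in a file's metadata block. GB 45438-2025 prescribes the actual required fields and encoding; the key names (`AIGC`, `Label`, `ContentProducer`, `ProduceID`) and the merge logic below are hypothetical and for illustration only.

```python
import json

def build_implicit_label(producer: str, content_id: str) -> dict:
    """Assemble a hypothetical implicit-label payload for AI-generated content.

    The field names here are illustrative; GB 45438-2025 defines the
    authoritative metadata fields that providers must actually use.
    """
    return {
        "AIGC": {
            "Label": "1",                  # "1" marks the content as AI-generated
            "ContentProducer": producer,   # service provider identifier
            "ProduceID": content_id,       # per-item content identifier
        }
    }

def embed_in_metadata(metadata: dict, label: dict) -> dict:
    """Merge the implicit label into a file's existing metadata block."""
    merged = dict(metadata)  # copy so the original metadata is untouched
    merged.update(label)
    return merged

meta = embed_in_metadata(
    {"Title": "demo.png"},
    build_implicit_label("ExampleAI Co.", "abc-123"),
)
print(json.dumps(meta, ensure_ascii=False))
```

Unlike an explicit label (a visible watermark or caption shown to the end user), an implicit label of this kind travels with the file and can be read programmatically by downstream platforms to detect and re-label synthetic content.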
7. Data, Inference, and Automated Decision-Making
Under Article 7 of the Generative AI Measures, providers must: (i) use data from lawful sources; (ii) refrain from infringing intellectual property; (iii) obtain valid consent for personal information; (iv) improve the quality of training data; and (v) comply with other applicable laws and administrative regulations. TC260-003 further elaborates on source security, content safety, and annotation security.
Under PRC law, automated decision-making (ADM) is regulated from perspectives of personal information protection and algorithm governance. PIPL Article 73 defines ADM as activities carried out through computer programs that automatically analyse and evaluate an individual's behavioural patterns, interests, or economic status. Where ADM involves personal information, the PIPL requires: (i) transparent and fair ADM; (ii) non-personalised options for information push and commercial marketing; and (iii) the right to request explanation and refuse decisions made solely by ADM where they have material impact.
8. AI in Employment
Under the Algorithm Recommendation Regulations, where algorithms provide work allocation or scheduling, providers must safeguard workers' rights including remuneration and rest, and establish algorithmic mechanisms governing order allocation, remuneration structure, working hours, and rewards. AI in hiring and performance management must avoid discriminatory outcomes — judicial practice makes clear that factors like region or gender unrelated to job requirements cannot serve as a basis for employment decisions. Employers using AI for material decisions must comply with PIPL ADM requirements including the right to explanation, challenge, and human intervention.
9. Points of Legal Friction
Ownership of AI-Generated Content: The current Copyright Law is premised on a human authorship paradigm. Whether AI-generated content qualifies as a "work" and whether rights should be attributed to developer, user, or another party remains unsettled. Divergent judicial approaches have emerged across cases.
Liability for Defective AI Systems: Traditional tort law requires a clear line of causation between conduct and harm. The "black box" nature of AI makes it difficult for victims to establish causation. No consensus exists on whether to apply fault-based or strict product liability.
10. Legislative Developments
According to the 2025 Legislative Work Plan, enactment of a unified Artificial Intelligence Law is unlikely in the near term. Key pending regulations include:
AI Ethics Measures (August 2025): Ethics review requirements for AI scientific and technological activities posing ethical risks in life and health, human dignity, the ecological environment, and public order.
Anthropomorphic Interaction AI Measures (December 2025): Draft rules targeting AI services that simulate human personality traits, thinking patterns, and communication styles to engage in emotional interaction. The draft requires user notification and intervention mechanisms to mitigate risks of confusion, emotional dependence, or physical harm.