
Criminal compliance for AI-generated content

Author: Ekin Zeng & Weiming Wu 2024-08-14

AI-generated content represents a new mode of content production, in which generative AI creates content automatically. As generative AI becomes more widely adopted, compliance issues surrounding the content it generates are coming under increasing scrutiny. This article analyses the potential criminal risks associated with AI-generated content, focusing on who may be held liable and how liability is defined in various scenarios.


Causes of inappropriate content


AI-generated content is produced through a pipeline of data collection and preprocessing, model building and training, and content generation and output. Inappropriate content may appear under the following circumstances.


Undesirable data. If training data contains malicious information or labelling errors, the model may learn from and reproduce similar content;


Deficient model architecture. Lack of mechanisms for detecting and filtering harmful information may lead to the generation of inappropriate content;


Improper input. If a user inputs malicious content, generative AI may incorporate that content into its responses; and


Lack of filtering and monitoring. Without content filtering and other post-processing steps, harmful information may be output directly.
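To illustrate the last two points, below is a minimal sketch of input and output checks wrapped around a model call. The blocklist patterns, the is_harmful check and the generate callable are hypothetical placeholders for illustration, not any provider's actual implementation; real systems typically combine keyword rules with trained classifiers and human review.

```python
import re

# Hypothetical blocklist; real systems combine keyword rules with
# trained classifiers and human review.
BLOCKED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bexample-banned-term\b", r"\banother-banned-term\b")
]

def is_harmful(text: str) -> bool:
    """Return True if the text matches any blocked pattern."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)

def filtered_generate(prompt: str, generate) -> str:
    """Wrap a model call (`generate`: prompt -> text) with input and output checks."""
    if is_harmful(prompt):  # improper input: refuse before the model runs
        return "[request refused: input violates content policy]"
    output = generate(prompt)
    if is_harmful(output):  # post-processing: block harmful output
        return "[response withheld: output violates content policy]"
    return output
```

Even such a simple wrapper shows where the two compliance checkpoints sit: before the prompt reaches the model, and before the output reaches the user.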


Liable parties


There are three types of liable parties involved in the industrial chain of AI-generated content:


Generative AI service providers, referring to organisations or individuals that provide these services;


Technical supporters, referring to organisations or individuals that provide technical support for generative AI services; and


Service users, referring to organisations or individuals that use generative AI services to create, reproduce, publish or disseminate information.


Generally, service users are the public at large rather than a defined group, so the analysis below does not treat them as a distinct liable party.


Definition of liability


The analysis of criminal risk mainly targets service providers and technical supporters, who may be held liable for harmful information in AI-generated content under different circumstances.


Undesirable training data or model architecture. Technical supporters may be held criminally liable for using illegal data or foundation models, for substandard data labelling, or for failing to comply with security requirements.


Lack of filtering and monitoring. Service providers are obliged to implement filtering and monitoring mechanisms to prevent output of harmful information. Failure to do so may result in criminal liability.


Improper input. Should service users generate harmful content through improper input, the liability of service providers and technical supporters depends on their attitude towards the users’ behaviour and on their own regulatory measures. Under China’s accomplice theory and the offences added by Amendment IX to the Criminal Law – such as assisting information network criminal activities, refusing to fulfil information network security management obligations, and illegally using information networks – conduct that would traditionally count as complicity is now punishable as a principal offence.


As it becomes harder for service providers and technical supporters to escape liability on the grounds of technology neutrality, the core issue lies in their subjective knowledge of, and collusion in, a crime, which imposes higher compliance requirements on them. Even where no collusion in a specific crime is identified, service providers and technical supporters may still be held liable on a presumption of knowledge, given the role that network and AI products play in facilitating criminal acts.


Article 9 of the Interim Measures for the Administration of Generative Artificial Intelligence Services mandates that service providers bear the responsibilities of network information content producers. Article 14 stipulates that providers must promptly halt the generation and transmission of any illegal content they discover, remove such content and report it to the relevant authorities.

Additionally, they must undertake model optimisations and other measures for rectification. If users are found to have engaged in illegal activities using generative AI services, providers must issue warnings, restrict related functions, suspend or even terminate their services, properly preserve relevant records, and report to the authorities.
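As an illustration of how these Article 14 duties on user misuse might translate into practice, here is a minimal sketch of a graduated-response policy. The Action levels, the escalation order and the ViolationRecord fields are illustrative assumptions, not requirements drawn from the measures themselves.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum, auto

class Action(Enum):
    WARN = auto()
    RESTRICT_FUNCTIONS = auto()
    SUSPEND_SERVICE = auto()
    TERMINATE_SERVICE = auto()

@dataclass
class ViolationRecord:
    user_id: str
    detail: str
    action: Action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def respond_to_violation(user_id: str, prior_violations: int, detail: str) -> ViolationRecord:
    """Escalate the response as a user's violations accumulate (illustrative policy)."""
    escalation = [Action.WARN, Action.RESTRICT_FUNCTIONS,
                  Action.SUSPEND_SERVICE, Action.TERMINATE_SERVICE]
    action = escalation[min(prior_violations, len(escalation) - 1)]
    record = ViolationRecord(user_id, detail, action)
    # In practice the record would also be preserved and reported to the
    # relevant authorities, as Article 14 requires.
    return record
```

The escalation order here is one plausible reading of “warnings, restricted functions, suspension or termination”; the measures leave the exact thresholds to providers.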


Article 10 of the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services likewise requires providers of deep synthesis services to strengthen the management of deep synthesis content, and to review user input data and synthesis output by technical or manual means.

Deep synthesis service providers must establish and maintain feature databases for identifying illegal and harmful information, improve the standards, rules and procedures for those databases, and properly keep and preserve the relevant network logs.
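A minimal sketch of what such a feature database and log trail could look like follows. The SHA-256 fingerprinting, the in-memory set and the JSON log file are illustrative assumptions; production systems would use perceptual hashes or classifiers for media content, and durable storage for logs.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical feature database: SHA-256 digests of known illegal content.
FEATURE_DB: set[str] = set()

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def register_illegal_content(content: bytes) -> None:
    """Add a known illegal item to the feature database."""
    FEATURE_DB.add(fingerprint(content))

def review_and_log(content: bytes, log_path: str = "review.log") -> bool:
    """Check content against the database and append an audit log entry.

    Returns True if the content matches a known illegal item.
    """
    digest = fingerprint(content)
    hit = digest in FEATURE_DB
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,
        "matched": hit,
    }
    # Network logs must be properly kept; here they go to a local file.
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return hit
```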


On discovering any illegal or harmful information, providers of deep synthesis services must take appropriate measures, preserve relevant records, and promptly report the discovery to the Cyberspace Administration and other relevant authorities. They must also take action against the users concerned, including issuing warnings, restricting functions, suspending services and closing accounts.


It is evident that Chinese regulatory authorities have imposed stringent obligations on service providers of AI-generated content. Where generative AI produces illegal content because a service provider has failed to fulfil the above obligations, the authorities will order the provider to take corrective action.


Conclusion


The rapid development of AI-generated content technology has brought innovation and convenience, but also compliance challenges. Clearly establishing the criminal liability of providers is crucial to balancing innovation against legal risk. Within this evolving legal framework, service providers, technical supporters and service users must all strengthen their compliance awareness and mitigate risks.