The Criminal Risks of Generative AI Content Dissemination: A Case Study of the First Domestic Case of ChatGPT Fabricating False Information

Authors

  • Peichen Huang

DOI:

https://doi.org/10.54691/sx2pnb41

Keywords:

Generative AI; Spread of False Information; Risk of Criminal Involvement; ChatGPT False Information Case; Application of Law.

Abstract

Generative AI is now deeply integrated into the field of information dissemination. While automated content generation improves efficiency, it also gives rise to legal and criminal risks such as the spread of false information. The first domestic case of ChatGPT being used to fabricate false information has pushed these risks to the forefront of judicial practice. The current legal framework, however, is not adapted to the characteristics of AI technology: gaps remain in attributing responsibility for AI-generated false information and in applying existing legal provisions, which leaves rights and responsibilities ambiguous in practice and complicates case handling. This article sorts out the core facts and disputes of the case, analyzes the difficulties of legal application through qualitative analysis, and finds that current supervision of AI-generated content lags behind and lacks targeted provisions. It proposes strengthening legislative research and promoting the integration of law and AI to achieve collaborative governance. The study offers a new perspective on determining the legal liability of generative AI, fills a theoretical gap in judicial practice, constructs a complete regulatory framework, and provides empirical references and practical guidance for policymakers seeking to improve AI supervision.

References

[1] Zhang S.H., Li K. (2024) Research on the Risk and Governance of False Information in Generative Artificial Intelligence. Academic Exploration, (07): 129–140.

[2] Tang Y., Wang Y.T., Xu D.T. (2024) Legal Regulation of ChatGPT-like Artificial Intelligence from the Perspective of False Information Governance. Journal of Hubei University of Education, 41 (04): 34–39.

[3] Nie T. (2023) Legal Risks and Compliance of ChatGPT Generative AI. Internet World, (03): 29–33.

[4] Cheng L. (2023) Legal Regulation of Generative Artificial Intelligence: From the Perspective of ChatGPT. Political Science and Law Forum, (04): 69–80.

[5] He X. (2023) Legal Risks of Generative AI and Responses. Journal of Northwest University for Nationalities (Philosophy and Social Sciences Edition), (04): 89–98. DOI: https://doi.org/10.14084/j.cnki.cn62-1185/c.2023.04.008.

[6] Fu H., Fu L.R. (2023) On the Application Fields of the Crime of Fabricating and Intentionally Spreading False Information: From the Perspective of Distinguishing It from the Crime of "Network-based" Provoking Trouble. Journal of Sichuan Institute for Nationalities, 32 (03): 72–78, 97. DOI: https://doi.org/10.13934/j.cnki.cn51-1729/g4.2023.03.006.

[7] Yang J.W., Luo F.Y. (2024) The Operating Mechanism, Legal Risks and Regulatory Paths of ChatGPT-like Generative Artificial Intelligence. Administration and Law, (04): 101–115.

[8] Jia J. (2021) Research on the Application of Criminal Charges in Cases of Fabricating and Intentionally Spreading False Information Online: From the Perspective of Distinguishing between the Crime of Fabricating and Intentionally Spreading False Information and the Crime of Provoking Trouble. Theoretical Issues, (01): 146–153. DOI: https://doi.org/10.14180/j.cnki.1004-0544.2021.01.019.

[9] Zhang Y.G., An R. (2021) The Judicial Application of the Crime of Fabricating and Intentionally Spreading False Information: Current Situation Reflection and Path Optimization. Qilu Academic Journal, (03): 107–117.

Published

2026-02-12

Issue

Section

Articles

How to Cite

Huang, Peichen. 2026. “The Criminal Risks of Generative AI Content Dissemination: A Case Study of the First Domestic Case of ChatGPT Fabricating False Information”. Scientific Journal Of Humanities and Social Sciences 8 (2): 45-52. https://doi.org/10.54691/sx2pnb41.