Deepfake Technology and Obscene-Material Crimes: Dilemmas of Legal Application and the Procuratorial Response
Legal Application Dilemmas and Legislative Evolution of Deepfake-Related Obscenity Crimes
These studies focus on the bottlenecks in applying existing criminal-law systems to deepfake pornography (non-consensual intimate deepfakes, NCIDs), analyzing legal vacuums and offering concrete legislative-reform proposals or comparative-law research.
- The ‘new voyeurism’: criminalizing the creation of ‘deepfake porn’(Clare M. S. McGlynn, Rüya Tuna Toparlak, 2025, Journal of Law and Society)
- Deepfake Threats And EU Law: Navigating Disinformation, Cyber Violence, And The Risks Of Digital Manipulation(Mariam Makhniashvili, 2026, SSRN Electronic Journal)
- Bridging Legal Gaps in AI-Generated Deepfake Pornography: A Comparative Approach to Privacy and Digital Ethics(Gefy Caesarati Zumarno, Sri Jaya Lesmana, Ratna Indayatun, 2026, Jurnal Ius Constituendum)
- Non-consensual deepnudes: responses under EU law to a novel form of sexual abuse(S. Schmitz-Berndt, Nils Langensteiner, M. E. Kalpakos, 2026, International Review of Law, Computers & Technology)
- Artificial Intelligence and Sexual Offences: An Analysis of Deepfake Pornography in Light of Criminal Law(Túlio Felippe Xavier Januário, 2025, Teisė)
- Legal Protection of Revenge and Deepfake Porn Victims in the European Union: Findings From a Comparative Legal Study(Karolina Mania, 2022, Trauma, Violence, & Abuse)
- Deepfake and non-consensual pornography: recent iterations of the gendered battle for rights in a photograph(Jessica Lake, 2024, A Research Agenda for Intellectual Property Law and Gender)
- The EU Approach to Non-Consensual Sexual Deepfakes: Criminal Law, Tech Regulation and the Risk of Fragmentation(Federica Fedorczyk, 2025, European Criminal Law Review)
- Producing and/or Distributing Intimate Images of a Person without its Consent(Сергей Николаевич Клоков, Павел Александрович Тихонов, 2023, Legal Issues in the Digital Age)
- Threats and regulatory challenges of non-consensual pornographic deepfakes: an analysis of the colombian case(H. Guerrero-Sierra, Marcela Palacio Puerta, Daniel Felipe Garavito Rincón, 2025, Cogent Social Sciences)
- The Weaponisation of Artificial Intelligence (AI): Legal Shortfalls and Regulatory Difficulties in Governing Non-Consensual Intimate Deepfakes (NCIDs)(Joshua Ward, 2025, Preprints.org)
- Addressing Deepfake Pornography and the Right to be Forgotten in Indonesia: Legal Challenges in the Era of AI-Driven Sexual Abuse(Angelica Vanessa, Audrey Nasution, Suteki, A. Lumbanraja, 2025, International Journal for the Semiotics of Law - Revue internationale de Sémiotique juridique)
Evidence Acquisition and Technical Detection in Deepfake Crimes
This group of studies concentrates on the evidentiary difficulties that deepfake content raises in judicial practice, including the authentication of digital evidence, the development of forgery-detection technology, and the challenges of forensic workflows.
- Digital Face Forgery and the Role of Digital Forensics(Manotar Tampubolon, 2023, International Journal for the Semiotics of Law - Revue internationale de Sémiotique juridique)
- The challenges of Digital Evidence usage in Deepfake Crimes Era(Mohamed Hassan Mekkawi, 2023, Journal of Law and Emerging Technologies)
- A Systematic Literature Review of Deepfakes in Forensic Science(Jenifer Loovens, H. Tınmaz, 2025, Forensic Imaging)
- Detecting deep fake evidence with artificial intelligence: A critical look from a criminal law perspective(F. Palmiotto, Available at SSRN 4384122)
- Your Honor, Video Lies: Deepfakes and the Future of Authenticating Digital Evidence in Criminal Procedure(Ronny Lee, 2026)
- Application of Deep Learning in Digital Evidence Collection and Authentication(Jinlong Gao, 2025, Advances in Economics and Management Research)
- 空间域增强的通道自适应深度伪造图像检测方法(李佳林, 沈哲, 2025, 计算机辅助设计与图形学学报)
- Empirical Assessment of Deepfake Detection: Advancing Judicial Evidence Verification Through Artificial Intelligence(Ebrima Hydara, Masato Kikuchi, Tadachika Ozono, 2024, IEEE Access)
Criminal Liability and Governance Ethics of Generative Artificial Intelligence
These studies examine the essential nature of AI as an instrument of crime, including attribution difficulties, algorithmic accountability, and macro-level governance frameworks for balancing technological innovation with the protection of personality rights.
- Generative AI and criminal law(Beatrice Panattoni, 2025, Cambridge Forum on AI: Law and Governance)
- 技术滥用与权益衡平:中日深度合成技术侵害人格权规制比较研究(张译丹, 2026, 环球社科评论)
- AI换脸技术的应用风险及法律规制(刘文涛, 2023, 电子科技大学学报(社会科学版))
- Blurred realities: Legal strategies for the deepfake era(Luca Ettore Perriello, 2026, Maastricht Journal of European and Comparative Law)
- Deepfake Technology and Gender-Based Violence: A Scoping Review.(Lisa Lazard, Rose Capdevila, Emma L. Turley, Kathryn Gilfoyle, Nelli Stavropoulou, 2025, Trauma, Violence, & Abuse)
- It’s Not Porn, It’s Sexual Abuse: A Scoping Review of Sexual Deepfakes Public Opinions, Perpetration, and Harms(Jacinto G. Lorca, 2025, Violence and Gender)
- Cyber Crime or Technological Epidemic? Intersecting the Criminalization of Sexual Deepfake in Domestic and International Law(Evnat Bhuiyan, Shariful Islam, Abdullah al-Mamun, Asraf Uddin, 2025, OALib)
- A feminist legal analysis of non-consensual sexualized deepfakes: contextualizing its impact as AI-generated image-based violence under EU law(Anastasia Karagianni, Miriam Doh, 2024, Porn Studies)
- DEEPFAKE AI AND CRIMINAL LAW: A NEW AGE THREAT TO WOMEN’S SAFETY(Srishti Sehgal, 2025, LawFoyer International Journal of Doctrinal Legal Research)
- When non-consensual intimate deepfakes go viral: The insufficiency of the UK Online Safety Act(Beatriz Kira, 2024, Computer Law & Security Review)
- #MeToo in an AI-generated deepfake sexual violence era in South Korea(SeungGyeong Ji, 2025, Women's Studies International Forum)
- 人工智能犯罪与我国对策研究(高建新, 孙锦平, 蔡瑜坤, 王崇鹏, 杨燕燕, 王凯悦, 2025, 中国科学院院刊)
Taken together, this body of research reveals the complex challenges that deepfake technology poses to criminal justice along three dimensions: first, the definitional lag of traditional legal systems in the face of new AI-enabled pornography offenses and the resulting need for legislative reconstruction; second, the critical supporting role of digital forensics in the courtroom admission of evidence; and third, macro-level governance debates over AI attribution logic, personality-rights protection, and technology ethics. These works offer theoretical and practical reference for procuratorial organs on legal application, evidence collection, and institutional regulation when combating such crimes.
A total of 32 relevant references
The rapid development of artificial intelligence keeps generating new scenarios, new models, and new markets, and has changed how information and knowledge are produced. Yet the security risks the technology exposes, such as algorithmic bias, data leakage, generation of false content, and improper use, can easily give rise to new types of crime, while current legal regulation and technical safeguards still leave gaps, posing severe challenges for crime fighting. To respond effectively to the new challenges of AI crime in China, the existing legal norms should be supplemented and improved, technical defenses strengthened, supervision and talent cultivation enhanced, and international cooperation expanded, so as to steadily improve the capacity to prevent AI crime.
AI face-swapping originates from deepfake technology; it involves large volumes of facial information and therefore also has a data dimension. Its high realism and low barrier to use have led to wide application, but misuse of the technology creates elevated risks of infringing private rights such as copyright and personality rights, along with information-security and crime-prevention risks. Abroad, legal regulation of AI face-swapping currently follows two main models: dispersed legislation and unified legislation. China has not yet built a systematic regulatory regime and should improve it in the following respects: first, establish reasonable use, differentiated by application scenario, as the basic regulatory principle; second, build ex ante and in-process regulatory systems that specify developers' technology-ethics obligations and creators' labeling and declaration duties, safeguard data subjects' rights to know and to consent, and strengthen dissemination platforms' content-review obligations; third, construct a supervisory regime centered on data and algorithms; fourth, enforce civil liability strictly and intensify criminal sanctions.
The wide application of deep-synthesis technology has spawned new forms of personality-rights infringement such as "AI face-swapping", challenging the traditional personality-rights protection system. Using normative analysis together with case-based and functional comparison, this study systematically compares the legal responses of China and Japan on issues such as identifying infringing conduct and allocating responsible parties. It finds that China has formed a "special rules + Civil Code principles" regulatory model, marked by the Provisions on the Administration of Deep Synthesis of Internet Information Services and emphasizing ex ante prevention, whereas Japan takes a "multiple statutes + case-law doctrine" approach, relying on the Act on the Promotion of Research, Development and Utilization of AI-Related Technologies and stressing ex post remedies and platform self-governance. The two regimes differ in emphasis: China stresses front-loaded, systematic rights protection, while Japan favors case-by-case balancing between technological innovation and rights protection. On concrete institutional design, such as strengthening platform obligations, the two countries can learn from each other. The study suggests that China refine the criteria for "serious mental harm" in judicial interpretations of the Civil Code and explore introducing litigation tools similar to a "disclosure order", so as to achieve a dynamic balance between technological development and personality-rights protection.
To address the problems that existing deepfake-detection methods pay little attention to spatial-domain image information and carry high model complexity, a channel-adaptive deepfake-image detection method enhanced with spatial-domain features is proposed. First, spatial-domain information is extracted from the image and the spatial feature map is normalized and fed into the network as a fourth channel. Second, an SE-Layer module is added to the backbone network to rebuild the weights of the four channels and resolve inter-channel heterogeneity. Finally, a semi-automatic pre-training strategy is designed to further improve training efficiency and accuracy. Taking deepfake face detection as an example, detection experiments were conducted on seven datasets from different sources. The results show that even without data augmentation the method outperforms the baseline methods; on StyleGAN2, the worst-performing dataset, accuracy still reaches 99.60% and AP reaches 98.00%.
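The pipeline this abstract describes (a normalized spatial-domain feature map appended as a fourth input channel, followed by SE-style channel reweighting) can be sketched numerically. This is an illustrative sketch, not the paper's implementation: the high-pass residual stands in for the unspecified spatial-domain feature, and the SE bottleneck size and random weights are placeholder assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_feature(gray):
    """High-pass (Laplacian) residual normalized to [0, 1] -- an assumed
    stand-in for the paper's spatial-domain feature map."""
    k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h, w))
    padded = np.pad(gray, 1, mode="edge")
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    span = out.max() - out.min()
    return (out - out.min()) / span if span > 0 else out

def se_reweight(x, w1, b1, w2, b2):
    """Squeeze-and-excitation over channels: global average pool ->
    ReLU bottleneck -> sigmoid gate -> per-channel rescaling."""
    squeeze = x.mean(axis=(1, 2))                # (C,)
    hidden = np.maximum(0.0, w1 @ squeeze + b1)  # (C/r,)
    gate = sigmoid(w2 @ hidden + b2)             # (C,), each in (0, 1)
    return x * gate[:, None, None], gate

rng = np.random.default_rng(0)
rgb = rng.random((3, 8, 8))                      # toy RGB image
fourth = spatial_feature(rgb.mean(axis=0))       # spatial-domain channel
x = np.concatenate([rgb, fourth[None]], axis=0)  # 4-channel network input

# random SE parameters with reduction ratio 2 (4 -> 2 -> 4 channels)
w1, b1 = rng.standard_normal((2, 4)), np.zeros(2)
w2, b2 = rng.standard_normal((4, 2)), np.zeros(4)
y, gate = se_reweight(x, w1, b1, w2, b2)
print(y.shape, gate.round(3))
```

In the paper's actual network the reweighting sits inside a trained backbone; this fragment only shows how the fourth channel and the learned per-channel gate interact.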
… The AI-generated sexual violence of South Korea has … sexual content. Despite the groundwork laid by the South Korean #MeToo movements, I observe that most victims of deepfake …
… Deepfake technology facilitates … by deepfake pornography in Indonesia encounter significant obstacles in asserting their right to be forgotten, as specified in the Law on Sexual Violence …
The purpose of this study is to examine the challenges posed by AI-generated deepfakes. They are one form of so-called "synthetic media", which draw on advances in AI, using algorithms and deep learning to change elements of a photo, video or audio track, or to recreate a person's voice or face with lifelike subtlety, creating the illusion of staged actions by someone else. Earlier versions of generative AI required coding capabilities and technical proficiency; today, anyone with internet access is constrained only by their own creativity. The goal of this paper is to explore critical areas where EU legislative regulation is essential to protect fundamental rights. This research explores deepfake narratives, their impact on elections, social engineering, and harassment, and how the European Union (EU) is responding through policy measures and legal frameworks. Additionally, it examines the legal accountability of deepfake creators and platforms, particularly in cases of image-based sexual abuse and political deception. By analyzing the intersection of Directive (EU) 2024/1385 with freedom of expression, this study aims to evaluate the effectiveness of EU strategies in defending truth and combating disinformation. Since false information falls into three categories (disinformation, misinformation, and malinformation), deepfakes are classified as disinformation. Additionally, this study will discuss the AI Act and the Digital Services Act, assessing their roles in regulating deepfake-generated content, ensuring transparency, and addressing platform accountability within the EU's legal framework.
… campaigns - akin to the UK's "Revenge Porn Helpline" initiative- should communicate the message that sharing or viewing deepfakes constitutes participation in sexual violence. …
This paper explores the legal implications of non-consensual sexual deepfakes, specifically analyzing whether the creation, distribution, exhibition, or possession of such content involving adults or minors could be considered sexual offenses under Portuguese law. By applying a deductive methodology, the study reviews Portuguese, European, North American, and Brazilian legal frameworks, doctrines, and case law related to sexual crimes and Artificial Intelligence (AI), applying them to the issue of deepfakes. The research begins by discussing the increasing prevalence and impact of non-consensual sexual deepfakes, a form of digital manipulation in which AI is used to superimpose a person's face onto someone else's body in fabricated explicit content. With the rise of easily accessible deepfake technology, the realism of such videos has made it increasingly difficult to distinguish them from authentic material, resulting in significant harm to victims, both adults and minors. This includes emotional and reputational damage, as well as severe psychological consequences such as PTSD, anxiety, and depression, particularly among adults, and developmental harm in the case of minors. The research also highlights the alarming potential for deepfakes to depict child sexual abuse, exacerbating concerns about the exploitation of minors. The study investigates whether non-consensual sexual deepfakes could be classified as criminal offenses under Portuguese law. It finds that, while the current legal provisions effectively address deepfakes involving minors, a distinction must be made between two types of virtual child pornography: fully virtual representations, and those involving partially real images of minors. Only the latter, where real children's images are used, should be classified as a crime, as it violates the minor's conditions of development. For deepfakes involving adults, however, the research identifies a legal gap.
Existing offenses, such as "computer fraud", "illicit recordings or photographs", or "aggravated defamation", may cover some aspects of deepfake cases, but they do not adequately address the unique harm caused by non-consensual sexual deepfakes. The paper argues that a new criminal offense should be introduced specifically to protect sexual privacy and prevent the creation or distribution of non-consensual sexual deepfakes involving adults. Drawing on the Brazilian legal framework as a model, the study suggests implementing tailored provisions that criminalize these acts in order to safeguard victims' sexual privacy. The study concludes by emphasizing the need for legal reform that balances technological advancement with individual rights. While the paper cautions against excessive expansion of criminal law or a moralistic approach to sexual crimes, it advocates the creation of specific legal protections against non-consensual sexual deepfakes, while acknowledging the enduring societal impact of this issue. These solutions would help address the harmful effects of deepfakes and provide better legal safeguards for victims of this rapidly evolving digital phenomenon.
… sexualized deepfakes are distributed. To better understand how sexualized deepfakes are … After having captured the harms of deepfake technology, legal solutions will be provided …
… sexual deepfake as a distinct offence. For portraying the importance of … sexual deepfake, the authors analyzed various examples of AI induced sexual violence, existing laws on deepfake…
Online violence against women (OVAW) is a growing global problem with deepfakes in gender-based violence as one manifestation of this that has recently attracted considerable attention. This scoping review aims to explore emerging complexities in current academic understandings of deepfake in relation to its use in gender-based violence. The review considers how these issues impact and shape what is currently known about deepfakes in relation to OVAW. Articles were collected between July and September 2024 and then filtered drawing on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Review guidelines. Six research databases were searched using 12 search terms compiled by three of the article authors. This resulted in a total number of 3148 articles that were filtered to identify 397 articles that were reviewed in full. The subset was further filtered in order to focus on psychology and the social sciences resulting in a total of 64 articles for analysis. As psychology and the social sciences begin to capture the implications of deepfake creation and dissemination, in the context of online sexual violence, there is a need to investigate how deepfakes are used to silence women in public spaces online, as well as empirically acknowledging the inherent gendered systemic discriminations within deepfake technology and its uses. While important, research must move beyond perceived credibility and detection techniques of deepfakes and toward an analysis of intersectional power dynamics at play in this form of gender-based violence.
The use of images of persons in a pornographic context (without the prior consent of the person concerned) on the internet is an increasingly widespread infringement. Unlawful activities carried out with the use of generated images and artificial intelligence are a variant of this phenomenon. “Revenge porn” and “deepfake porn” illustrate the inadequacy of legal systems vis a vis the fast-changing reality. Using the comparative law method, a comparison was made between the current laws of nine EU Member States to create a map of protection for victims of revenge porn. As the results showed, in three of the studied countries there is a separate incrimination of revenge porn; however, the conceptual scope of its definition is significantly different and it is these differences that determine the legal way for the victims to assert their rights. This article is a comparison of the current legal regulations of selected European Union countries and the means of legal protection used by the victims. The text presents the differences occurring in the legal systems adopted in the countries subject to analysis, as well as an assessment of possible solutions at the legal and technological level to face the existing problem.
This article examines the legal, regulatory and societal challenges posed by deepfake technology, situating its analysis within a comparative framework spanning the European Union, United States and China. It explores the multifaceted harms of deepfakes – from non-consensual pornography and political disinformation to financial fraud and identity manipulation – and analyses their rapid dissemination through online ecosystems that undermine both individual dignity and democratic trust. The study assesses the EU Artificial Intelligence Act, highlighting its transparency-based approach, definitional boundaries and classification of deepfake systems as ‘limited-risk’, while identifying contexts that may warrant high-risk or prohibited status. It underscores the limitations of transparency obligations in addressing malicious actors, cross-border disinformation and intimate image abuse, and examines the complementary roles of the Digital Services Act, the General Data Protection Regulation and the EU Directive on combating violence against women in regulating different stages of the deepfake lifecycle. Ultimately, the article argues for a multi-layered, adaptive governance model that reconciles the protection of rights, dignity and democratic integrity with the preservation of legitimate innovation in AI-driven creativity and communication.
The significant harm caused by non-consensual sexual deepfakes is now well-established. Nevertheless, it was only with Art. 5 Sec. 1(b) of the recent Directive (EU) 2024/1385 that the EU mandated Member States to criminalise conducts related to non-consensual sexual deepfakes. However, many national criminal codes across Europe do not yet criminalise such acts. This paper critically examines the reasons behind this “regulatory gap”: it provides an overview of the phenomenon and its legal framework, with the intention to demonstrate that it has been – and continues to be – largely overlooked both by national criminal law and by the EU in its strategies for regulating AI and emerging technologies. It argues that as long as these two branches of law continue to operate on separate tracks, the root causes of such misconduct will remain insufficiently addressed. The paper concludes that effective solutions require a stronger framework than the one adopted by Directive (EU) 2024/1385 and – drawing on the comparative findings – recommends that EU Member States, in transposing the Directive, adopt more stringent provisions.
This study aims to examine the legal gaps in regulating AI-generated deepfake pornography and to develop a comparative regulatory model to strengthen privacy protection and digital ethics in Indonesia. The rapid advancement of artificial intelligence has facilitated the creation and dissemination of non-consensual synthetic pornographic content, posing serious threats to individual privacy, dignity, and psychological well-being, while existing Indonesian laws remain fragmented and inadequate. This research employs normative legal research using statutory, conceptual, and comparative approaches, focusing on Indonesian regulations and comparative frameworks from California and South Korea. The findings reveal that current Indonesian legal instruments, including the Electronic Information and Transactions Law, the Pornography Law, and the Personal Data Protection Law, do not explicitly regulate deepfake pornography, resulting in legal uncertainty, enforcement challenges, and insufficient victim protection. In contrast, comparative jurisdictions provide clearer definitions, consent-based standards, and comprehensive victim remedies. This study proposes a regulatory reform model based on lex specialis principles, digital ethics, victim-centered protection, and adaptive legal governance to address AI-based digital crimes. The originality of this research lies in its integrative comparative framework that connects criminal law, personal data protection, and digital ethics to formulate a comprehensive and future-oriented legal response. These findings contribute to the development of responsive legal policies and provide a normative foundation for regulating synthetic media to safeguard digital privacy and human dignity in the artificial intelligence era.
This article presents a scoping review of empirical research on public perceptions, the perpetration and harms associated with sexual deepfakes, nonconsensual explicit content fabricated through generative technologies. Using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, 14 peer-reviewed studies retrieved from three academic databases were analyzed. The literature predominantly focuses on Anglo-Western contexts, resulting in a significant absence of research from culturally diverse settings. The findings are organized into three main topics and related subthemes: (i) “public opinions,” which includes perceptions of sexual deepfake content, motivations, harms, construction of perpetrators’ identity, and the role of digital platforms; (ii) “perpetration,” which addresses the sociotechnical aspects of the sexual deepfakes, such as the accessibility of generative nudatory technologies and its relationship with the cultural frameworks surrounding the abuse; and (iii) “impacts and harms,” which details the real, embodied consequences experienced by victims and survivors and the barriers they face. The review identifies three critical areas of debate: (1) terminological debates surrounding sexual deepfakes; (2) differences between survey-based and online studies; and (3) empirical gaps and directions for research. Future research is encouraged to employ intersectional, survivor-centered, culturally grounded, and prevention-focused methodologies to better understand how sexual deepfakes are produced and experienced.
… problems of deepfake and non-consensual pornography. … Deepfake technology might be new but the ways in which it is … to prosecute consensual pornography, as non-consensual. …
Modern times have created new types of crime previously unknown to criminal-law doctrine. One of these is the unlawful public distribution of intimate images of a person without that person's consent, including distribution on the Internet; in international practice this conduct is usually called "non-consensual porn". Such unlawful conduct is now actively studied in foreign legal systems, some of which have recently criminalized it; in Russian law, however, "non-consensual porn" remains an under-researched topic in both doctrine and practice, although the conduct itself exists. The dispositions of a number of articles in Chapters 19 and 25 of the Special Part of the Criminal Code of the Russian Federation cover the act only partially; the need to amend the law is therefore already ripening, given the need to modernize criminal legislation in light of the various ways such a crime can be committed. Focusing on the modes of commission, the authors identify and explore three ways of creating "non-consensual porn": production by covert filming, production of intimate images with the consent of the person depicted, and production by means of computer technologies. The authors also attempt to differentiate the act under study from existing crimes in the Special Part of the Criminal Code (Articles 128.1, 137, 242, etc.). The subject of the research is "non-consensual porn" as an unlawful act. Its aim is to construct a comprehensive model of a "non-consensual porn" offense in the Russian criminal-law system and to justify the need to criminalize this act as an independent crime. The authors demonstrate the need to protect people's rights against "non-consensual porn", especially through criminal law given the danger of the act, to differentiate it from other crimes, and to criminalize it in Russian criminal law.
The research offers significant theses for the development of criminal-law scholarship and formulates draft provisions for the Russian Criminal Code, which gives the work practical significance.
The proliferation of non-consensual pornographic deepfakes has raised ethical, legal, and social concerns worldwide. This form of gender-based violence violates fundamental rights such as sexual freedom, dignity, privacy, and reputation, and disproportionately affects women. This article examines the regulatory challenges posed by non-consensual pornographic deepfakes in Colombia, where current policies and legal responses remain fragmented and insufficient. Using Lawrence Friedman’s tripartite model of legal systems, which considers legal structure, legal substance, and legal culture, the study identifies institutional limitations, legal gaps, and cultural barriers that undermine effective victim protection. Drawing on comparative legal frameworks from jurisdictions that criminalize synthetic sexual content, the paper proposes guiding principles for adopting clear criminal legislation in Colombia. It argues that reform must extend beyond criminal definitions to include institutional coordination, victim support services, and public awareness campaigns, offering a comprehensive and multidisciplinary response. Based on a documentary review of legal, doctrinal, and news sources, the analysis concludes that Colombia urgently needs legislation tailored to online sexual violence, while also embracing technological and educational measures to mitigate harm. This approach aims to build a rights-based legal response to an unregulated digital environment and to protect victims’ dignity and autonomy.
This article examines the increasingly prevalent threat posed by non-consensual intimate deepfakes (NCIDs), AI-generated sexually explicit content which resembles a real person, and critiques the current legislative framework in the UK, which fails to criminalise the creation of NCIDs. While the Online Safety Act (OSA) 2023 criminalises the distribution of NCIDs, the simple act of creating NCIDs for sexual gratification or future criminal activity remains lawful. Utilising interdisciplinary research, victim testimony, and a comparative analysis with similar legislation within the European Union (EU), this article submits that the current legislation in the UK fails to protect victim-survivors and overlooks the serious harms caused by the creation of NCIDs. Instead, we propose a strict liability model that focuses on a lack of consent rather than a defendant’s mens rea, aligning NCID offending with the broader context of image-based sexual abuse (IBSA). This article concludes that legislative reform is needed immediately to criminalise the creation of NCIDs, close legal loopholes and, most importantly, protect the dignity, privacy and sexual autonomy of victim-survivors.
… with non-consensual intimate imagery (ie ‘revenge pornography… and dissemination of deepfake pornography, this paper … of deepfake pornography as ‘high-risk’ under the AI Act. …
… focus on potential misinformation harms, ‘non-consensual intimate deepfakes’ (NCID) – a … that the law should mandate all AI-powered deepfake creation tools to ban the generation of …
Lawmakers around the world are turning their attention to deepfake sexual abuse to reduce its prevalence and provide redress to victims. Thus far, criminal law reforms have tended to focus on the distribution of this material, with far less attention given to targeting the root cause – namely, creation and solicitation. Accordingly, we provide the first comprehensive analysis of sexually explicit deepfake creation. We explore the distinct harms of creation, including the ‘invisible threat’ of deepfake sexual abuse now pervading the lives of all women and girls. ‘Sexual digital forgeries’ is suggested as a more appropriate term that better recognizes the nature and harms of this form of abuse. We justify the deployment of criminal sanctions, advancing the idea that this phenomenon should be understood as the ‘new voyeurism’. The laws in jurisdictions that currently criminalize creating sexually explicit deepfakes are examined, together with law reform options being considered in England and Wales. We recommend that legislators act with urgency, adopting a comprehensive approach to criminalizing creation.
Several criminal offenses can originate from or culminate with the creation of content. Sexual abuse can be committed by producing intimate materials without the subject’s consent, while incitement to violence or self-harm can begin with a conversation. When the task of generating content is entrusted to artificial intelligence (AI), it becomes necessary to explore the risks of this technology. AI changes criminal affordances because it creates new kinds of harmful content, it amplifies the range of recipients, and it can exploit cognitive vulnerabilities to manipulate user behavior. Given this evolving landscape, the question is whether policies aimed at fighting Generative AI-related harms should include criminal law. The bulk of criminal law scholarship to date would not criminalize AI harms on the theory that AI lacks moral agency. Even so, the field of AI might need criminal law, precisely because it entails a moral responsibility. When a serious harm occurs, responsibility needs to be distributed considering the guilt of the agents involved, and, if it is lacking, it needs to fall back because of their innocence. Thus, legal systems need to start exploring whether and how guilt can be preserved when the actus reus is completely or partially delegated to Generative AI.
This research paper discusses the challenges of using digital evidence in the deepfake-crimes era under both Egyptian and US legislation. The law must undoubtedly keep pace with behaviors that threaten fundamental interests deserving protection, especially in an era when information technology is accelerating toward modern technologies that raise many concerns: artificial-intelligence algorithms now handle a large number of tasks that did not exist a few years ago, such as processing big data and simultaneous machine translation. One of those algorithms is the deepfake, classified as the most dangerous AI algorithm in terms of cybersecurity threats, and one that compounds the already complex investigation of computer-related crimes because of the obstacles to gathering evidence. After discussing the essence of digital evidence, stating its types, forms, characteristics, sources, principles, and the challenges facing its application, and comparing the laws regulating digital evidence nationally and internationally (the Budapest Convention) with the US federal rules on digital evidence, the researcher presents recommendations to reduce the risks and challenges of these crimes and to assist the legislator in addressing the shortcomings in Egyptian laws. Keywords: digital evidence, deepfake, cybercrime, digital privacy, cybersecurity.
Deepfake technology poses a profound challenge to the integrity of facial evidence in criminal justice, threatening the authenticity and admissibility of such evidence in the courtroom. In this research, a specialized deepfake detection system tailored for facial evidence verification was developed, aiming to counteract the influence of deepfake technology. The proposed system integrates a unique combination of video-frame selection, confidence thresholds, prediction timestamps, and heat maps for individual frames of suspect videos. This methodological fusion is designed to support forensic analysts by enhancing the reliability and trustworthiness of video evidence used in judicial settings. Our comprehensive evaluation involved diverse user groups participating in experimental scenarios to assess the effectiveness of the system. The results indicated that the combined features of the system significantly enhanced the detection of fabricated evidence, fostering high levels of confidence and trust among users. Moreover, this study delves into the legal and ethical considerations surrounding the deployment of deep fake-detection technologies, underscoring the necessity for legal frameworks to evolve in response to emerging digital threats. By addressing both the technical and jurisprudential challenges, this research contributes to safeguarding the evidential value of facial recognition in the judicial process against the disruptive potential of deepfake technologies.
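The frame-selection and confidence-threshold design described above can be illustrated with a minimal aggregation rule: score each frame, flag those above a confidence threshold, and report which frames a forensic analyst should inspect. All numbers here (scores, threshold, flagged-frame ratio, decision rule) are invented for illustration and do not reproduce the authors' system.

```python
def classify_video(frame_scores, threshold=0.8, min_flagged_ratio=0.3):
    """Flag a video as likely fabricated when enough frames exceed the
    per-frame confidence threshold; also return the flagged frame indices
    (e.g. candidates for heat-map inspection by an analyst)."""
    flagged = [i for i, s in enumerate(frame_scores) if s >= threshold]
    ratio = len(flagged) / len(frame_scores) if frame_scores else 0.0
    verdict = "likely fabricated" if ratio >= min_flagged_ratio else "no strong signal"
    return verdict, flagged, ratio

# hypothetical per-frame deepfake scores from a detector
scores = [0.12, 0.95, 0.88, 0.40, 0.91, 0.15, 0.86, 0.30]
verdict, flagged, ratio = classify_video(scores)
print(verdict, flagged, round(ratio, 2))
```

The design choice the paper motivates is exposing intermediate signals (per-frame scores, timestamps, heat maps) rather than a single opaque verdict, so that courtroom users can audit why a video was flagged.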
… digital evidence in the age of deepfakes, the legitimacy of outcomes will be questioned even when the evidence … identifying an item of audiovisual evidence, the proponent must produce …
… Investigates the implications of deepfakes in forensic science, emphasizing their ethical, … Highlights the threat deepfakes pose to the credibility of digital evidence and the broader …
… deep fake detection to identify the origin of a deep fake. This may include investigating additional forensic evidence… herein, digital forensic experts have examined digital evidence using …
… to identify manipulated content. This paper critically assesses the use of “deep fake … posed to forensic experts asked to authenticate the digital evidence. Deep fakes, however, pose …
This paper explores the application of deep learning technology in the field of digital evidence collection and authentication. The study focuses on the use of deep learning in image, video, audio, text, and network traffic analysis, and analyzes the main challenges faced, including data insufficiency and interpretability issues. The paper introduces the design of deep learning-based digital evidence analysis systems, including system architecture, core algorithms, and performance optimization strategies. Through case studies, particularly in obscene content detection and minor facial recognition, the paper demonstrates the high accuracy and efficiency of deep learning models. The findings indicate that deep learning technology has brought revolutionary changes to digital evidence analysis, significantly improving accuracy and efficiency, and providing strong support for maintaining social security and judicial fairness.
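Alongside the learned analysis models surveyed above, digital-evidence authentication in practice still rests on basic integrity fixation. As a complementary illustration (not a technique from this paper), a minimal hash-based sketch, where the exhibit bytes are hypothetical:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used to fix a digital exhibit at seizure time, so
    later tampering (including substitution of manipulated frames) is
    detectable by re-hashing and comparing."""
    return hashlib.sha256(data).hexdigest()

# hypothetical exhibit: the same check applies to any video/image bytes
original = b"frame-bytes-of-seized-video"
recorded = fingerprint(original)   # stored in the chain-of-custody log

# later verification: any single-bit change breaks the match
tampered = b"frame-bytes-of-seized-videO"
print(fingerprint(original) == recorded)  # True: exhibit unchanged
print(fingerprint(tampered) == recorded)  # False: exhibit was altered
```

A hash proves only that the bytes are unchanged since fixation; deciding whether the fixed content was itself fabricated is exactly where the deep-learning detectors discussed in this cluster come in.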