Artificial Intelligence and Criminal Cognition
Applications of Cognitive Neuroscience and Psychiatry in Offender Research
These works draw on neuroscience, psychology, and explainable AI (XAI) to examine human cognitive processes, executive functions, the identification and treatment of mental disorders (such as antisocial personality disorder), and the use of technology for risk assessment and correctional treatment of incarcerated individuals.
- XAI-Based Identification of EEG Features of Inhibition and Working Memory Activation(Pasquale Arpaia, Ciro Ivan de Girolamo, Matteo De Luca, A. Fullin, L. Gargiulo, Luigi Maffei, Nicola Moccaldi, R. Robbio, P. D. Blasiis, 2024, 2024 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE))
- A Review of the Application of Neuroscience Technologies in Criminal Justice Practice (Unknown Authors, Hans Publishers)
- The Problem of Mental Control: Neuroscience, Artificial Intelligence, Social Communications(D. Dubrovsky, 2024, Russian Journal of Philosophical Sciences)
- Emerging Therapies for Antisocial Personality Disorder: Psychotherapeutic and Technological Advances(X. Tan, 2025, Lecture Notes in Education Psychology and Public Media)
- Philosophical Principles of Artificial General Intelligence and Algorithmic Examples (Unknown Authors, Unknown Journal)
The Impact of AI Technologies on Human Psychology, Behavioral Tendencies, and Criminal Motivation
This group of works focuses on the psychological effects AI exerts on individuals during interaction, including AI-induced suicide, psychological projection and transference, the link between psychopathic traits and AI misuse, and how AI reshapes patterns of juvenile criminal behavior.
- Exploring The Legal Subjectivity of Artificial Intelligence in Incitement to Suicide(Zhaoxun Cao, Ramalinggam Rajamanickam, Nur Khalidah Dahlan, 2024, Jurnal IUS Kajian Hukum dan Keadilan)
- Ethical and legal aspects of artificial intelligence use in psychological practice: analysis and regulatory approaches(Il'ya Afanas'ev, Irina Afanas'eva, 2025, Scientific notes of P. F. Lesgaft University)
- Are Psychopathy Traits Related to the Use of Artificial Intelligence Tools Among University Students? The Mediating Effect of Self-control(Joaquín Rodríguez-Ruiz, Raquel Espejo-Siles, Inmaculada Marín-López, 2024, Deviant Behavior)
- Artificial Intelligence and its Impact on Criminal Behavior in Juveniles(Elda Shurdhi, 2026, Interdisciplinary Journal of Research and Development)
- Artificial Intelligence in Mental Control: the dialectical antilogies(V. Popov, 2024, Communicology)
Theoretical Debates on the Legal Personhood and Free Will of Artificial Intelligence
These works examine in depth whether artificial intelligence (especially strong or general AI) can qualify as a subject of criminal liability, centering on core jurisprudential questions such as free will, the capacity for discernment and control, and models of AI personhood.
- A Jurisprudential Critique of Strong AI's Qualification as a Criminal Subject and the Improvement of Attribution Pathways (Unknown Authors, Unknown Journal)
- Criminal Law Regulation of AI-Related Crime (Unknown Authors, Hans Publishers)
- On Denying Artificial Intelligence the Status of a Criminal Subject (Unknown Authors, Hans Publishers)
- The Legal Subject Status of Artificial Intelligence: Dilemmas and the Way Out (Unknown Authors, Unknown Journal)
- A Negation of Artificial Intelligence's Qualification as a Legal Subject (Unknown Authors, Unknown Journal)
- Research on Theories of the Legal Personhood of Artificial Intelligence (Unknown Authors, Hans Publishers)
- Criminal Attribution and Imputation for Crimes Committed by Autonomous Artificial Intelligence (Unknown Authors, Unknown Journal)
- A Preliminary Study of Criminal Policy for the Operation of Humanoid Robots (Unknown Authors, Hans Publishers)
- NEW PROPOSED PERSONALITY MODEL FOR ARTIFICIAL INTELLIGENCE: INTEGRATED PERSONALITY(Şerafettin Ekici, 2025, Bilişim Hukuku Dergisi)
- Problems of Artificial Personality (Artificial Intelligence) Control(O. Gurov, A. V. Sherstov, 2023, Journal of Digital Economy Research)
Criminal Attribution and Legislative Responses in Specific AI Crime Scenarios
These works address concrete risks such as autonomous-driving accidents, AI-generated obscene and pornographic content, AI-enabled fraud, and infringement by generative AI, examining the allocation of criminal liability among developers, manufacturers, users, and regulators, the determination of negligence, and the application of punishment.
- Research on the Criminal Punishability of Artificial Intelligence Agents (Unknown Authors, Hans Publishers)
- The Dilemma of Criminal Liability for Generating Pornography Using Artificial Intelligence (Marcin Niedbała, 2023, Krytyka Prawa)
- Determining Criminal Liability for Autonomous Vehicle Accidents and Improving the Attribution System (Unknown Authors, Hans Publishers)
- Research on the Criminal Law Regulation of AI Negligence Risks: From the Perspective of the New Negligence Theory (Unknown Authors, Unknown Journal)
- A Review of Artificial Intelligence's Qualification as a Subject of Criminal Liability (Unknown Authors, Hans Publishers)
- Identifying the Liable Subject in Fraud Committed with Generative Artificial Intelligence (Unknown Authors, Hans Publishers)
- A Brief Analysis of Criminal Liability Subjects and the Bearing of Liability in the Era of Weak Artificial Intelligence (Unknown Authors, Unknown Journal)
- On Administrative Law Enforcement in the Age of Artificial Intelligence (Unknown Authors, Hans Publishers)
- Criminal Law Issues in AI Fraud and the Limits of Intervention: The Fuzhou "AI Face-Swapping Fraud" Case as an Example (Unknown Authors, Unknown Journal)
- Criminal Liability of Generative AI Providers from the Perspective of Causation Theory (Unknown Authors, Hans Publishers)
- Subjects of Criminal Liability in AI-Related Crime (Unknown Authors, Hans Publishers)
- On the Legal Personhood of Artificial Intelligence Robots (Unknown Authors, Hans Publishers)
- Criminal Law Safeguards against New Types of Crime in the Context of the Digital Economy (Unknown Authors, hanspub.org)
Taken together, these papers explore the multidimensional relationship between artificial intelligence and criminal cognition. The research spans from foundational cognitive neuroscience (e.g., EEG feature identification and the treatment of antisocial personality) through behavioral psychology (e.g., how AI induces or shapes individual criminal motivation) to macro-level legal theory, including the jurisprudential controversy over AI's status as a criminal subject and, across the weak-AI and strong-AI stages of development, the division of criminal liability and regulatory strategies for specific technological risks such as autonomous driving and generative content. Overall, the collection reflects a deep interdisciplinary interweaving of technology, psychology, and law.
A total of 34 related references.
First, developers and manufacturers must ensure that artificial intelligence remains under human control and does not commit unlawful acts that harm others or society. AI has the capacity for self-learning; once it escapes human control, it may well commit unlawful acts. Therefore, developers and ...
Qualification-based penalties primarily target the rights held by citizens; since AI lacks citizenship, the precondition for applying such penalties is absent. Finally, fines are inapplicable to AI: for one thing, AI has no independent property of its own and thus does not satisfy the conditions for applying ...
This paper examines the subjects of criminal liability and the bearing of liability in the era of weak artificial intelligence. The criminal risks brought by the rapid development of AI technology pose new challenges to traditional theories of criminal liability. The paper first defines weak artificial intelligence ...
Scholars generally agree that weak AI can operate only within its predetermined algorithms and therefore cannot be regarded as a subject of criminal liability. The criminal-subject status of strong AI remains contested, including whether it possesses the capacity for discernment and control and whether the question can even be meaningfully discussed at present ...
When AI agents are developed, produced, sold, or used without criminal intent but serious consequences nevertheless result, the relevant actors should bear negligence liability. Under Article 15 of China's Criminal Law ...
Based on the four points analyzed above, this paper holds that artificial intelligence cannot and should not become a subject of criminal liability. As the division of labor in modern society grows ever more specialized, parties competing for their interests are eager to avoid bearing responsibility and will therefore use various means to ...
The negative view holds that AI can only be a legal object, not a legal subject: from the basic standpoint of anthropocentrism, the core of criminal liability lies in the reproach of conduct by subjects possessing free will, and AI, lacking genuine ...
The rapid development of generative AI has brought criminal law dilemmas such as harm to legal interests, challenges to the traditional theory of criminal liability subjects, and difficulties in determining liability. Analyzed from the perspectives of the theory of crime and the theory of punishment, generative AI should be denied ...
On the other hand, granting strong AI agents the status of criminal liability subjects could realize the tasks and functions of criminal law by regulating human conduct. If strong AI agents held criminal-subject status, we would be prompted to reconsider whether acts that harm strong AI agents ...
The capacity for responsibility is tied not only to free will but also closely to moral emotion. If AI cannot possess moral emotions as humans do, it lacks the capacity to bear criminal liability, for an AI without moral emotion remains ...
Compared with the present era of weak AI, the most salient feature of the strong-AI era is that humanoid robots possess autonomous, independent behavioral consciousness. As discussed above, seriously socially harmful acts committed by humanoid robots at that stage should be treated as matters of criminal liability ...
When AI-caused harm occurs, the demand for answerability means that an appropriate liability mechanism must be ensured [4]. According to Jakobs, responsibility is merely a "derivative of general prevention" and should be understood as "training loyalty to law" and as "acquiring ...
... the attributes of artificial intelligence, yet it should still be regarded as an extension of the offender's own criminal conduct, and the AI's operator should bear full criminal liability; this is beyond doubt. Strong AI, by contrast, differs from weak AI ...
For objects of perception with salient features, cognition can be achieved through one or more sensory organs; for objects with inconspicuous or complex features, perception requires longer and more frequent exposure. This has been demonstrated in neuroscience and belongs to ...
Professor Wu Handong holds that artificial intelligence is an applied system for simulating, extending, and expanding human intelligence: it models and displays human intellectual capacities, can remember, cognize, recognize, and choose as humans do, and makes virtual interaction between humans and robots equivalent to interaction between humans and ...
Current research generally holds that AI achieves basic emotion-recognition functions through semantic network analysis, situational simulation, and algorithmic inducement. Machines can generate emotional feedback from user input (e.g., Replika's "emotional algorithm" deepens transference through cognitive mirroring [ ...
AI technology has been evolving at an accelerating pace in recent years, and clarifying its legal subject status has become the key to resolving complex AI legal issues. This paper examines the theoretical foundations, practical needs, core dilemmas, and ways forward regarding the legal subject status of artificial intelligence.
Based on the foregoing, the author holds that AI agents cannot yet serve as subjects bearing criminal liability under criminal law. Having denied AI agents criminal-subject status, the negligence risks of AI will fall to the AI agent's developers ...
If artificial intelligence were treated as a subject, the objects of its conduct would be humans or other AI. This could not only cause the collapse of the existing legal system but could also fail to achieve the purpose of legislation. For example, criminal legislation exists to ...
In addition, AI robots enjoy limited rights when handling civil legal relations. As for bearing criminal liability, AI needs to be brought into a systematic insurance or trust scheme to safeguard victims' rights and compensate their losses.
... under the weak-AI theory, artificial intelligence is merely a tool of human production and life, so the consequences of its conduct should be borne, criminally or civilly, by the person who uses or operates it [9]. The second view is the natural-probable-consequence mode of liability ...
By analyzing the difficulties in determining criminal liability for autonomous-driving accidents, this paper dissects the division of criminal liability among manufacturers, users, and third parties, and proposes a "dynamic attribution theory": liability weights are not fixed but should vary with the level of driving automation and the circumstances of the accident ...
This paper focuses on the application and future directions of neuroscience in two key areas of the criminal justice system: the determination of criminal responsibility, and the risk assessment and correction of incarcerated individuals. Research in neurolaw has not only deepened our understanding of ...
Artificial intelligence (AI) has significantly transformed the technological environment in which minors interact with digital platforms, information systems and social networks. While these technological developments offer significant opportunities in education, communication and digital participation, they also generate new risks related to juvenile behavior and cyber-related criminal activities. This paper examines the role of artificial intelligence in shaping the criminal behavior of minors by analysing technological, psychological and legal factors that influence their involvement in unlawful activities within digital environments. Through a qualitative and doctrinal legal analysis, the study evaluates the Albanian legal framework in comparison with relevant European and international standards, including the European Union Artificial Intelligence Act and the Convention on the Rights of the Child. The research highlights several normative gaps in the current legal framework, particularly concerning crimes facilitated by artificial intelligence and the determination of criminal liability when minors interact with autonomous technological systems. The study concludes that stronger legal regulation, institutional cooperation and digital education policies are necessary to ensure effective protection of minors and to prevent deviant behaviour in the digital age.
The development of conversational artificial intelligence (AI) has not only brought about technological innovations but has also given rise to legal issues. The phenomenon of AI-induced suicide highlights the multifaceted legislative demands within the criminal domain for AI. In-depth research into the issues of suitability concerning suicide victims, AI, and regulatory entities becomes particularly necessary. Through literature analysis and comparative legal analysis, this article aims to provide theoretical support for the legal delineation of liability in the context of AI incitement to suicide. Specifically, this article conducts a thorough investigation and comprehensive analysis of relevant legal literature both in China and internationally. The objective is to clarify the legal positions and real challenges surrounding the issue of AI incitement to suicide. Consequently, this article explores whether AI should be considered a legal subject and how, in different contexts, suicide victims and AI regulatory entities should share corresponding responsibilities. As for the findings, AI should not be regarded as an independent legal subject. Based on the theories of victim self-entrapment risk and omission in criminal law, in various situations, suicide victims or AI regulatory entities should bear corresponding responsibilities for the events of incitement to suicide. By delving into the legal liability issues of AI in incitement to suicide, this article provides a theoretical basis for comprehensive AI legislation in the future, demonstrating theoretical innovation. Furthermore, the exploration of criminal legal regulation contributes to the construction of a more comprehensive and rational legal framework for AI.
Emerging Therapies for Antisocial Personality Disorder: Psychotherapeutic and Technological Advances
Antisocial Personality Disorder (ASPD) is a severe mental health condition characterized by persistent antisocial behaviors, lack of empathy, and high societal costs due to criminality and poor treatment engagement. Traditional therapies like Cognitive Behavioral Therapy (CBT) often fail due to low patient motivation and systemic barriers. This paper, through a comprehensive literature review, explores emerging psychotherapeutic (e.g., Schema Therapy and Mentalization-Based Therapy) and technological interventions (e.g., Virtual Reality, AI-driven tools) for ASPD. The review highlights how these innovations target core deficits such as emotional dysregulation and impaired social cognition, offering scalable and engaging alternatives to conventional methods. Key findings suggest that VR-based perspective-taking improves empathy, while AI predictive modeling enhances relapse prevention. The study underscores the need for interdisciplinary collaboration to optimize these therapies, advocating for hybrid models integrating neuroscience, psychology, and technology to address ASPD's complexity.
The process of advancement of artificial intelligence (hereinafter: AI) seen in recent years results in breakthroughs in many areas of human life, such as medicine, agriculture, and science. However, despite its many advantages and benefits, it also inevitably creates room for abuse. This is particularly true of child pornography, where AI systems are increasingly being used to generate nude images of minors — both existing in reality and fictional. These cases, given much attention in the public debate, cause doubts as to whether existing normative solutions are suitable to combat this new phenomenon. The article aims to answer the question of whether the existing law makes it possible to successfully prosecute perpetrators who use AI systems to generate child pornography (including that depicting fictional characters). In addition, it also offers an analysis of the criminal liability of the developers of the AI systems used in this process and of the hosting providers managing the websites used to distribute such pornographic content.
The classification of artificial intelligence (AI) systems is a multifaceted process, encompassing a wide range of criteria. Among these classifications, the categorisation based on capabilities, autonomous movement capacity, and cognitive capacity holds particular significance in the context of discussions concerning the recognition of personality in AI. Systems that embody the characteristics of ‘Artificial General Intelligence (AGI)’ and ‘Artificial Superintelligence (ASI)’ in the classification based on capabilities, ‘Fully Autonomous AI (FAA)’ in the classification based on autonomous movement capacity, and ‘Self-Aware Systems (SAS)’ in the classification based on cognitive capacity should be recognised as Integrated Personality (InPer). The AI system that has been granted InPer will be designated InPerAI. It is important to note that InPerAI is not an independent personality, but must be integrated into a ‘Main Person (MaPer)’, which is a natural person or legal entity. InPerAI may be authorised by MaPer to perform certain tasks and operations. Based on this authorisation, the provisions regarding direct representation authority will apply to the transactions made by the InPerAI. Consequently, the rights and obligations acquired by InPerAI shall belong to MaPer. In terms of InPerAI’s tort liability, it is argued that an objective duty of care, akin to the ‘liability of owners of dangerous animals’, should be established. Furthermore, it is contended that MaPer should be able to exonerate itself from liability by demonstrating that it has taken every precaution or that the damage is attributable to the actions of other parties. In addition, it is posited that criminal liability for offences committed by InPerAI should also be attributed to MaPer. However, MaPer should be fully or partially absolved of criminal liability if he/she/it can demonstrate that, despite having taken every precaution, it could not prevent the commission of the offence, or that the offence was caused by the production or responsibility of another person. In the event that the problem is attributed to production, the manufacturer should also be held criminally liable.
The article examines the ethical and legal aspects of the use of artificial intelligence (AI) in psychological practice, including the risks of dehumanization of therapeutic relationships, data privacy issues, and legislative gaps. The purpose of the study is to conduct a comprehensive analysis of the legislative and ethical issues related to the application of artificial intelligence in psychological practice, including the operation of psychological assistance services, and to propose measures for harmonizing regulation and minimizing risks. Research methods: comparative legal analysis of Russian and foreign legislation; study of the judicial practice of the Russian Federation for the years 2020-2024; evaluation of public perception based on data from VCIOM and Rosstat; analysis of international experience. Research results and conclusions. Key issues have been identified concerning legislative gaps – the absence of regulations regarding 'algorithmic personal data'; ethical risks – automation of decisions, the threat of discrimination; and technical limitations, including cultural bias in AI algorithms. Amendments to Federal Law No. 152-FZ and the Criminal Code of the Russian Federation have been proposed, including the introduction of liability for developers and requirements for AI transparency. Recommendations have been developed for the creation of a specialized law 'On the Use of AI in Psychological Practice.' To minimize risks associated with ethical issues in psychological aid services, it is necessary to adopt amendments to the legislation; develop GOST R 'AI in the Work of Psychological Aid Services'; and establish interdisciplinary working groups aimed at comprehensive analysis of algorithmic bias, development of culturally adapted models, and monitoring compliance with ethical standards.
The most informative electroencephalographic (EEG) features related to basic Executive Functions (EFs) were identified by exploiting SHapley Additive exPlanations (SHAP), an eXplainable Artificial Intelligence (XAI) method. In particular, Working Memory (WM) and Inhibitory Control (IC) were tested by exposing participants to N-Back and Go-NoGo tasks. Currently, XAI methods are scarcely investigated in the framework of EEG signal processing, and specific EEG features of EF activation are still poorly explored. EEG data from 13 healthy participants were gathered during a rest condition and during cognitive task execution; the N-Back (activating WM) and Go-NoGo (activating IC) tasks were administered at two difficulty levels. SHAP identified absolute Delta power in Fp1 and Fp2 as the two most significant features for both WM and IC when the two experiments were analyzed separately. The same two features emerged as the most significant when SHAP was applied to a classification problem involving the two tasks at the highest difficulty level. Absolute Delta power in Fp1 and Fp2 was statistically significantly lower in the Go-NoGo task than in the N-Back task. These results are consistent with a condition of suppression of irrelevant sensory stimuli common to the two tasks, and particularly to the more challenging N-Back task. The research contributes to the development of solutions for the cognitive rehabilitation of patients suffering from neurodegenerative diseases.
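SHAP scores each input feature by its Shapley value: its average marginal contribution to the model's prediction over all feature coalitions. The `shap` library approximates this efficiently for real classifiers; purely as an illustration of the underlying computation, the sketch below evaluates exact Shapley values for a tiny hypothetical linear "classifier score" over three made-up band-power features, imputing absent features from a baseline (e.g., rest-condition means). All names, weights, and numbers here are illustrative assumptions, not the study's pipeline or data.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attribution: features outside a coalition are
    replaced by their baseline value before calling the model."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Shapley kernel weight |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(others, size):
                with_i = [x[j] if j in subset or j == i else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Hypothetical linear score over three made-up band-power features
# (delta power at Fp1, delta power at Fp2, theta power at Fz).
weights = [0.8, 0.7, 0.1]
predict = lambda feats: sum(w * f for w, f in zip(weights, feats))

x = [2.0, 1.5, 1.0]          # features for one trial (illustrative)
baseline = [1.0, 1.0, 1.0]   # rest-condition means (illustrative)
phi = shapley_values(predict, x, baseline)
# For a linear model, phi[i] = weights[i] * (x[i] - baseline[i]).
```

For a linear model the attributions reduce to `w_i * (x_i - baseline_i)`, and they always sum to `predict(x) - predict(baseline)` — the efficiency property that makes SHAP rankings of EEG features interpretable.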
Based on the “brakes” model of self-control, this study sought to analyze whether psychopathy traits are related to Artificial Intelligence use, and whether low levels of self-control mediate this hypothetical relation. A cross-sectional study involving 1687 undergraduate students was conducted. Psychopathy traits predicted more use of Artificial Intelligence, as well as using these tools to do academic work in the student's place and to create fake content. However, self-control mediated only the link between psychopathy traits and the frequency of Artificial Intelligence use, suggesting that the misuse of Artificial Intelligence tools might be planned rather than impulsive behavior.
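The mediation claim in this abstract (self-control mediating the psychopathy–AI-use link) is typically estimated with the product-of-coefficients approach: regress the mediator on the predictor (path a), regress the outcome on predictor and mediator together (direct effect c′ and path b), and take a·b as the indirect effect. A minimal sketch on made-up, noiseless scores; the variable names, coefficients, and data are illustrative assumptions, not the study's:

```python
def slope(xs, ys):
    """OLS slope of ys on xs (single predictor, intercept implied)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def slopes2(xs, ms, ys):
    """OLS slopes of ys on two predictors (intercept implied),
    via the centered 2x2 normal equations and Cramer's rule."""
    n = len(xs)
    mx, mm, my = sum(xs) / n, sum(ms) / n, sum(ys) / n
    dx = [x - mx for x in xs]
    dm = [m - mm for m in ms]
    dy = [y - my for y in ys]
    sxx = sum(d * d for d in dx)
    smm = sum(d * d for d in dm)
    sxm = sum(p * q for p, q in zip(dx, dm))
    sxy = sum(p * q for p, q in zip(dx, dy))
    smy = sum(p * q for p, q in zip(dm, dy))
    det = sxx * smm - sxm * sxm
    return (sxy * smm - smy * sxm) / det, (sxx * smy - sxm * sxy) / det

# Toy, noiseless scores: X = psychopathy, M = self-control,
# Y = frequency of AI-tool use (all hypothetical).
X = [0.0, 1.0, 2.0, 3.0]
M = [-0.5 * x + e for x, e in zip(X, [1.0, -1.0, -1.0, 1.0])]
Y = [0.4 * x - 0.6 * m for x, m in zip(X, M)]

a = slope(X, M)                # path a: psychopathy -> self-control
total = slope(X, Y)            # total effect c
c_prime, b = slopes2(X, M, Y)  # direct effect c' and path b
indirect = a * b               # mediated (indirect) effect
```

Here the total effect decomposes exactly into direct plus indirect (c = c′ + a·b) because the toy data are noiseless; in real studies the indirect effect's significance is usually assessed with bootstrap confidence intervals.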
The paper examines the influence of technology on human consciousness and the dialectic of interaction between artificial intelligence (AI) and human intelligence within the system of information technologies, the mentality and psychology of society, and control over people's minds and technologies. The emphasis is on the essence of the concepts of mentality and mental identity, the role of informal institutions, and the differing speeds of change in biological and informational time. The author presents a conceptual view of the role of artificial intelligence in the development of mental management as a humanitarian problem, following the logic of ideas and approaches previously developed by the author in The Future of Russia: Transition to a New Formation and Strategy of Reforms in Russia: From Leader to Leader, and in the context of the ideas of social justice and economic growth discussed at the First Russian Economic Forum.
This article examines the problem of the specificity and functions of mental control from two main perspectives: (1) from the standpoint of natural scientific explanation; (2) within a socio-psychological and socio-humanitarian context. The first approach employs an information-based framework to address the question of how phenomena of subjective reality can serve as causes of physical changes. The distinction between informational and physical causality is elucidated, providing a justification for psychic (mental) causality as a form of informational causality. Within this context, the article discusses issues of information encoding and decoding, recent advances in neuroscience regarding the deciphering of brain codes for mental phenomena, and pertinent results from genomics. The author explores the significance of these developments for substantiating free will and self-determination processes, as well as for developing new artificial intelligence systems capable of emulating natural intelligence functions. Building upon these foundations, the second aspect of the discussion examines the role of mental control in interpersonal and mass communications, as well as in the functioning of institutional entities. To this end, the article investigates the relationships between individual consciousness and mass, institutional consciousness, the importance of national leadership in extreme situations, the phenomenon of polysubjectivity, and related questions concerning the optimal balance between centralization and autonomization of control mechanisms. The author demonstrates the inevitability of overcoming the principle of monopolarity, which inhibits autonomization functions and thereby undermines the foundations of global social self-organization. Finally, the article briefly addresses a number of salient issues pertaining to the meaning, objectives, and outcomes of mental control acts, as well as their socio-humanitarian implications and evaluation.
Today, a number of researchers representing both technical knowledge and the humanities believe that it is necessary to endow Artificial Intelligence with subjective “human” qualities, including the capacity for self-awareness and for free choice. In this regard, the problem of AI autonomy becomes extremely relevant, and with it the AI creator's rights and ability (or lack thereof) to retain control over the AI. Within this framework the Artificial Personality project has been developing over the past 20 years. Despite active scientific and social work involving a remarkable interdisciplinary community, the project is far from complete. The present article summarizes the research conducted on the conceptualization of Artificial Personality and demonstrates that the fundamental possibility of creating an Artificial Personality has not yet been convincingly proven. Nor has a single, generally accepted approach been formulated to promising methods and technologies for implementing and embodying an Artificial Personality, so at the current stage the study of Artificial Personality remains largely abstract theoretical research. The authors conclude that it is reasonable today to draw on the results of studies of natural personality and natural intelligence, transferring methods that have shown relative effectiveness in existing manifestations of real social life to the task of conceptualizing Artificial Personality. The proposed approach will help create a theoretical and methodological foundation for further research and for the eventual implementation of Artificial Personality projects.