Financial Restatement Risk Prediction Based on Explainable Machine Learning
Theoretical Frameworks and Professional Implications of Explainable AI in Financial Auditing and Compliance
This group of papers examines the macro-level context for applying XAI in finance, including its impact on professional judgement in accounting, the mechanisms through which it builds user trust, and its importance for regulatory compliance.
- Evaluating the Impact of Explainable AI on User Trust in Financial Decision-Support Systems(Ramya Mandava, S. Vellela, Shobana Gorintla, Lavanya Dalavai, Nallapu Malathi, Koya Haritha, 2025, 2025 International Conference on Computational Robotics, Testing and Engineering Evaluation (ICCRTEE))
- The Role of Explainable AI in Enhancing Trust and Decision-Making in Financial Services(Ian Staley, 2025, Journal of Applied Finance & Banking)
- What accountants need to know about artificial intelligence and machine learning: a review and call for future research(Stewart Jones, Clinton Free, 2026, Journal of Accounting Literature)
- Rethinking explainable AI in financial services(Rita Pimentel, G. Pisoni, 2025, AI & SOCIETY)
- Reimagining Compliance: Explainable AI Models for Financial Regulatory Audits(Hrishikesh Desai, 2025, SSRN Electronic Journal)
Comparative Studies and Performance Evaluation of Explainability Techniques in Financial Fraud Detection
This group compares multiple explainability techniques, including SHAP, LIME, and counterfactual explanations, and examines the trade-off between model accuracy and interpretability, particularly in continuous-auditing environments.
- A Comparative Study of Explainable Artificial Intelligence (Xai) Techniques in Financial Auditing Applications(Venkatasubramanian Ganapathy, 2025, Edumania-An International Multidisciplinary Journal)
- Interpretable machine learning models for financial fraud detection using explainable AI(Kawalpreet Kaur, Rashmi Chaudhary, 2025, AIP Conference Proceedings)
- CONTINUOUS AUDITING AND EXPLAINABLE AI FOR ENHANCING REAL TIME FINANCIAL ANALYSIS(Sheva Rani Wibowo, A. Wibowo, 2025, Jurnal Akuntansi dan Bisnis)
- Model interpretability of financial fraud detection by group SHAP(Kang Lin, Yuzhuo Gao, 2022, Expert Syst. Appl.)
- Explainable AI For Fraud Detection in Financial Transactions(Bhanu Duggal, 2025, INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT)
Restatement and Fraud Risk Prediction Models Based on Ensemble Learning and Feature Optimization
This group focuses on concrete model-construction methods, such as XGBoost, LightGBM, and stacking ensembles, combined with tools such as SHAP for identifying feature importance, to improve the precision of financial statement fraud and risk prediction.
- Financial Statement Fraud Detection Through an Integrated Machine Learning and Explainable AI Framework(Tsolmon Sodnomdavaa, Gunjargal Lkhagvadorj, 2025, Journal of Risk and Financial Management)
- Explainable AI (XAI) Transparency for Financial Fraud Detection(Md. Golam Kibriya, Nasrullah Masud, Abdullah Al Bassam, Md Tanvir Islam Sourav, Abdus Sobhan, 2025, 2025 International Conference on Quantum Photonics, Artificial Intelligence, and Networking (QPAIN))
- An Explainable AI-based Fraud Detection System Using Recursive Feature Elimination and Waterwheel Plant Optimization for Financial Transactions(H. Hajiyev, Emil Hajiyev, Mirzobek Avezov, S. Makhmudov, Dilora Abdukhalikova, E. L. Lydia, 2025, Engineering, Technology & Applied Science Research)
- Financial Fraud Detection Using Explainable AI and Stacking Ensemble Methods(F. Almalki, Mehedi Masud, 2025, ArXiv)
- SHAP-Instance Weighted and Anchor Explainable AI: Enhancing XGBoost for Financial Fraud Detection(Putthiporn Thanathamathee, Siriporn Sawangarreerak, Siripinyo Chantamunee, Dinna@Ninna Mohd Nizam, 2024, Emerging Science Journal)
- Explainable AI (XAI) Using SHAP and LIME for Financial Fraud Detection and Credit Scoring(Sophia John Chavakula, Christopher Aseer J Albert, Earnest Ebenezer, Mustansir Habil Bhagat, C. Mahamuni, 2025, 2025 International Conference on Advanced Computing Technologies (ICoACT))
Federated Learning Approaches to Financial Prediction That Balance Privacy Protection and Transparency
Addressing the sensitivity of financial data, this group studies how federated learning can be combined with explainable AI (XAI) to enable cross-institutional collaborative fraud detection while preserving privacy.
- Secure and Transparent Banking: Explainable AI-Driven Federated Learning Model for Financial Fraud Detection(Saif Khalifa Aljunaid, S. Almheiri, Hussain Dawood, Muhammad Adnan Khan, 2025, Journal of Risk and Financial Management)
- An Explainable and Privacy-Preserved Machine Learning Framework for Financial Fraud Detection(A. Patan, 2025, International Journal for Research in Applied Science and Engineering Technology)
- Federated Learning and Explainable AI for Decentralized Fraud Detection in Financial Systems(Bhasker Reddy Ande, 2025, Journal of Information Systems Engineering and Management)
- Financial Fraud Detection Using Explainable AI and Federated Learning(Mrs. D. Aswani, 2025, International Journal for Research in Applied Science and Engineering Technology)
Domain-Specific Risk Assessment Combining Unstructured Data and Multidimensional Indicators
This group broadens the sources of predictive data, drawing on textual data from 10-K annual reports, environmental disclosures, healthcare billing data, and more, and applies deep learning and natural language processing for more comprehensive financial and operational risk assessment.
- Explainable AI for Comprehensive Risk Assessment for Financial Reports: A Lightweight Hierarchical Transformer Network Approach(X. Tan, Stanley Kok, 2025, ArXiv)
- Machine learning detection of manipulative environmental disclosures in corporate reports(Yuanzhe Li, Junyuan Li, Yutong Zheng, Gangbing Zheng, Chen‐Hui Wu, 2025, Scientific Reports)
- AI-Driven Machine Learning for Fraud Detection and Risk Management in U.S. Healthcare Billing and Insurance(Raktima Dey, Ashutosh Roy, Jasmin Akter, Aashish Mishra, Malay Sarkar, 2025, Journal of Computer Science and Technology Studies)
- Enhancing Financial Risk Assessment through Explainable AI: A SHAP-Based Approach for Transparent Decision-Making(K. S. Srivalli, D. Sumanthi, 2025, SSRN Electronic Journal)
Taken together, these papers map out a comprehensive explainable-machine-learning approach to financial risk prediction along five dimensions: theoretical frameworks (emphasizing trust and compliance), methodological comparison (evaluating techniques such as SHAP and LIME), model construction (ensemble learning and feature optimization), privacy protection (federated learning combined with XAI), and multi-source data fusion (textual and multidimensional indicators). The overall trend shows research shifting from the pursuit of raw predictive accuracy toward intelligent financial monitoring systems that are transparent, trustworthy, compliant with regulatory requirements, and capable of handling complex unstructured data.
24 related publications in total.
Detecting manipulative environmental disclosures remains a critical yet unresolved challenge for regulators and investors. This study proposes a machine learning framework that integrates financial indicators, textual sentiment, and public attention data to identify potential manipulation among Chinese listed firms. A Random Forest model is trained using multi-source features derived from corporate reports and Baidu Index trends. The optimized model demonstrates strong discriminatory ability under severe class imbalance (ROC-AUC = 0.94, PR-AUC = 0.78, Balanced Accuracy = 0.86, MCC = 0.72), indicating robust and reliable performance across both majority and minority classes. Evaluation through balanced metrics further confirms the model’s genuine predictive capacity rather than overfitting to training data. SHAP-based interpretation reveals that financial pressure, abnormal public attention, and sentiment deviation are the primary determinants of manipulation risk. Overall, the framework highlights how interpretable machine learning can strengthen data-driven environmental supervision. The findings are context-specific to the Chinese market due to reliance on Baidu-based indicators, warranting validation in other regulatory contexts in future research.
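As a rough, self-contained illustration of the pipeline this abstract describes, the sketch below trains a class-weighted Random Forest on synthetic imbalanced data and reports the same imbalance-robust metrics; the data and any feature semantics are invented stand-ins, not the paper's actual financial, sentiment, or Baidu Index features.

```python
# Minimal sketch (synthetic data): Random Forest under class imbalance,
# evaluated with the balanced metrics the abstract reports, plus SHAP.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (average_precision_score, balanced_accuracy_score,
                             matthews_corrcoef, roc_auc_score)
from sklearn.model_selection import train_test_split

# ~5% positive class to mimic the severe imbalance of manipulation cases.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
pred = clf.predict(X_te)

print("ROC-AUC :", round(roc_auc_score(y_te, proba), 3))
print("PR-AUC  :", round(average_precision_score(y_te, proba), 3))
print("Bal. acc:", round(balanced_accuracy_score(y_te, pred), 3))
print("MCC     :", round(matthews_corrcoef(y_te, pred), 3))

# SHAP attribution of individual predictions (requires the `shap` package);
# the output layout varies by shap version (per-class list or 3-D array).
import shap
shap_values = shap.TreeExplainer(clf).shap_values(X_te)
```

Ranking features by mean absolute SHAP value would then play the role of the paper's analysis of the primary determinants of manipulation risk.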
Financial statement fraud remains a substantial risk in environments marked by weak regulatory oversight and information asymmetry. This study develops a decision-centric framework that integrates machine learning, explainable artificial intelligence, and decision curve analysis to improve fraud detection under severe class imbalance. Using 969 firm-year observations from 132 Mongolian firms (2013–2024), we evaluate 21 financial ratios with models including Random Forest, XGBoost, LightGBM, MLP, TabNet, and a Stacking Ensemble trained with SMOTE and class-weighted learning. Performance was assessed using PR-AUC, F1-score, Recall, and DeLong-based significance testing. The Stacking Ensemble achieved the strongest results (PR-AUC = 0.93; F1 = 0.83), outperforming both classical and modern baseline models. Interpretability analyses (SHAP, LIME, and counterfactual explanations) consistently identified leverage, profitability, and liquidity indicators as dominant drivers of fraud risk, supported by a SHAP Stability Index of 0.87. Decision curve analysis showed that calibrated thresholds improved decision efficiency by 7–9% and reduced over-audit costs by 3–4%, while an audit cost simulation estimated annual savings of 80–100 million MNT. Overall, the proposed ML–XAI–DCA framework offers a transparent, interpretable, and cost-efficient approach for enhancing fraud detection in emerging-market contexts with limited textual disclosures.
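To make the model stage concrete, here is a minimal sketch of a SMOTE-resampled stacking ensemble scored with PR-AUC and F1, as the abstract describes; the synthetic features stand in for the paper's 21 financial ratios, and the base-learner lineup is a reduced, illustrative subset.

```python
# Sketch: SMOTE + stacking ensemble for imbalanced fraud labels.
# Synthetic data; not the paper's Mongolian firm-year panel.
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, f1_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=4000, n_features=21, weights=[0.9],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
X_res, y_res = SMOTE(random_state=1).fit_resample(X_tr, y_tr)  # balance classes

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=1)),
                ("xgb", XGBClassifier(eval_metric="logloss")),
                ("lgbm", LGBMClassifier(verbose=-1))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_res, y_res)

proba = stack.predict_proba(X_te)[:, 1]
print("PR-AUC:", round(average_precision_score(y_te, proba), 3))
print("F1    :", round(f1_score(y_te, stack.predict(X_te)), 3))
```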
Banking and online financial service providers face significant challenges due to financial fraud. Traditional fraud detection methods are often inadequate because of imbalanced datasets, limited interpretability, and privacy concerns involving confidential customer information. This paper presents an explainable AI–based system for financial fraud detection designed to address these issues. The system employs the Light Gradient Boosting Machine (LightGBM) as the primary model, combined with SMOTE oversampling to mitigate class imbalance. Privacy is maintained by anonymizing sensitive features, including Personally Identifiable Information (PII), by temporarily adding and later removing attributes such as name_email_similarity before model training. Model transparency is achieved through SHAP (Shapley Additive Explanations), which offers feature-level interpretability for fraud predictions. The system is implemented as a web-based interactive dashboard using the Flask framework, enabling users to upload datasets, perform fraud detection, adjust detection sensitivity (via threshold tuning), and download a detailed fraud report. When evaluated on a real-world dataset, the system achieved an overall accuracy of 98.5%, an ROC-AUC of 0.89, improved privacy preservation, and enhanced interpretability through SHAP. The proposed solution provides a practical end-to-end framework that balances accuracy, transparency, and privacy protection, making it suitable for banking and fintech fraud-detection applications.
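The dashboard's sensitivity control boils down to moving the decision threshold on the model's fraud scores. A small sketch under that assumption (synthetic data, no Flask layer):

```python
# Sketch: LightGBM fraud scorer with an adjustable decision threshold,
# mirroring the dashboard's sensitivity slider. Synthetic data only.
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.97], random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=2)
model = LGBMClassifier(verbose=-1).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

# Lowering the threshold raises recall (catch more fraud) at the cost of
# precision (more false alarms) -- the trade-off a sensitivity control exposes.
for threshold in (0.5, 0.3, 0.1):
    flagged = (scores >= threshold).astype(int)
    print(f"t={threshold}: precision={precision_score(y_te, flagged):.2f} "
          f"recall={recall_score(y_te, flagged):.2f}")
```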
Healthcare fraud in the United States results in billions of dollars in financial losses annually, necessitating advanced technological solutions for fraud detection and risk management. Machine learning (ML) has emerged as a powerful tool in identifying fraudulent claims, mitigating risks, and enhancing financial security in healthcare billing and insurance (Anderson & Kim, 2023). This study examines the application of supervised and unsupervised ML techniques, such as decision trees, neural networks, and anomaly detection models, to detect fraudulent patterns in insurance claims (Wang et al., 2022). By analyzing large-scale electronic health records (EHRs) and claims datasets, ML algorithms can identify suspicious activities and reduce false positives, improving fraud detection accuracy (Garcia & Lee, 2023). Additionally, predictive analytics aids in risk assessment, enabling insurers and healthcare providers to proactively manage financial fraud risks (Brown et al., 2023). Despite its advantages, ML-based fraud detection systems face challenges, including data privacy concerns, interpretability issues, and regulatory compliance (Nguyen & Patel, 2023). This research highlights the effectiveness of AI-driven fraud detection models in minimizing financial losses and enhancing operational efficiency in the U.S. healthcare sector, with future implications for explainable AI and privacy-preserving ML solutions.
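Among the techniques this survey names, unsupervised anomaly detection is the easiest to sketch: an Isolation Forest flags records that are cheap to isolate from the bulk of traffic. The claim-like features below are synthetic stand-ins, not real billing data.

```python
# Sketch: Isolation Forest anomaly detection for claim-like records,
# illustrating the unsupervised approach the survey mentions. Synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(950, 6))    # typical claims
outliers = rng.normal(loc=5.0, scale=1.0, size=(50, 6))   # inflated claims
claims = np.vstack([normal, outliers])

iso = IsolationForest(contamination=0.05, random_state=0).fit(claims)
flags = iso.predict(claims)            # -1 = anomalous, 1 = normal
print("flagged claims:", int((flags == -1).sum()))
```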
The increasing sophistication of fraud has rendered rule-based fraud detection obsolete, exposing banks to greater financial risk, reputational damage, and regulatory penalties. Financial stability, customer trust, and compliance are increasingly threatened as centralized Artificial Intelligence (AI) models fail to adapt, leading to inefficiencies, false positives, and undetected fraud. These limitations call for advanced AI solutions that let banks adapt to emerging fraud patterns. While AI enhances fraud detection, its black-box nature limits transparency, making it difficult for analysts to trust, validate, and refine decisions and posing challenges for compliance, fraud explanation, and adversarial defense. Effective fraud detection requires models that balance high accuracy with adaptability to emerging fraud patterns. Federated Learning (FL) enables distributed training for fraud detection while preserving data privacy and ensuring legal compliance. However, traditional FL approaches operate as black-box systems, limiting analysts' ability to trust, verify, or improve the decisions AI makes in fraud detection. Explainable AI (XAI) enhances fraud analysis by improving interpretability, fostering trust, refining classifications, and ensuring compliance. Integrating XAI and FL yields a privacy-preserving, explainable model that strengthens security and decision-making. This research proposes an Explainable FL (XFL) model for financial fraud detection that combines FL's security with XAI's interpretability. With the help of Shapley Additive Explanations (SHAP) and LIME, analysts can explain and improve fraud classifications while maintaining privacy, accuracy, and compliance. The proposed model is trained on a financial fraud detection dataset; the results highlight its detection efficiency and its elimination of false positives, attaining 99.95% accuracy and a miss rate of 0.05% and paving the way for a more effective and comprehensive AI-based system for detecting potential fraud in banking.
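The XFL design rests on federated averaging: each bank fits a model locally and only parameter updates travel. Below is a toy NumPy sketch of FedAvg rounds with a hand-rolled logistic model; the client sizes and data are invented, and the real system's SHAP/LIME layer is omitted.

```python
# Toy sketch of FedAvg: each bank trains locally; only model weights travel.
# Client data, model (logistic regression via gradient steps), and sizes
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few local gradient steps on one client's private data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))          # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)   # logistic-loss gradient
    return w

# Three "banks" with private datasets of different sizes.
clients = [(rng.normal(size=(n, 4)), rng.integers(0, 2, n))
           for n in (200, 500, 300)]
w_global = np.zeros(4)

for _ in range(10):
    local_ws, sizes = [], []
    for X, y in clients:                      # raw X, y never leave the client
        local_ws.append(local_sgd(w_global.copy(), X, y))
        sizes.append(len(y))
    # FedAvg: size-weighted average of the client weight vectors.
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("aggregated weights:", w_global.round(3))
```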
Traditional machine learning models often prioritize predictive accuracy at the expense of transparency and interpretability. This lack of transparency makes it difficult for organizations to comply with regulatory requirements and gain stakeholders' trust. In this research, we propose a fraud detection framework that combines a stacking ensemble of well-known gradient boosting models: XGBoost, LightGBM, and CatBoost. In addition, explainable artificial intelligence (XAI) techniques are used to enhance the transparency and interpretability of the model's decisions. We used SHAP (SHapley Additive Explanations) for feature selection to identify the most important features. Further efforts were made to explain the model's predictions using Local Interpretable Model-Agnostic Explanations (LIME), Partial Dependence Plots (PDP), and Permutation Feature Importance (PFI). The IEEE-CIS Fraud Detection dataset, which includes more than 590,000 real transaction records, was used to evaluate the proposed model. The model achieved high performance, with an accuracy of 99% and an AUC-ROC score of 0.99, outperforming several recent related approaches. These results indicate that combining high prediction accuracy with transparent interpretability is possible and could lead to more ethical and trustworthy solutions in financial fraud detection.
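As a sketch of the SHAP-based feature-selection step (not necessarily the authors' exact procedure), one can rank features by mean absolute SHAP value and retrain on the top-k; here XGBoost's built-in pred_contribs output supplies the SHAP values on synthetic data.

```python
# Sketch: SHAP-based feature selection -- rank features by mean |SHAP value|
# and retrain on the top-k. Synthetic data; uses XGBoost's native SHAP
# output (pred_contribs) to avoid an extra dependency.
import numpy as np
from sklearn.datasets import make_classification
from xgboost import DMatrix, XGBClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=3)
model = XGBClassifier(eval_metric="logloss").fit(X, y)

# pred_contribs=True returns per-sample SHAP values (+ a bias column last).
shap_vals = model.get_booster().predict(DMatrix(X), pred_contribs=True)[:, :-1]
importance = np.abs(shap_vals).mean(axis=0)
top_k = np.argsort(importance)[::-1][:10]     # keep the 10 strongest features
print("selected feature indices:", sorted(top_k))

slim_model = XGBClassifier(eval_metric="logloss").fit(X[:, top_k], y)
```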
Explainable Artificial Intelligence (XAI), when applied to financial decision-support systems (FDSS), creates transparent environments that strengthen user trust and lead to better decision outcomes. This research examines the influence of XAI on finance users' trust by evaluating its model interpretability alongside transparency and ethical compliance. The paper explains how three XAI mechanisms (SHAP values, LIME, and counterfactual explanations) help improve user confidence and interaction. Users report greater satisfaction with, and trust in, FDSS integrated with XAI compared with black-box AI systems, and these systems produce better financial decisions. Human-oriented XAI design is a key success factor in finance because it supports ethical behavior, operational success, and higher adoption rates.
The dynamically changing nature of fraud patterns and the need to safeguard sensitive customer data make it difficult for financial institutions to detect fraudulent activities. We propose a novel approach to fraud detection in decentralized financial systems by merging Federated Learning (FL) with Explainable AI (XAI). Because no financial institution shares raw data with another, FL lets the proposed system train a unified, effective fraud detection model without compromising data privacy or regulatory norms. We integrate XAI techniques to make the model transparent for stakeholders so they can interpret results and trust the system's decision process. Experimental results show that, compared with traditional centralized methods, the proposed approach achieves better detection accuracy with fewer false positives and greater interpretability. We believe our results point to a fruitful way to adopt FL together with XAI mechanisms, providing insight for fraud detection without breaching the privacy and secrecy of the underlying data while improving the overall transparency and accountability of the financial system.
Financial fraud detection and credit scoring are two important applications in the financial domain that require both high accuracy and interpretability. While algorithms such as Random Forests and XGBoost produce good predictive quality, they offer no way to explain their predictions and do not meet regulators' transparency criteria. This paper focuses on incorporating Explainable AI (XAI) methods such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to address these concerns around trust, compliance, and accountability. Decision Trees were employed for fraud detection and Random Forest for credit scoring, with SHAP used for global feature importance and LIME for instance-level explanations. The fraud detection model achieved 95% accuracy, and SHAP identified transaction characteristics such as amount and frequency as significant for detecting fraud; the credit scoring model achieved 76% accuracy, and LIME identified applicants' debt ratios and payment history as important. Integrating SHAP and LIME improves model interpretability and fairness, helping stakeholders and regulators trust the AI solutions that financial companies will rely on in the future.
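A minimal sketch of the LIME side of this setup, explaining one credit decision locally; the feature names are invented placeholders, not the paper's dataset.

```python
# Sketch: a LIME local explanation for a single prediction, in the spirit of
# the abstract. Requires the `lime` package; feature names are hypothetical.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=4)
names = ["debt_ratio", "payment_history", "income", "utilization", "tenure"]
model = RandomForestClassifier(random_state=4).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=names,
                                 class_names=["good", "bad"],
                                 mode="classification")
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
# Each tuple is (local rule, weight) from the fitted local surrogate model,
# e.g. ("debt_ratio > 0.61", 0.21).
print(exp.as_list())
```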
The use of AI in the finance sector is rapidly becoming essential to its key operations, including risk management, fraud detection, and investment analysis. This study examined the application of Explainable AI (XAI) to enhance transparency, trust, and informed decision-making in the financial sector. The research employed a mixed-methods approach, appropriate given the quantitative data collected through a Likert survey and the qualitative data collected from academic literature, case studies, and regulatory documents. Quantitative data were analyzed using JASP (independent t-tests, correlation analysis, and regression analysis) and JAMOVI (Exploratory Factor Analysis); qualitative data were analyzed with Taguette to determine themes. The findings indicated that XAI was viewed as a significant tool in the decision-making process and that trust in the finance sector increased. Transparency advances the quality of decisions made by finance professionals, which in turn boosts trust in, and the quality of, AI systems. The qualitative analysis revealed themes around XAI's role in fostering trust, the importance of transparency in enhancing interpretability, and the constraints on XAI application, including the trade-off between complexity and explainability. Keywords: Explainable Artificial Intelligence (XAI), Interpretability, Regulatory compliance, Transparency, Trust in AI systems, XAI Principles, XAI Techniques.
Explainable AI (XAI) improves machine learning models' interpretability, especially for detecting financial fraud. Financial fraud is a growing threat, with criminals using increasingly sophisticated methods to circumvent standard security measures. This research article investigates various XAI strategies for increasing transparency and confidence in fraud detection algorithms. The study examines the efficacy of SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), and attention mechanisms in providing insight into model predictions. We examine the existing obstacles to using XAI in fraud detection systems and provide approaches to improve both interpretability and prediction performance. This study helps to develop more transparent and trustworthy AI-driven fraud detection tools, hence facilitating regulatory compliance and improving decision-making in financial institutions. Index Terms: XAI, SHAP, LIME, fraud detection, financial transactions, interpretability.
Financial fraud is a growing concern that threatens the integrity of financial institutions and customer trust. Traditional fraud detection methods, which rely on rule-based systems and centralized machine learning models, often struggle to keep up with evolving fraudulent tactics. Additionally, the black-box nature of many machine learning models limits their interpretability, making it difficult for financial analysts and regulatory bodies to trust and validate fraud detection outcomes. To address these challenges, Explainable AI (XAI) enhances model transparency by providing human-understandable explanations for fraud predictions, while Federated Learning (FL) enables privacy-preserving, collaborative model training across multiple institutions without sharing sensitive data. Federated Learning offers a decentralized approach that allows financial institutions to train fraud detection models on diverse, distributed datasets while ensuring compliance with data protection regulations. This improves model generalization and robustness by leveraging insights from various sources without compromising customer privacy. At the same time, XAI ensures that these models remain interpretable, helping analysts understand the reasoning behind fraud alerts, identify potential biases, and refine detection strategies accordingly. The combination of XAI and FL enables institutions to strengthen fraud detection capabilities while adhering to ethical AI practices and regulatory requirements. The integration of Explainable AI and Federated Learning in financial fraud detection offers a promising solution to the challenges of transparency and privacy. XAI improves the interpretability of fraud detection models, making them more accountable and understandable for stakeholders, while FL facilitates secure and efficient model training across different organizations. This paper explores the synergy between these technologies, discussing their advantages, challenges, and potential applications in enhancing fraud detection. Together, FL and XAI deliver a powerful solution for financial fraud detection, offering strong privacy guarantees, improved model performance, and enhanced transparency. This approach supports regulatory compliance and fosters confidence among stakeholders in the deployment of AI-driven fraud prevention.
With machine learning pipelines becoming significant in flagging abnormal card payment patterns, financial institutions face the challenge of improving regulatory compliance and end-user trust in the face of ambiguous “black-box” decisions. The study looks at modern Explainable AI (XAI) tools that could make fraud-detection models transparent without sacrificing accuracy, using the publicly available Credit Card Fraud Detection Dataset 2023 [30]. The first stage is an unsupervised anomaly detector (a deep autoencoder) that screens the full unlabeled stream for suspicious transactions; by filtering items, it dramatically reduces the volume presented to analysts. The filtered data are then classified by four representative supervised learners, Logistic Regression, Decision Tree, Random Forest, and XGBoost, alongside an interpretable glass-box baseline (Explainable Boosting Machine, EBM) and an LSTM for sequence-aware deep learning. We generated global and local SHAP and LIME explanations of the models, whereas the EBM intrinsically exposes additive feature curves. Explanations were compared and benchmarked on fidelity, stability, and cognitive load within a simulated analyst-dashboard environment. XGBoost proved to be the best performer in detection by all measures (AUC ≈ 0.995; F1 ≈ 0.91) but required post-hoc SHAP elucidation of non-linear interaction effects between transaction amount and PCA variables. The EBM lagged slightly in accuracy (AUC ≈ 0.965) but provided transparency, allowing effortless on-the-spot auditing of feature effects and interaction terms. In the user study, explanations led to a 38% increase in investigation speed, while ranked SHAP feature attributions produced a 12% decrease in false-positive escalations. Coupling unsupervised screening with supervised classifiers enhanced by model-agnostic XAI provides both high recall and actionable insights to satisfy regulatory “right-to-explanation” demands. In conclusion, the study provides a realistic template for equipping production environments with trustworthy fraud-monitoring systems, real-time principles, and guidelines.
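A compact stand-in for the first-stage screening idea: train a bottlenecked network to reconstruct transactions and escalate the worst-reconstructed ones. The sketch uses scikit-learn's MLP as a stand-in for the paper's deep autoencoder, on synthetic data.

```python
# Sketch: unsupervised screening by reconstruction error. An MLP with a
# bottleneck layer is trained to reproduce its input; transactions that
# reconstruct poorly are escalated to the supervised stage. Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=3000, n_features=12, weights=[0.98],
                           random_state=5)
Xs = StandardScaler().fit_transform(X)

# The 3-unit bottleneck forces a compressed representation; traffic that
# deviates from the bulk reconstructs poorly.
ae = MLPRegressor(hidden_layer_sizes=(8, 3, 8), max_iter=500,
                  random_state=5).fit(Xs, Xs)
errors = ((ae.predict(Xs) - Xs) ** 2).mean(axis=1)

cutoff = np.quantile(errors, 0.95)           # pass the top 5% to analysts
suspicious = np.where(errors > cutoff)[0]
print(f"{len(suspicious)} of {len(Xs)} transactions escalated; "
      f"fraud share among them: {y[suspicious].mean():.2f}")
```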
Every publicly traded U.S. company files an annual 10-K report containing critical insights into financial health and risk. We propose Tiny eXplainable Risk Assessor (TinyXRA), a lightweight and explainable transformer-based model that automatically assesses company risk from these reports. Unlike prior work that relies solely on the standard deviation of excess returns (adjusted for the Fama-French model), which indiscriminately penalizes both upside and downside risk, TinyXRA incorporates skewness, kurtosis, and the Sortino ratio for more comprehensive risk assessment. We leverage TinyBERT as our encoder to efficiently process lengthy financial documents, coupled with a novel dynamic, attention-based word cloud mechanism that provides intuitive risk visualization while filtering irrelevant terms. This lightweight design ensures scalable deployment across diverse computing environments with real-time processing capabilities for thousands of financial documents which is essential for production systems with constrained computational resources. We employ triplet loss for risk quartile classification, improving over pairwise loss approaches in existing literature by capturing both the direction and magnitude of risk differences. Our TinyXRA achieves state-of-the-art predictive accuracy across seven test years on a dataset spanning 2013-2024, while providing transparent and interpretable risk assessments. We conduct comprehensive ablation studies to evaluate our contributions and assess model explanations both quantitatively by systematically removing highly attended words and sentences, and qualitatively by examining explanation coherence. The paper concludes with findings, practical implications, limitations, and future research directions. Our code is available at https://github.com/Chen-XueWen/TinyXRA.
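For readers unfamiliar with the objective, here is a minimal PyTorch sketch of the triplet margin loss TinyXRA trains with; random tensors stand in for TinyBERT embeddings of 10-K filings from the same versus different risk quartiles.

```python
# Sketch: triplet margin loss over document embeddings, as TinyXRA uses for
# risk-quartile ranking. Random tensors stand in for TinyBERT encodings of
# anchor / same-quartile / different-quartile filings.
import torch
import torch.nn as nn

torch.manual_seed(0)
anchor = torch.randn(16, 128)                    # 16 filings, 128-dim embeddings
positive = anchor + 0.1 * torch.randn(16, 128)   # same risk quartile: nearby
negative = torch.randn(16, 128)                  # different quartile: far away

# Pulls each anchor toward its positive and pushes it at least `margin` away
# from its negative, so embedding distance encodes both the direction and the
# magnitude of risk differences.
loss_fn = nn.TripletMarginLoss(margin=1.0)
print("triplet loss:", loss_fn(anchor, positive, negative).item())
```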
The growing volume of digital financial transactions has led to an increase in fraudulent activities. Financial institutions and businesses are faced with the daunting task of detecting and preventing fraudulent transactions. Machine learning has emerged as a powerful tool for addressing this challenge, with a wide range of models and techniques available. However, the lack of transparency and interpretability in complex machine learning models has raised concerns in the financial sector. This research paper explores the importance of interpretable machine learning models for financial fraud detection, reviews various techniques and algorithms, and presents a case study to demonstrate their practical application. Financial institutions are under constant threat from sophisticated fraudsters who employ ever-evolving techniques to deceive systems and compromise sensitive information. In this context, machine learning models have become indispensable tools for detecting fraudulent activities in real-time. However, the complexity of many machine learning models can render them difficult to interpret, which is a critical concern in the highly regulated and high-stakes field of finance. This abstract provides an overview of the key aspects surrounding the use of interpretable machine learning models in the realm of financial fraud detection. Interpretable models offer transparency and insights into decision-making processes, essential for maintaining trust, regulatory compliance, and facilitating proactive responses to emerging fraud patterns. This paper first outlines the importance of financial fraud detection, highlighting the significant financial losses and reputational damage that institutions can incur in the absence of effective fraud prevention measures. It then delves into the concept of interpretability in machine learning, explaining why it is essential in financial fraud detection. Interpretability not only aids in understanding model predictions but also assists in model validation, accountability, and regulatory compliance. Next, the paper explores various interpretable machine learning models that have shown promise in the field of financial fraud detection. These models, such as decision trees, logistic regression, and rule-based systems, are discussed in the context of their strengths, weaknesses, and applicability.
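As a token example of the intrinsically interpretable model families the paper surveys, a shallow decision tree can be printed directly as if-then rules (synthetic data, invented feature names):

```python
# Sketch: an intrinsically interpretable model -- a shallow decision tree
# whose decision logic prints as a human-readable rule listing.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=4, random_state=8)
names = ["amount", "frequency", "account_age", "merchant_risk"]  # hypothetical
tree = DecisionTreeClassifier(max_depth=3, random_state=8).fit(X, y)
print(export_text(tree, feature_names=names))   # if-then rule view of the tree
```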
Fraudulent transactions and the methods to detect them are an important issue for financial organizations globally. The need for advanced fraud detection systems to protect assets and maintain customer trust is paramount, but particular factors make developing efficient and effective fraud detection models a challenge. Deep Learning (DL) has greatly improved fraud detection accuracy by detecting intrinsic patterns, whereas interpretability techniques improve transparency and build trust by making predictions understandable to experts. This study presents a Fraud Detection System using Recursive Feature Elimination and Waterwheel Plant Optimization (FDS-RFEWPO) model for financial transactions. The aim is to perform a comprehensive evaluation of fraud detection in high-dimensional financial transactions using advanced techniques. Initially, the FDS-RFEWPO technique applies min-max data pre-processing to normalize the input data. For feature selection, the FDS-RFEWPO model employs the Recursive Feature Elimination (RFE) technique to select the most relevant features from the dataset. Furthermore, the Variational Autoencoder/Wasserstein Autoencoder (VAE/WAE) model is employed for fraud detection and classification. To further enhance model performance, the Waterwheel Plant Optimization (WPO) technique is employed for hyperparameter tuning, ensuring the selection of optimal parameters that contribute to improved accuracy. Finally, the Explainable Artificial Intelligence (XAI) technique applies Local Interpretable Model-Agnostic Explanations (LIME) to improve the transparency, interpretability, and trustworthiness of Artificial Intelligence (AI) methods by making their decision-making procedures clear to humans. To evaluate the performance of the FDS-RFEWPO model, a comprehensive experimental analysis is conducted using a financial fraud detection dataset. The comparison study demonstrates that the FDS-RFEWPO model attains a superior accuracy of 97.41% over existing techniques.
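A sketch of the pipeline's first two stages under stated simplifications: min-max scaling followed by Recursive Feature Elimination. The VAE/WAE classifier, the WPO tuner, and the LIME layer are omitted, and the estimator and feature counts here are illustrative.

```python
# Sketch: min-max preprocessing + Recursive Feature Elimination, standing in
# for the opening stages of the FDS-RFEWPO pipeline. Synthetic data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=2000, n_features=25, n_informative=6,
                           random_state=6)
X = MinMaxScaler().fit_transform(X)          # min-max normalization

# RFE repeatedly drops the weakest features until only 6 remain.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=6).fit(X, y)
print("kept feature indices:", [i for i, keep in enumerate(rfe.support_) if keep])
```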
The incorporation of machine learning (ML) into modern financial analysis has made transactions more complex and on-demand, further increasing the scope of ML applications in finance. On the other hand, accounting and auditing processes have yet to adopt machine learning systems, owing to challenges of precision, interpretability, and integration. This research analyzes the balance between accuracy and explainability in XAI for fraud detection using XGBoost, Transformer-based models, and continuous auditing approaches. Key findings suggest that, although less preferred, Transformer-based models are more accurate in detecting multi-faceted fraud, delivering an AUC-ROC of 95%. XGBoost, with an AUC-ROC score of 92%, surpasses the established benchmarks for continuous auditing, achieving high assurance while requiring low operational complexity, and is therefore the model with fewer continuous-auditing constraints. The results support the premise that compliance with audit requirements favors low-complexity models whose logic controllers can trust, and that the central trade-off is the loss of understanding that accompanies gains in accuracy, a trade-off that shapes the scrutiny surrounding XGBoost's adoption. These results emphasize the potential of hybrid AI systems that merge the explainability of XGBoost with the sequential analysis of Transformers, which tend to be less interpretable. Such models could benefit decision-makers significantly.
No abstract available
No abstract available
No abstract available
This research aims to enhance financial fraud detection by integrating SHAP-Instance Weighting and Anchor Explainable AI with XGBoost, addressing challenges of class imbalance and model interpretability. The study extends SHAP values beyond feature importance to instance weighting, assigning higher weights to more influential instances. This focuses model learning on critical samples. It combines this with Anchor Explainable AI to generate interpretable if-then rules explaining model decisions. The approach is applied to a dataset of financial statements from the listed companies on the Stock Exchange of Thailand. The method significantly improves fraud detection performance, achieving perfect recall for fraudulent instances and substantial gains in accuracy while maintaining high precision. It effectively differentiates between non-fraudulent, fraudulent, and grey area cases. The generated rules provide transparent insights into model decisions, offering nuanced guidance for risk management and compliance. This research introduces instance weighting based on SHAP values as a novel concept in financial fraud detection. By simultaneously addressing class imbalance and interpretability, the integrated approach outperforms traditional methods and sets a new standard in the field. It provides a robust, explainable solution that reduces false positives and increases trust in fraud detection models. DOI: 10.28991/ESJ-2024-08-06-016
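One plausible reading of the instance-weighting idea, sketched below: score each training row by the total magnitude of its SHAP attributions from a base model, then retrain XGBoost with those scores as sample weights. Synthetic data; the Anchor rule-generation stage is omitted.

```python
# Sketch of SHAP-based instance weighting (one plausible interpretation of
# the abstract, not the authors' exact scheme). Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from xgboost import DMatrix, XGBClassifier

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=7)
base = XGBClassifier(eval_metric="logloss").fit(X, y)

# Per-row SHAP values from XGBoost itself (the last column is the bias term).
contribs = base.get_booster().predict(DMatrix(X), pred_contribs=True)[:, :-1]
influence = np.abs(contribs).sum(axis=1)
weights = influence / influence.mean()       # emphasize influential instances

# Retrain with the influence scores as per-instance sample weights.
reweighted = XGBClassifier(eval_metric="logloss").fit(X, y, sample_weight=weights)
```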
The integration of Explainable Artificial Intelligence (XAI) in financial auditing marks a transformative advancement in enhancing transparency, accountability, and trust in automated decision-making processes. This comparative study evaluates various XAI techniques—such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), decision trees, and counterfactual explanations—within the domain of financial auditing. The findings reveal significant differences in interpretability, accuracy, user comprehension, and auditability across these methods, offering valuable insights for auditors, regulators, and AI developers. The impact of this research is twofold. Firstly, it provides a critical framework for selecting suitable XAI models tailored to specific financial auditing tasks—such as fraud detection, anomaly identification, and risk assessment—thereby improving the reliability of AI-augmented audits. Secondly, the study addresses regulatory and ethical imperatives by demonstrating how transparent AI systems can support compliance with financial standards and accountability norms. Ultimately, this research contributes to the broader adoption of trustworthy AI in finance, promoting more informed decision-making and fostering greater confidence among stakeholders, including auditors, clients, and regulatory bodies. It lays the groundwork for future development of hybrid audit systems that balance AI efficiency with human-centric transparency.
No abstract available
This paper provides a comprehensive and conceptually grounded review of how artificial intelligence (AI) and machine learning (ML) are transforming professional judgement in accounting. It clarifies the epistemic foundations of AI and ML, synthesises the expanding accounting literature employing these techniques and provides an agenda for future research. This study reviews AI and ML applications across auditing and assurance, financial reporting, management accounting, taxation, ESG measurement, financial distress and earnings prediction, and public-sector analytics. Applying Abbott's (1988) system-of-professions framework, it connects methodological developments to broader institutional questions about expertise, authority and governance in an AI-enabled accounting environment. Three insights emerge. First, ML models consistently outperform traditional statistical approaches across prediction-intensive accounting domains by capturing nonlinearities, interactions and high-dimensional structures that conventional methods overlook. Second, ML expands the evidentiary boundaries of accounting by incorporating unstructured, textual, behavioural and alternative data, reshaping what counts as relevant and credible evidence. Third, as ML systems increasingly rival or exceed human predictive judgement, particularly in areas such as fraud detection, accounting estimates and going-concern prediction, they challenge the profession's epistemic authority, necessitating new expertise in model interpretation, governance and error evaluation. AI and ML fundamentally reshape the evidentiary basis of accounting, creating new forms of machine-generated knowledge that challenge traditional professional judgement. As predictive models increasingly surpass human experts, research must investigate how authority, responsibility and trust shift within hybrid human–AI decision systems. Future research should examine how algorithmic evidence is validated, governed and integrated into audit and reporting frameworks, and how professional identities, skill sets and jurisdiction evolve as accountants transition from primary judgement-makers to interpreters and overseers of AI-driven inference. AI can enhance audit quality through automated anomaly detection, continuous monitoring and ML-driven risk assessment. Firms can use ML to improve accounting estimates, fraud detection, misstatement prediction and ESG analytics. Management accountants can deploy AI for forecasting, planning and real-time cost optimisation. Regulators and tax authorities can apply ML to detect non-compliance and prioritise audits. Across all settings, accountants increasingly focus on interpreting, validating and governing AI outputs rather than generating predictions themselves. This paper demystifies AI and ML concepts, mapping empirical developments across the field and offering a theoretically grounded account of how AI reshapes professional judgement and epistemic authority. It also identifies opportunities for future research.