Diffusion Models for Fingerprint Generation
Foundational Studies of DDPM-Based Fingerprint Synthesis and Performance Comparisons with GANs
This group of papers examines the basic feasibility of applying denoising diffusion probabilistic models (DDPMs) to fingerprint generation, and systematically compares them with traditional generative adversarial networks (GANs) in terms of realism, diversity, and effectiveness for data augmentation.
- Fingerprint Synthesis from Diffusion Models and Generative Adversarial Networks (Weizhong Tang, Diego Andre Figueroa Llamosas, Donglin Liu, K. Johnsson, A. Sopasakis, 2025, No journal)
- DiffFinger: Advancing Synthetic Fingerprint Generation through Denoising Diffusion Probabilistic Models (Fred M. Grabovski, Lior Yasur, Yaniv Hacmon, Lior Nisimov, Stav Nimrod, 2024, ArXiv)
- Data augmentation-based enhanced fingerprint recognition using deep convolutional generative adversarial network and diffusion models (Yukai Liu, 2024, Applied and Computational Engineering)
- Enhancing Fingerprint Image Synthesis with GANs, Diffusion Models, and Style Transfer Techniques (W. Tang, D. Figueroa, D. Liu, K. Johnsson, A. Sopasakis, 2024, ArXiv)
Diffusion Models for Specific Fingerprint Types (Latent and Partial Fingerprints)
This group focuses on fingerprint generation in specific scenarios, such as synthesizing latent fingerprints, for which annotated data are scarce, and inpainting and augmenting tiny partial fingerprints for AIoT devices.
- Diffusion Probabilistic Model Based End-to-End Latent Fingerprint Synthesis (Kejian Li, Xiao Yang, 2023, 2023 IEEE 4th International Conference on Pattern Recognition and Machine Learning (PRML))
- Inpainting Diffusion Synthetic and Data Augment With Feature Keypoints for Tiny Partial Fingerprints (Mao-Hsiu Hsu, Yung-Ching Hsu, Ching-Te Chiu, 2025, IEEE Transactions on Biometrics, Behavior, and Identity Science)
High-Fidelity Generation Combining Signal Processing and Architectural Improvements
These papers introduce techniques such as the wavelet packet transform (WPT) and an improved polynomial noise schedule to sharpen the precision of diffusion models on ridge detail and local feature extraction, yielding higher-fidelity generation.
- Denoising Diffusion Probabilistic Model with Wavelet Packet Transform for Fingerprint Generation (Li Chen, Yong Chan, 2024, Jordanian Journal of Computers and Information Technology)
Controllable Fingerprint Generation and Identity Consistency Across Multiple Impressions
These papers emphasize controllability of the generation process and identity consistency, aiming to produce fingerprints of the same identity under different impressions (intra-class variations) while allowing explicit control over fingerprint class, sensor type, and quality level.
- Universal Fingerprint Generation: Controllable Diffusion Model With Multimodal Conditions (Steven A. Grosz, Anil K. Jain, 2024, IEEE Transactions on Pattern Analysis and Machine Intelligence)
- Vikriti-ID: A Novel Approach For Real Looking Fingerprint Data-set Generation (Rishabh Shukla, Aditya Sinha, Vanshika Singh, Harkeerat Kaur, 2024, 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV))
Controllability Enhancement and Inference-Efficiency Optimization for Diffusion Models
Although these are general diffusion-model studies, they provide core technical underpinnings for fingerprint generation. In particular, ControlNet and its variants (Meta ControlNet, ControlNet-XS) explore how lightweight architectures and meta learning enable precise, fast control over image generation.
- Meta ControlNet: Enhancing Task Adaptation via Meta Learning (Junjie Yang, Jinze Zhao, Peihao Wang, Zhangyang Wang, Yingbin Liang, 2023, No journal)
- ControlNet-XS: Rethinking the Control of Text-to-Image Diffusion Models as Feedback-Control Systems (Denis Zavadski, Johann-Friedrich Feiden, Carsten Rother, 2023, No journal)
- ControlNet-XS: Designing an Efficient and Effective Architecture for Controlling Text-to-Image Diffusion Models (Denis Zavadski, Johann-Friedrich Feiden, Carsten Rother, 2023, ArXiv)
Reliability Evaluation and Testing Frameworks for Synthetic Biometric Data
This paper proposes a unified evaluation framework for testing the reliability of synthetic biometrics (including fingerprints), covering key metrics such as randomness, quality similarity, identity preservation, and geometric diversity to ensure that synthetic data balance privacy protection with functional validity.
- Data Reliability Testing Framework for Biometric Datasets Using Synthetic Iris and Fingerprint Images Generated via Deep Generative Models (Hyoungrae Kim, Hakil Kim, 2025, IEEE Access)
Taken together, these papers trace the full evolution of diffusion models for fingerprint generation: from early work positioning them as an alternative to GANs for higher quality and diversity, to customized development for specific tasks such as latent and partial fingerprints. The research focus has shifted from pure image synthesis toward identity-consistent, controllable generation, with wavelet transforms and optimized ControlNet architectures improving detail fidelity and inference efficiency. Researchers have also begun building comprehensive evaluation frameworks to ensure that synthetic fingerprints can be used reliably for algorithm training and privacy protection.
Total: 13 related papers
The utilization of synthetic data for fingerprint recognition has garnered increased attention due to its potential to alleviate privacy concerns surrounding sensitive biometric data. However, current methods for generating fingerprints have limitations in creating impressions of the same finger with useful intra-class variations. To tackle this challenge, we present GenPrint, a framework to produce fingerprint images of various types while maintaining identity and offering humanly understandable control over different appearance factors, such as fingerprint class, acquisition type, sensor device, and quality level. Unlike previous fingerprint generation approaches, GenPrint is not confined to replicating style characteristics from the training dataset alone: it enables the generation of novel styles from unseen devices without requiring additional fine-tuning. To accomplish these objectives, we developed GenPrint using latent diffusion models with multimodal conditions (text and image) for consistent generation of style and identity. Our experiments leverage a variety of publicly available datasets for training and evaluation. Results demonstrate the benefits of GenPrint in terms of identity preservation, explainable control, and universality of generated images. Importantly, the GenPrint-generated images yield comparable or even superior accuracy to models trained solely on real data and further enhance performance when used to augment the diversity of existing real fingerprint datasets.
The majority of contemporary fingerprint synthesis is based on the Generative Adversarial Network (GAN). Recently, the Denoising Diffusion Probabilistic Model (DDPM) has been demonstrated to be more effective than GAN in numerous scenarios, particularly in terms of diversity and fidelity. This research develops a model based on the enhanced DDPM for fingerprint generation. Specifically, the image is decomposed into sub-images of varying frequency sub-bands through the use of a wavelet packet transform (WPT). This method enables DDPM to operate at a more local and detailed level, thereby accurately obtaining the characteristics of the data. Furthermore, a polynomial noise schedule has been designed to replace the linear noise strategy, which can result in a smoother noise addition process. Experiments based on multiple metrics on the datasets SOCOFing and NIST4 demonstrate that the proposed model is superior to existing models.
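The abstract above mentions replacing the linear noise schedule with a polynomial one for a smoother noise-addition process, but does not give the exact polynomial. The sketch below assumes a simple power-law interpolation between the usual DDPM endpoints; the `power` exponent and the endpoint values are illustrative assumptions, not the paper's stated parameters:

```python
def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Standard DDPM linear schedule: beta_t rises linearly."""
    return [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]

def polynomial_beta_schedule(T, beta_start=1e-4, beta_end=0.02, power=2.0):
    """Hypothetical polynomial schedule: for power > 1, betas stay small
    longer at the start, so early steps add noise more gently than the
    linear schedule while ending at the same beta_end."""
    return [beta_start + (beta_end - beta_start) * (t / (T - 1)) ** power
            for t in range(T)]

def cumulative_alpha_bar(betas):
    """alpha_bar_t = prod_{s<=t} (1 - beta_s): fraction of signal kept at step t."""
    out, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        out.append(prod)
    return out
```

Since `(t/(T-1))**power <= t/(T-1)` for `power > 1`, every intermediate beta is at most its linear counterpart, which is one way to realize the "smoother noise addition" the authors describe.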
This study explores the generation of synthesized fingerprint images using Denoising Diffusion Probabilistic Models (DDPMs). The significant obstacles in collecting real biometric data, such as privacy concerns and the demand for diverse datasets, underscore the imperative for synthetic biometric alternatives that are both realistic and varied. Despite the strides made with Generative Adversarial Networks (GANs) in producing realistic fingerprint images, their limitations prompt us to propose DDPMs as a promising alternative. DDPMs are capable of generating images with increasing clarity and realism while maintaining diversity. Our results reveal that DiffFinger not only competes with authentic training set data in quality but also provides a richer set of biometric data, reflecting true-to-life variability. These findings mark a promising stride in biometric synthesis, showcasing the potential of DDPMs to advance the landscape of fingerprint identification and authentication systems.
Fingerprints have been crucial evidence for law enforcement agencies for a long time. Though the rapidly developing deep learning has dramatically improved the performance of the latent fingerprint recognition algorithm, a fully automated latent fingerprint identification system is still far from meeting actual needs. One major issue is the lack of publicly available latent fingerprint databases. Recently, diffusion probabilistic models have emerged as state-of-the-art deep generative methods for image synthesis. These models have better distribution coverage and less mode collapse than the popular Generative Adversarial Networks. In this paper, we propose an end-to-end latent fingerprint synthetic approach based on the improved denoising diffusion probabilistic model. The proposed approach could simultaneously generate latent, rolled, and plain fingerprints of high visual realism. Several primary degradation factors, such as various background textures, limited area of ridge patterns, and structural noise, can be directly generated without any postprocessing, unlike existing methods. We conduct NFIQ2 and perceptual analysis in the experiments to evaluate the proposed approach. The results indicate that the quality and visual realism of the proposed synthetic fingerprints are similar to those of natural ones, demonstrating the effectiveness of our approach.
No abstract available
Fingerprint recognition research faces significant challenges due to the limited availability of extensive, publicly available fingerprint databases. Existing databases lack a sufficient number of identities and fingerprint impressions, which hinders progress in areas such as fingerprint-based access control. To address this challenge, we present Vikriti-ID, a synthetic fingerprint generator capable of generating unique fingerprints with multiple impressions. Using Vikriti-ID, we generated a large database containing 500,000 unique fingerprints, each with 10 associated impressions. We then demonstrate the effectiveness of the database generated by Vikriti-ID by evaluating its imposter-genuine score distribution and EER. Apart from this, we also trained a deep network, inspired by [13], to check the usability of the data, training it on both Vikriti-ID-generated data and public data. The generated data achieved an Equal Error Rate (EER) of 0.16% and an AUC of 0.89%. This improvement is possible due to the limitations of existing publicly available datasets, which are limited in scale or lack multiple impressions.
This paper presents a comprehensive data reliability testing framework for evaluating synthetic biometric data, addressing privacy concerns in fingerprint and iris recognition systems. This unified and modality-independent methodology establishes six quantitative metrics: randomness, quality similarity, attribute similarity, non-duplication, ID-preservation, and geometric diversity. The framework is implemented through a novel RD-Net architecture consisting of a Random Network for privacy protection and a Deterministic Network for maintaining essential biometric characteristics. Experiments using public datasets (FVC 2002, IITDelhi-Iris, and CASIA-Iris-V4) demonstrate that synthetic samples maintain high dissimilarity from source datasets while preserving their structural properties. The synthetic biometric data generated through the proposed Random Network and Deterministic Network architectures are evaluated using the data reliability testing framework, confirming distribution similarity with real data across all proposed metrics and achieving scores over 80. This approach offers a method for generating and evaluating synthetic biometric data that balances privacy protection with functional validity in biometric system development and testing.
The progress of fingerprint recognition applications encounters substantial hurdles due to privacy and security concerns, leading to limited fingerprint data availability and stringent data quality requirements. This article endeavors to tackle the challenges of data scarcity and data quality in fingerprint recognition by implementing data augmentation techniques. Specifically, this research employed two state-of-the-art generative models in the domain of deep learning, namely Deep Convolutional Generative Adversarial Network (DCGAN) and the Diffusion model, for fingerprint data augmentation. Generative Adversarial Network (GAN), as a popular generative model, effectively captures the features of sample images and learns the diversity of the sample images, thereby generating realistic and diverse images. DCGAN, as a variant model of traditional GAN, inherits the advantages of GAN while alleviating issues such as blurry images and mode collapse, resulting in improved performance. On the other hand, Diffusion, as one of the most popular generative models in recent years, exhibits outstanding image generation capabilities and surpasses traditional GAN in some image generation tasks. The experimental results demonstrate that both DCGAN and Diffusion can generate clear, high-quality fingerprint images, fulfilling the requirements of fingerprint data augmentation. Furthermore, through the comparison between DCGAN and Diffusion, it is concluded that the quality of fingerprint images generated by DCGAN is superior to the results of Diffusion, and DCGAN exhibits higher efficiency in both training and generating images compared to Diffusion.
We present novel approaches involving generative adversarial networks and diffusion models in order to synthesize high quality, live and spoof fingerprint images while preserving features such as uniqueness and diversity. We generate live fingerprints from noise with a variety of methods, and we use image translation techniques to translate live fingerprint images to spoof. To generate different types of spoof images based on limited training data we incorporate style transfer techniques through a cycle autoencoder equipped with a Wasserstein metric along with Gradient Penalty (CycleWGAN-GP) in order to avoid mode collapse and instability. We find that when the spoof training data includes distinct spoof characteristics, it leads to improved live-to-spoof translation. We assess the diversity and realism of the generated live fingerprint images mainly through the Fréchet Inception Distance (FID) and the False Acceptance Rate (FAR). Our best diffusion model achieved a FID of 15.78. The comparable WGAN-GP model achieved slightly higher FID while performing better in the uniqueness assessment due to a slightly lower FAR when matched against the training data, indicating better creativity. Moreover, we give example images showing that a DDPM model clearly can generate realistic fingerprint images.
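For intuition about the FID values quoted above: FID is the Fréchet (Wasserstein-2) distance between two Gaussians fitted to Inception features of real and generated images. The sketch below shows the closed form in the univariate special case; the multivariate version used in practice replaces the scalar terms with a norm and a trace of a matrix square root, and the sample data here are made up purely for illustration:

```python
import math

def frechet_distance_1d(mu1, var1, mu2, var2):
    """Fréchet distance between two univariate Gaussians.
    Full FID uses the multivariate analogue on Inception features:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

def fit_gaussian(xs):
    """Fit mean and (population) variance to a sample of scalar features."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, var
```

Identical distributions give a distance of zero; lower FID therefore means the generated-feature distribution sits closer to the real one, which is why 15.78 is reported as the better diffusion result.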
No abstract available
The field of image synthesis has made tremendous strides forward in the last years. Besides defining the desired output image with text-prompts, an intuitive approach is to additionally use spatial guidance in form of an image, such as a depth map. In state-of-the-art approaches, this guidance is realized by a separate controlling model that controls a pre-trained image generation network, such as a latent diffusion model. Understanding this process from a control system perspective shows that it forms a feedback-control system, where the control module receives a feedback signal from the generation process and sends a corrective signal back. When analysing existing systems, we observe that the feedback signals are temporally sparse and have a small number of bits. As a consequence, there can be long delays between newly generated features and the respective corrective signals for these features. It is known that this delay is the most unwanted aspect of any control system. In this work, we take an existing controlling network (ControlNet) and change the communication between the controlling network and the generation process to be of high-frequency and with large-bandwidth. By doing so, we are able to considerably improve the quality of the generated images, as well as the fidelity of the control. Also, the controlling network needs noticeably fewer parameters and hence is about twice as fast during inference and training time. Another benefit of small-sized models is that they help to democratise our field and are likely easier to understand. We call our proposed network ControlNet-XS. When comparing with the state-of-the-art approaches, we outperform them for pixel-level guidance, such as depth, canny-edges, and semantic segmentation, and are on a par for loose keypoint-guidance of human poses. All code and pre-trained models will be made publicly available.
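The feedback-control framing above can be illustrated with a deliberately simplified toy loop (a schematic analogy only, not the actual network): a "generator" adds new features each step while a "controller" injects a corrective signal at some frequency. Sparse feedback lets the error accumulate between corrections, which is the delay the authors target with high-frequency, large-bandwidth communication:

```python
def generate(steps, target, correct_every, gain=0.5):
    """Toy feedback loop. Each step the 'generator' drifts away from the
    guidance target; every `correct_every` steps the 'controller' observes
    the state and applies a proportional correction. Returns the final
    absolute tracking error."""
    x, drift = 0.0, 1.0
    for t in range(1, steps + 1):
        x += drift                      # generation adds new (uncorrected) features
        if t % correct_every == 0:      # controller feedback arrives
            x += gain * (target - x)    # proportional corrective signal
    return abs(target - x)
```

In this toy, per-step feedback (`correct_every=1`) keeps the tracking error bounded near `drift * (1 - gain) / gain`, while feedback every 10 steps settles about an order of magnitude further from the target.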
Diffusion-based image synthesis has attracted extensive attention recently. In particular, ControlNet that uses image-based prompts exhibits powerful capability in image tasks such as canny edge detection and generates images well aligned with these prompts. However, vanilla ControlNet generally requires extensive training of around 5000 steps to achieve a desirable control for a single task. Recent context-learning approaches have improved its adaptability, but mainly for edge-based tasks, and rely on paired examples. Thus, two important open issues are yet to be addressed to reach the full potential of ControlNet: (i) zero-shot control for certain tasks and (ii) faster adaptation for non-edge-based tasks. In this paper, we introduce a novel Meta ControlNet method, which adopts the task-agnostic meta learning technique and features a new layer freezing design. Meta ControlNet significantly reduces learning steps to attain control ability from 5000 to 1000. Further, Meta ControlNet exhibits direct zero-shot adaptability in edge-based tasks without any finetuning, and achieves control within only 100 finetuning steps in more complex non-edge tasks such as Human Pose, outperforming all existing methods. The code is available at https://github.com/JunjieYang97/Meta-ControlNet.
The advancement of fingerprint research within public academic circles has been trailing behind facial recognition, primarily due to the scarcity of extensive publicly available datasets, despite fingerprints being widely used across various domains. Recent progress has seen the application of deep learning techniques to synthesize fingerprints, predominantly focusing on large-area fingerprints within existing datasets. However, with the emergence of AIoT and edge devices, the importance of tiny partial fingerprints has been underscored for their faster and more cost-effective properties. Yet, there remains a lack of publicly accessible datasets for such fingerprints. To address this issue, we introduce publicly available datasets tailored for tiny partial fingerprints. Using advanced generative deep learning, we pioneer diffusion methods for fingerprint synthesis. By combining random sampling with inpainting diffusion guided by feature-keypoint masks, we enhance data augmentation while preserving key features, achieving up to 99.1% recognition matching rate. To demonstrate the usefulness of the fingerprint images generated using our approach, we conducted experiments involving model training for various tasks, including denoising, deblurring, and deep forgery detection. The results showed that models trained with our generated datasets outperformed those trained without our datasets or with other synthetic datasets. This indicates that our approach not only produces diverse fingerprints but also improves the model's generalization capabilities. Furthermore, our approach ensures confidentiality without compromise by partially transforming randomly sampled synthetic fingerprints, which reduces the likelihood of real fingerprints being leaked. The total number of generated fingerprints published in this article amounts to 818,077. Moving forward, we will continue to release updates to contribute to the advancement of the tiny partial fingerprint field.
The code and our generated tiny partial fingerprint dataset can be accessed at https://github.com/Hsu0623/Inpainting-Diffusion-Synthetic-and-Data-Augment-with-Feature-Keypoints-for-Tiny-Partial-Fingerprints.git
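The keypoint-guided inpainting described above can be sketched, under assumptions, as the standard composition trick used by inpainting diffusion samplers (e.g. RePaint): at each reverse step, the region covered by the mask is overwritten with a correspondingly noised copy of the known pixels, while the region to be synthesized keeps the model's sample. The function name and flat per-pixel list representation are illustrative, not the paper's code:

```python
import math
import random

def inpaint_step(x_sampled, x_known, mask, alpha_bar_t, rng=random):
    """One composition step of mask-guided inpainting diffusion (sketch).
    x_sampled : the model's reverse-diffusion sample at this step
    x_known   : the known image (e.g. region around feature keypoints)
    mask      : 1 where pixels are known/kept, 0 where they are synthesized
    alpha_bar_t : cumulative signal fraction at the current noise level"""
    out = []
    for x_s, x_k, m in zip(x_sampled, x_known, mask):
        # Forward-diffuse the known pixel to the current noise level,
        # so both regions share consistent noise statistics.
        noised_known = (math.sqrt(alpha_bar_t) * x_k
                        + math.sqrt(1.0 - alpha_bar_t) * rng.gauss(0.0, 1.0))
        out.append(m * noised_known + (1.0 - m) * x_s)
    return out
```

With an all-zero mask the step reduces to ordinary unconditional sampling, and with an all-one mask at zero noise it reproduces the known pixels exactly, which is how the method can preserve keypoint features while synthesizing the rest.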