Diffusion Models for Fingerprint Generation
Diffusion-driven fingerprint / latent fingerprint synthesis (end-to-end and latent-space generation)
These works all adopt diffusion models as the core generative framework, focusing on end-to-end or latent-space diffusion approaches to fingerprint (especially latent fingerprint) synthesis. Their common claim is that diffusion models outperform traditional GANs in distribution coverage, training stability, and realism, and they evaluate the generated images against the imaging quality of real fingerprints and latent fingerprints.
- Fingerprint Synthesis from Diffusion Models and Generative Adversarial Networks(Weizhong Tang, Diego Andre Figueroa Llamosas, Donglin Liu, K. Johnsson, A. Sopasakis, 2025, Lecture Notes in Networks and Systems)
- Diffusion Probabilistic Model Based End-to-End Latent Fingerprint Synthesis(Kejian Li, Xiao Yang, 2023, 2023 IEEE 4th International Conference on Pattern Recognition and Machine Learning (PRML))
- Exploring Latent Fingerprint Synthesis with Diffusion Probabilistic Models(Jingqiao Wang, Zicheng Zhang, Congying Han, 2024, Lecture Notes in Networks and Systems)
Inpainting-diffusion synthesis and data augmentation for tiny partial fingerprints
This paper proposes a synthetic-data augmentation pipeline dedicated to tiny/partial fingerprints; the key method is inpainting diffusion guided by feature-keypoint masks. The common thread is using diffusion to address data scarcity while preserving local-region information, and using the generated data to improve generalization and matching performance on downstream tasks (e.g., denoising, deblurring, and deepfake detection).
- Inpainting Diffusion Synthetic and Data Augment With Feature Keypoints for Tiny Partial Fingerprints(Mao-Hsiu Hsu, Yung-Ching Hsu, Ching-Te Chiu, 2025, IEEE Transactions on Biometrics, Behavior, and Identity Science)
Text-conditional / multimodal controllable diffusion frameworks for biometric image generation
This group centers on conditional generation with controllable diffusion: text / biometric-language prompts drive cross-modal (multimodal) biometric image synthesis, with emphasis on how consistency with biology- and recognition-related conditions affects generation quality and identity relevance. The common point is the focus on controllable conditions (prompts) and multimodal data-generation capability.
- Controllable Diffusion Model for Generating Multimodal Biometric Images(Q. Nguyen, Hakil Kim, 2025, 2025 IEEE Conference on Artificial Intelligence (CAI))
Surveys of controllable diffusion generation and cross-modal biometric transfer (fingerprint method lineage; controllable frameworks for palmprints and creases)
These works form two parallel threads: a methodological lineage and cross-biometric controllable diffusion generation. The first is a survey of fingerprint generation methods that systematically compares GANs with diffusion models and summarizes the advantages of controllable diffusion; the other two extend controllable diffusion to other modalities (palmprints and forehead creases), introducing identity-consistency / detail-fidelity losses and structural modeling, thereby offering transferable controllable-generation ideas for diffusion-based fingerprint and fingerprint-related biometric generation.
- Fingerprint Image Generation Using Deep Generative Models: From GANs to Diffusion Models(Boyu Zheng, 2026, ITM Web of Conferences)
- PalmDiff: When Palmprint Generation Meets Controllable Diffusion Model(Long Tang, Tingting Chai, Zheng Zhang, Miao Zhang, Xiangqian Wu, 2025, IEEE Transactions on Image Processing)
- CreaseGen: Generating realistic forehead-crease via B-splines, reinforcement learning and diffusion modeling(Abhishek Tandon, Ashutosh Sharma, Geetanjali Sharma, Gaurav Jaswal, Raghavendra Ramachandra, Aditya Nigam, 2025, Information Fusion)
Overall, the literature falls into four main threads: (1) fingerprint / latent fingerprint synthesis with (end-to-end or latent-space) diffusion models; (2) inpainting-diffusion synthesis and data augmentation for tiny partial fingerprints to mitigate data scarcity; (3) controllable diffusion for multimodal biometric image generation via text conditioning and similar mechanisms; (4) starting from the fingerprint-generation method lineage and transferring controllable diffusion to palmprints, forehead creases, and other modalities, yielding general design insights on identity consistency and detail fidelity.
8 related papers in total
Inpainting Diffusion Synthetic and Data Augment With Feature Keypoints for Tiny Partial Fingerprints
The advancement of fingerprint research within public academic circles has been trailing behind facial recognition, primarily due to the scarcity of extensive publicly available datasets, despite fingerprints being widely used across various domains. Recent progress has seen the application of deep learning techniques to synthesize fingerprints, predominantly focusing on large-area fingerprints within existing datasets. However, with the emergence of AIoT and edge devices, the importance of tiny partial fingerprints has been underscored for their faster and more cost-effective properties. Yet, there remains a lack of publicly accessible datasets for such fingerprints. To address this issue, we introduce publicly available datasets tailored for tiny partial fingerprints. Using advanced generative deep learning, we pioneer diffusion methods for fingerprint synthesis. By combining random sampling with inpainting diffusion guided by feature-keypoint masks, we enhance data augmentation while preserving key features, achieving up to a 99.1% recognition matching rate. To demonstrate the usefulness of the fingerprint images generated by our approach, we conducted experiments involving model training for various tasks, including denoising, deblurring, and deep forgery detection. The results showed that models trained with our generated datasets outperformed those trained without our datasets or with other synthetic datasets. This indicates that our approach not only produces diverse fingerprints but also improves the model’s generalization capabilities. Furthermore, our approach ensures confidentiality without compromise by partially transforming randomly sampled synthetic fingerprints, which reduces the likelihood of real fingerprints being leaked. The total number of generated fingerprints published in this article amounts to 818,077. Moving forward, we will continue to release updates to contribute to the advancement of the tiny partial fingerprint field.
The code and our generated tiny partial fingerprint dataset can be accessed at https://github.com/Hsu0623/Inpainting-Diffusion-Synthetic-and-Data-Augment-with-Feature-Keypoints-for-Tiny-Partial-Fingerprints.git
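The keypoint-guided inpainting idea above can be sketched as a masked reverse-diffusion step: the region to synthesize comes from the model, while pixels of the real partial fingerprint are kept (forward-diffused to the current noise level) outside the mask. This is only an illustrative sketch, not the authors' code; `denoise_fn` and the flat-list image layout are simplifying assumptions.

```python
import math
import random

def inpaint_step(x_t, x_known, mask, alpha_bar_t, denoise_fn, rng):
    """One masked-inpainting reverse step (illustrative sketch).

    Images are flat lists of floats. mask[i] == 1 marks pixels to
    synthesize (e.g. around feature keypoints); mask[i] == 0 keeps
    the real partial-fingerprint pixel.
    """
    x_gen = denoise_fn(x_t)  # the model's proposal for the whole image
    out = []
    for i in range(len(x_t)):
        # Forward-diffuse the known pixel to the current noise level t.
        known_t = (math.sqrt(alpha_bar_t) * x_known[i]
                   + math.sqrt(1.0 - alpha_bar_t) * rng.gauss(0.0, 1.0))
        # Stitch: generated content inside the mask, real content outside.
        out.append(mask[i] * x_gen[i] + (1.0 - mask[i]) * known_t)
    return out
```

Repeating this stitch at every reverse step keeps the preserved ridge region consistent with the synthesized region, which is the property the paper relies on for feature preservation.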
This paper reviews recent progress in fingerprint image generation using deep generative models, with a focus on Generative Adversarial Network (GAN)-based and diffusion-based approaches. Fingerprint data are essential for biometric recognition, but collecting large and diverse datasets is difficult, especially for latent fingerprints. Early methods based on physical or statistical modeling could not produce realistic textures or sufficient diversity. With the development of deep learning, GAN-based models such as FingerGAN, PrintsGAN, and lightweight GANs have significantly improved the realism of generated fingerprints by learning data distributions directly. These methods introduce techniques such as structural constraints, multi-stage generation, and improved loss functions to enhance image quality and stability. However, GAN-based models still suffer from problems such as training instability, mode collapse, and limited control over identity consistency. To address these issues, diffusion models have recently been introduced into fingerprint generation. By gradually denoising random noise, diffusion models can generate high-quality and diverse fingerprint images with better stability. Advanced diffusion frameworks further enable controllable generation, allowing users to adjust fingerprint attributes such as style, quality, and sensor type while preserving identity information. Overall, diffusion-based methods show strong potential to become the next generation of fingerprint synthesis techniques.
Fingerprints have been crucial evidence for law enforcement agencies for a long time. Though rapidly developing deep learning has dramatically improved the performance of latent fingerprint recognition algorithms, a fully automated latent fingerprint identification system is still far from meeting actual needs. One major issue is the lack of publicly available latent fingerprint databases. Recently, diffusion probabilistic models have emerged as state-of-the-art deep generative methods for image synthesis. These models have better distribution coverage and less mode collapse than the popular Generative Adversarial Networks. In this paper, we propose an end-to-end latent fingerprint synthesis approach based on the improved denoising diffusion probabilistic model. The proposed approach can simultaneously generate latent, rolled, and plain fingerprints of high visual realism. Several primary degradation factors, such as various background textures, limited area of ridge patterns, and structural noise, can be directly generated without any postprocessing, unlike existing methods. We conduct NFIQ2 and perceptual analysis in the experiments to evaluate the proposed approach. The results indicate that the quality and visual realism of the proposed synthetic fingerprints are similar to those of natural ones, demonstrating the effectiveness of our approach.
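The training objective underlying DDPM-style approaches like this one is the standard noise-prediction loss: forward-diffuse a clean image, then train a network to recover the injected noise. A minimal per-step sketch (pure Python with pixel lists instead of tensors; `eps_model` is a stand-in for the trained network, not the paper's architecture):

```python
import math
import random

def ddpm_training_loss(x0, t, alpha_bars, eps_model, rng):
    """Single-step DDPM noise-prediction objective (sketch).

    Forward process per pixel: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps,
    where abar_t is the cumulative product of the noise schedule. The
    network eps_model is trained to recover eps from (x_t, t).
    """
    abar = alpha_bars[t]
    eps = [rng.gauss(0.0, 1.0) for _ in x0]          # injected noise
    x_t = [math.sqrt(abar) * v + math.sqrt(1.0 - abar) * e
           for v, e in zip(x0, eps)]                 # noised image
    eps_hat = eps_model(x_t, t)                      # network prediction
    # Mean squared error between true and predicted noise.
    return sum((e - eh) ** 2 for e, eh in zip(eps, eps_hat)) / len(x0)
```

A perfect predictor drives this loss to zero; sampling then runs the learned denoiser in reverse from pure noise, which is where the claimed stability and distribution-coverage advantages over GANs come from.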
High-quality, diversified, and large-scale datasets are crucial for creating reliable deep-learning models for biometric applications. Unfortunately, there is a shortage of well-labeled data. This paper introduces a text-conditional biometric imaging generation framework, addressing the complexities associated with multi-modality considerations. The proposed framework harnesses cutting-edge diffusion probabilistic models to produce multi-modal biometric images at high resolutions, seamlessly aligning with biometric language prompts. The experimental results unequivocally validate the efficacy of the proposed framework in generating a diverse array of highly realistic synthetic biometric images while consistently maintaining a commendable level of fidelity when juxtaposed with their respective reference datasets. The contributions of this study offer substantial potential for propelling advancements in biometric imaging research.
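Text-conditional diffusion samplers commonly realize prompt alignment via classifier-free guidance; the abstract does not state this framework's exact conditioning mechanism, so the following is only the generic combination rule, shown for illustration:

```python
def cfg_noise_estimate(eps_cond, eps_uncond, guidance_scale):
    """Classifier-free guidance mix (a standard text-conditioning
    technique; the paper's actual scheme may differ).

    eps_cond / eps_uncond are the model's noise estimates with and
    without the prompt. guidance_scale > 1 pushes samples toward the
    prompt; 0 ignores it; 1 is plain conditional sampling.
    """
    return [u + guidance_scale * (c - u)
            for c, u in zip(eps_cond, eps_uncond)]
```

The scale trades diversity against prompt fidelity, which is the knob behind "seamlessly aligning with biometric language prompts" in practice.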
Due to its distinctive texture and intricate details, palmprint has emerged as a critical modality in biometric identity recognition. The absence of large-scale public palmprint datasets has substantially impeded the advancement of palmprint research, resulting in inadequate accuracy in commercial palmprint recognition systems. However, existing generative methods exhibit insufficient generalization, as the images they generate differ in specific ways from the conditioning images. This paper proposes a method for generating palmprint images using a controllable diffusion model (PalmDiff), which addresses the issue of insufficient datasets by generating palmprint data, improving the accuracy of palmprint recognition. We introduce a diffusion process that effectively tackles the problems of excessive noise and loss of texture details commonly encountered in diffusion models. A linear attention mechanism is employed to enhance the backbone’s expressive capacity and reduce computational complexity. We further propose an ID loss function that enables the diffusion model to consistently generate palmprint images within the same identity space. PalmDiff is compared with other generation methods in terms of both image quality and the enhancement of palmprint recognition performance. Experiments show that PalmDiff performs well in image generation, with an FID score of 13.311 on MPD and 18.434 on Tongji. Besides, PalmDiff has significantly improved various backbones for palmprint recognition compared to other generation methods.
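The ID loss mentioned above is typically a cosine-style identity-consistency term computed between recognition-network embeddings of the generated and reference images. PalmDiff's exact formulation is not given in the abstract, so the sketch below only shows the common shape of such a loss:

```python
import math

def id_loss(emb_generated, emb_reference, eps=1e-8):
    """Identity-consistency loss sketch: one minus the cosine
    similarity of the two embedding vectors (plain Python lists).

    0 means the generated sample lands on the same identity as the
    reference; 2 means the embeddings point in opposite directions.
    """
    dot = sum(g * r for g, r in zip(emb_generated, emb_reference))
    ng = math.sqrt(sum(g * g for g in emb_generated))
    nr = math.sqrt(sum(r * r for r in emb_reference))
    return 1.0 - dot / (ng * nr + eps)  # eps guards against zero norms
```

Adding such a term to the diffusion objective is what ties generated samples to a fixed identity, the property the CreaseGen and fingerprint papers also pursue under "identity consistency".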
We introduce CreaseGen, a novel trait-specific image synthesis framework for forehead-creases biometrics that integrates geometric modeling and reinforcement learning with diffusion-…