CLIP4Sketch: Enhancing Sketch to Mugshot Matching through Dataset Augmentation using Diffusion Models (2024)

Kushal Kumar Jain
IIIT-Hyderabad
Gachibowli, Hyderabad
kushal.kumar@research.iiit.ac.in
  Steven Grosz
Michigan State University
East Lansing, MI
groszste@msu.edu
  Anoop M. Namboodiri
IIIT-Hyderabad
Gachibowli, Hyderabad
anoop@iiit.ac.in
  Anil K. Jain
Michigan State University
East Lansing, MI
jain@msu.edu

Abstract

Forensic sketch-to-mugshot matching is a challenging task in face recognition, primarily hindered by the scarcity of annotated forensic sketches and the modality gap between sketches and photographs. To address this, we propose CLIP4Sketch, a novel approach that leverages diffusion models to generate a large and diverse set of sketch images, which helps in enhancing the performance of face recognition systems in sketch-to-mugshot matching. Our method utilizes Denoising Diffusion Probabilistic Models (DDPMs) to generate sketches with explicit control over identity and style. We combine CLIP and AdaFace embeddings of a reference mugshot, along with textual descriptions of style, as the conditions to the diffusion model. We demonstrate the efficacy of our approach by generating a comprehensive dataset of sketches corresponding to mugshots and training a face recognition model on our synthetic data. Our results show significant improvements in sketch-to-mugshot matching accuracy over training on an existing, limited amount of real face sketch data, validating the potential of diffusion models in enhancing the performance of face recognition systems across modalities. We also compare our dataset with datasets generated using GAN-based methods to show its superiority.

1 Introduction

Facial recognition technology has witnessed remarkable advancements in recent years [7, 18, 19], yet challenges persist in accurately matching hand-drawn forensic sketches to corresponding mugshot photographs. This problem holds critical significance across various domains, including law enforcement, surveillance, and forensic investigations. Forensic sketch-to-mugshot matching presents inherent challenges stemming from the substantial discrepancy between these two data modalities. Forensic sketches, which are often incomplete artistic interpretations based on witness descriptions, exhibit stark differences from photographic mugshots captured under controlled conditions. Traditional approaches to this problem have primarily relied on discriminative frameworks that attempt to learn a shared subspace or extract specific features to enhance similarity between sketches and photos of the same identity while increasing dissimilarity across different identities [21, 22]. However, the efficacy of such methods has been fundamentally limited by the scarcity of annotated forensic sketch data available for training and evaluation. This lack of sketch data has hampered the ability to assess method performance properly and has limited the potential of local feature-based discriminant analysis techniques and recognition systems. Recently, growing concerns regarding the legal and ethical implications of using authentic data in facial recognition (FR) training, coupled with the practical challenges in assembling large, diverse face datasets, have spurred research into the potential of synthetic data as a viable alternative to privacy-sensitive real-world face data [3].
To address this data scarcity, we propose a novel approach that leverages diffusion models to generate sketch images with large variations, enhancing the performance of face recognition systems in sketch-to-mugshot matching. Diffusion models, characterized by their ability to generate high-quality samples from complex data distributions, offer a promising avenue for bridging the gap between sketches and photographs. Diffusion models [15, 40, 8] have made significant progress in text-guided image generation [35, 38, 36], which has helped them garner much attention in the last few years. Before the advent of diffusion models, face sketch generation techniques predominantly employed Generative Adversarial Networks (GANs) [13] trained on available datasets of photo-sketch pairs. However, GANs are susceptible to issues such as mode collapse, training instability, and limited generalization beyond the training data distribution, making them less suitable for the personalized generation of synthetic sketch datasets from mugshot images.
Our approach to matching forensic sketches and mugshots uses traditional face recognition models trained on synthetic data generated using diffusion models. Inspired by IP-Adapter [45] and other similar approaches [23], we guide our generation with explicit control over both identity and style by using mugshot image embeddings as prompts. We aggregated data from multiple publicly available sketch datasets [9, 43, 22, 31] and integrated ControlNet for spatial control and CLIP, along with AdaFace, for identity control. We call this generation pipeline CLIP4Sketch.
In a field where accuracy is paramount, having control over the generated sketches is essential. CLIP4Sketch offers a solution to this issue, providing researchers with a powerful tool for improving face recognition technology. The CLIP4Sketch pipeline leverages the robust feature extraction capabilities of the CLIP encoder and the high-fidelity embeddings of AdaFace to create a diverse set of sketch and mugshot image pairs. This synthetic dataset was then employed to finetune an AdaFace [18] model, optimizing it to improve performance in sketch-to-face matching. Through rigorous training and testing, we validated the effectiveness of our dataset in enhancing the accuracy and reliability of the face recognition model. The results show the potential of our approach in significantly advancing the field of sketch-to-face recognition, demonstrating the practical benefits of the CLIP4Sketch pipeline in real-world applications. The contributions of this work can be summarised as follows:

[Figure 1]

2 Related Works

In this section, we describe a few traditional methods for face-to-sketch matching and sketch generation. Sketch recognition has been a subject of extensive research, but forensic sketches present larger modality gaps due to inherent errors made by witnesses and the lack of sufficient data. Methods for photo-to-sketch matching can be broadly divided into two categories: discriminative or generative. Discriminative methods try to reduce the modality gap between photos and sketches by learning a shared subspace or extracting specific features that enhance the similarity within the same identity while increasing dissimilarity between different identities. Generative methods try to generate sketches from photos or vice versa and then use traditional face recognition algorithms to solve the problem. Most of these methods suffer from insufficient data and hence are unable to perform reliably on other datasets.

2.1 Photo to Sketch Matching

Traditionally, photo-to-sketch matching has relied on hand-crafted local features followed by dimensionality reduction and discriminant analysis. A few such methods are Partial Least Squares (PLS) [39], Coupled Information-Theoretic Projection (CITP) [49], Local Feature-based Discriminant Analysis (LFDA) [21], and Canonical Correlation Analysis (CCA) [29]. Additionally, component-based representation techniques [14], the Self-Similarity Descriptor (SSD) dictionary [26], the Multi-scale Circular Weber's Local Descriptor (MCWLD), and other similar approaches [2, 32] have demonstrated efficacy in addressing sketch recognition challenges. More recently, researchers have used deep learning for feature extraction, which has improved results [33, 11, 17, 1, 27] and spawned hybrid approaches like [44] and [6].

2.2 Generative Approaches

Converting images from one modality to another, such as from sketches to photographs, through generative methods has been a common strategy in photo-sketch recognition tasks. By transforming sketches into photos before matching, applying a straightforward homogeneous face recognition method becomes feasible. Tang and Wang [41, 43] notably pioneered exemplar-based face sketch synthesis, leveraging a linear transformation assumption between face photos and sketches, using Markov random field models and, later, local linear embedding (LLE) [24]. In [30], the authors exploit Gaussian Process regression to deal with both the memory gap and the modality gap. However, the efficacy of these approaches in face recognition hinges significantly on the quality of their synthesis results.
Following the development of Generative Adversarial Networks (GANs) [13] and cGANs [25], Isola et al. proposed a novel method for general image-to-image translation using cGANs [16, 50], which gave rise to many sketch generation methods that use GANs. Recent methods like [12] use Dynamic Normalization (DySPADE) in the generator architecture along with depth maps to supervise the generation. Another recent method, Semi-Cycle-GAN (SCG) [5], proposed a semi-supervised approach with a noise-injection strategy to overcome the challenge of missing features in CycleGAN outputs.
In APDrawingGAN, Yi et al. [47] used dedicated GANs to generate difficult-to-sketch features such as eyes, nose, and lips. FSGAN [9] extends their approach and introduces a new dataset called FS2K, which has three styles and paired sketch examples. We use this dataset for training and comparison with other methods. In modern approaches like [4], the authors use CLIP [34] along with a geometry-preserving loss to achieve a sketch style that respects the scene's geometry.

3 Methodology

In this section, we describe our approach for generating a sketch dataset using diffusion models. Our methodology leverages adapters like ControlNet and IP-Adapter for latent diffusion models to create high-quality, diverse sketch images from mugshot photographs. The core of our approach, which we call CLIP4Sketch, combines the strengths of CLIP and AdaFace embeddings to ensure that generated sketches maintain the identity of the input image while allowing for stylistic variations guided by textual prompts. We begin by outlining the fundamental principles of diffusion models, then detail our specific implementation for sketch generation, including how we incorporate identity preservation and style control into the generation process.

3.1 An Outline of Diffusion Models

A T2I diffusion model consists of two key elements: the diffusion backbone ($\epsilon_\theta$) and the text encoder ($c_\theta$). The diffusion backbone undertakes a systematic noise reduction process. Diffusion models operate by sampling noise from a distribution and then iteratively denoising the sample until reaching a final denoised sample, denoted as $x_0$. Each step in this process corresponds to a certain noise level, with $x_t$ representing a blend of signal and noise, where the signal-to-noise ratio is determined by the timestep $t$. The textual input ($y$) undergoes tokenization and index-based lookup, connecting it to text embeddings ($F_{txt}$) represented as a sequence of vectors. These embeddings are then enhanced by the text encoder, which contextualizes the information, producing refined text embeddings ($E_{txt}$) that capture the essence, intention, and nuances of the textual description. Typically, T2I diffusion models utilize cross-attention layers to leverage the semantic information within $E_{txt}$. The objective function of T2I diffusion models is formulated as:

$$\mathcal{L} = \mathbb{E}_{x, y, \epsilon, t}\left[\,\big\|\,\epsilon_\theta\big(x_t, t, c_\theta(F_{txt})\big) - \epsilon\,\big\|_2^2\,\right]$$
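For concreteness, the following is a minimal PyTorch sketch of this epsilon-prediction objective for a single training step. The backbone `eps_theta` and text encoder `c_theta` are placeholder callables, and the noise schedule is passed in as precomputed cumulative products of the $\alpha_t$, so this illustrates the generic DDPM loss rather than our exact implementation.

```python
import torch
import torch.nn.functional as F

def diffusion_training_loss(eps_theta, c_theta, x0, text_tokens, alphas_cumprod):
    """One step of the epsilon-prediction loss
    L = E[ || eps_theta(x_t, t, c_theta(F_txt)) - eps ||_2^2 ]."""
    b = x0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=x0.device)
    eps = torch.randn_like(x0)

    # Forward (noising) process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
    abar_t = alphas_cumprod[t].view(b, 1, 1, 1)
    x_t = abar_t.sqrt() * x0 + (1.0 - abar_t).sqrt() * eps

    # Contextualized text embeddings E_txt condition the backbone via cross-attention.
    e_txt = c_theta(text_tokens)
    eps_pred = eps_theta(x_t, t, e_txt)
    return F.mse_loss(eps_pred, eps)
```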

3.2 Face-ID Preservation

Various strategies have emerged to preserve and personalize the facial identity in images generated by diffusion models. While some techniques, such as Textual Inversion [10] and DreamBooth [37], necessitate fine-tuning for each new concept, others like IP-Adapter [45] and PhotoMaker [23] offer the advantage of producing identity-consistent generations for multiple subjects without the need for inference-time fine-tuning.
IP-Adapter and PhotoMaker distinguish themselves by embedding the identity of input reference images into the diffusion process through cross-attention layers. This mechanism directs the diffusion model to generate images consistent with the identities portrayed in the reference images, using the reference image embedding as a condition. However, during empirical evaluation, we found that IP-Adapter lacked the nuanced spatial control necessary to preserve facial identity adequately. For this reason, we use a ControlNet [48] architecture along with an image prompt to achieve more fine-grained spatial control. ControlNet is a neural network structure designed to introduce conditional control to large pretrained text-to-image diffusion models. It leverages source models like LDM and reuses their deep encoding layers [36, 48] to construct a robust encoder capable of learning specific conditions such as Canny edge maps, depth maps, and so on. We use a Canny edge detector to generate an edge map of the mugshot, which we pass to the diffusion pipeline through the ControlNet adapter.
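To make the spatial-conditioning step concrete, here is a hedged sketch using the open-source diffusers library with a Canny-conditioned ControlNet; the checkpoint names, Canny thresholds, and prompt are illustrative assumptions, and the identity conditioning described in Section 3.3 is omitted here.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

def canny_condition(mugshot_path: str, low: int = 100, high: int = 200) -> Image.Image:
    """Extract a Canny edge map of the mugshot to use as the ControlNet condition."""
    gray = cv2.imread(mugshot_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, low, high)
    return Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel edge image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

edge_map = canny_condition("mugshot.jpg")
sketch = pipe("a hand drawn sketch of a person", image=edge_map,
              num_inference_steps=30).images[0]
sketch.save("generated_sketch.png")
```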

3.3 CLIP4Sketch Pipeline

The CLIP4Sketch pipeline begins by taking a mugshot image, denoted as $X_m$, and processing it through a series of steps to generate a corresponding sketch while preserving the identity and introducing various styles. We take inspiration from the decoupled cross-attention layer design of IP-Adapter [45] to synthesize sketches with fine-grained control over both the identity and stylistic attributes.
We start by encoding the input mugshot image using a pre-trained CLIP model, referred to as $E_{\text{clip}}$, which provides an embedding that captures high-level semantic information. Next, to ensure that the identity of the individual in the mugshot is preserved in the generated sketches, the pipeline incorporates AdaFace, a robust face recognition model, which we denote as $E_{\text{fr}}$. The embedding obtained from AdaFace, $E_{\text{fr}}(X_m)$, refines the generated sketch to align closely with the identity characteristics of the input mugshot. The pipeline then concatenates the embeddings from CLIP and AdaFace, resulting in a combined embedding vector, denoted as $C$. We observed empirically that using either of these embeddings by itself does not yield good results; by combining the embeddings from both CLIP and AdaFace, we ensure that the sketches maintain critical identity features along with the style mentioned in the textual prompts. This can be represented as:

$$C = \left[E_{\text{clip}}(X_m),\ E_{\text{fr}}(X_m)\right]$$

These combined embeddings are transformed to the same latent space as the textual embeddings using a Projection Network ($P$). For style variations, the pipeline utilizes textual captions to guide the stylistic output of the generator; example captions include "a software generated sketch of a person" and "a hand drawn sketch of a person". These captions are processed into textual embeddings using the CLIP model's text encoder, denoted as $E_{\text{text}}$. The textual embeddings, $E_{\text{text}}(\text{prompts})$, are appended to the projected combined embeddings ($P(C)$) and passed through the cross-attention layers of our generator. Let $\phi$ denote the weights of the image cross-attention layers and $\psi$ the weights of the text cross-attention layers, while $\theta$ denotes the combined weights of the T2I diffusion model. We keep $\psi$ and $\theta$ frozen and train only $\phi$ and our projection network $P$. The diffusion model, which is responsible for generating the sketches, uses these final embeddings as its condition. We also provide a Canny edge map to the model as the ControlNet input.
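Under our reading of this description, the identity-conditioning path can be sketched as below; the embedding dimensions, number of identity tokens, and the projection architecture are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class ProjectionNetwork(nn.Module):
    """Maps the concatenated CLIP + AdaFace embedding C into a few tokens living in
    the text-embedding space, so they can be attended to alongside the prompt."""
    def __init__(self, clip_dim=768, fr_dim=512, token_dim=768, num_tokens=4):
        super().__init__()
        self.num_tokens, self.token_dim = num_tokens, token_dim
        self.proj = nn.Sequential(
            nn.Linear(clip_dim + fr_dim, token_dim * num_tokens),
            nn.GELU(),
            nn.Linear(token_dim * num_tokens, token_dim * num_tokens),
        )

    def forward(self, clip_emb, fr_emb):
        c = torch.cat([clip_emb, fr_emb], dim=-1)        # C = [E_clip(X_m), E_fr(X_m)]
        tokens = self.proj(c)
        return tokens.view(-1, self.num_tokens, self.token_dim)

# Usage: identity tokens are appended to the prompt embeddings before cross-attention.
P = ProjectionNetwork()
clip_emb = torch.randn(1, 768)      # E_clip(X_m) from a frozen CLIP image encoder
fr_emb = torch.randn(1, 512)        # E_fr(X_m) from a frozen AdaFace model
text_emb = torch.randn(1, 77, 768)  # E_text(prompt) from the CLIP text encoder
condition = torch.cat([text_emb, P(clip_emb, fr_emb)], dim=1)  # fed to cross-attention
```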
By inputting different textual descriptions, the cross-attention layers within the diffusion model adjust the sketch generation process to reflect the desired style. This flexibility enables the creation of diverse sketches from the same mugshot, each with unique stylistic elements while preserving the underlying identity. Through this process, CLIP4Sketch effectively bridges the modality gap between photographs and sketches, allowing us to generate a large sketch dataset. Using the proposed CLIP4Sketch pipeline, we generated a synthetic dataset of 245,376 sketches covering 27,264 identities in four different styles. This synthetic data was crucial in enhancing our face recognition model's ability to generalize across different sketch styles. Our train-test split strategy ensured that the training set was comprehensive and diverse while the test set accurately reflected real-world scenarios, thereby improving the robustness of our results. In the next few sections, we analyse the potential of our dataset in enhancing the performance of face recognition systems in forensic sketch-to-mugshot matching tasks.
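As a rough illustration of how such a dataset could be assembled, the loop below sweeps one style prompt per sketch for each mugshot; only the first two prompts are quoted in the paper, and the remaining prompts and the `generate_sketch` interface are hypothetical stand-ins for the CLIP4Sketch pipeline.

```python
from pathlib import Path

STYLE_PROMPTS = [
    "a software generated sketch of a person",   # quoted in the paper
    "a hand drawn sketch of a person",           # quoted in the paper
    "a pencil sketch of a person",               # placeholder style prompt
    "a charcoal sketch of a person",             # placeholder style prompt
]

def build_sketch_dataset(mugshot_dir: str, out_dir: str, generate_sketch):
    """For every mugshot, generate one sketch per style prompt and save it."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for mugshot in sorted(Path(mugshot_dir).glob("*.jpg")):
        for k, prompt in enumerate(STYLE_PROMPTS):
            sketch = generate_sketch(mugshot, prompt)     # CLIP4Sketch call (stand-in)
            sketch.save(out / f"{mugshot.stem}_style{k}.png")
```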

[Figure 2]
Table 1: Summary of the sketch datasets used for training and testing.

| Dataset | Description | Train/Test | No. of Images | No. of Identities |
| --- | --- | --- | --- | --- |
| CUHK [43] | Hand drawn viewed sketches. | Train | 3,600 | 1,800 |
| PRIP-Composites [22] | Viewed and non-viewed sketch composites with mugshot mates. Contains software generated and hand drawn sketches. | Train | 1,112 | 556 |
| FS2K [9] | Artistic sketch pairs in 3 styles. | Train | 4,208 | 2,104 |
| TUFTS [31] | Face database for computerized sketches. | Test | 224 | 112 |
| IIIT-D [2] | Forensic and semi-forensic pairs of sketches. | Test | 144 | 72 |
| WildSketch [28] | Artistic photo-sketch pairs. | Test | 1,492 | 796 |
| Our Synthetic Dataset | Mugshots and sketches in 4 styles. | Train | 245,376 | 27,264 |

[Figure 3]

[Figure 4]

[Figure 5]

4 Experiments

To evaluate the effectiveness of our CLIP4Sketch approach and the quality of our synthetically generated sketch dataset, we conducted a series of comprehensive experiments. These experiments were designed to assess multiple aspects of our method, including the realism of the generated sketches, the impact of increasing synthetic data on model performance, and comparisons with existing GAN-based approaches. We also investigated the model’s performance in both open-set and closed-set scenarios to provide a thorough assessment of its capabilities. Throughout our experiments, we used various datasets for training and testing, including our synthetic dataset and several publicly available sketch and mugshot datasets. The following subsections detail our experimental setup, methodologies, and findings, providing insights into the strengths and limitations of our approach in enhancing sketch-to-mugshot matching performance.

4.1 Implementation Details

We conducted our experiments by finetuning pretrained AdaFace models using different datasets. We preprocess the data by cropping and aligning faces, resulting in 112 × 112 images. The three augmentations proposed in AdaFace [18] were also applied with a probability of 0.2. We use the Adam optimizer [20] with a decaying learning rate strategy and finetune the model for 10 epochs with a batch size of 64. The training was done on 8 A6000 GPUs.
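A hedged sketch of this finetuning setup is given below; the learning-rate value, the decay schedule, and the model's loss interface are placeholders, since only the optimizer, epoch count, batch size, and input size are specified above.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader

def finetune(model, train_dataset, epochs=10, batch_size=64, lr=1e-4, device="cuda"):
    """Finetune a pretrained recognition model on 112x112 aligned face crops."""
    loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=8)
    optimizer = Adam(model.parameters(), lr=lr)
    scheduler = StepLR(optimizer, step_size=4, gamma=0.1)   # simple decaying-LR strategy
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            loss = model.compute_loss(images, labels)       # margin-based FR loss (stand-in)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```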
Datasets: We use multiple sketch and mugshot datasets to train our sketch generation model; a summary of the datasets used can be found in Table 1. The publicly available datasets included the CUHK Face Sketch Database (CUHK) [43], PRIP-Composites [22], FS2K [9], the TUFTS Face Database [31], the IIIT-D Forensic Sketch Database [2], and WildSketch [28]. The CUHK, PRIP-Composites, and FS2K datasets, comprising a total of 8,912 sketches representing 4,460 unique identities, were used exclusively for training the CLIP4Sketch pipeline. For testing the FR system, we employed the TUFTS Face Database, IIIT-D Forensic Sketch Database, and WildSketch, collectively offering 1,860 sketches of 980 identities, to provide a realistic evaluation of our model's performance.

Table 2: TAR (%) @ FAR = 0.1% of AdaFace models finetuned on increasing proportions of the synthetic dataset.

| Model & Dataset | CASIA | WildSketch | IIIT-D Sketches | TUFTS |
| --- | --- | --- | --- | --- |
| Pretrained AdaFace | 94.10 | 66.10 | 55.80 | 14.41 |
| AdaFace finetuned on 25% of synthetic data | 53.20 ± 1.5 | 82.70 ± 2.28 | 44.80 ± 5.60 | 18.44 ± 3.50 |
| AdaFace finetuned on 50% of synthetic data | 46.00 ± 3.3 | 84.03 ± 1.25 | 49.61 ± 7.70 | 19.11 ± 4.30 |
| AdaFace finetuned on 75% of synthetic data | 39.40 ± 1.5 | 86.67 ± 1.83 | 60.5 ± 5.40 | 19.40 ± 3.68 |
| AdaFace finetuned on 100% of synthetic data | 35.10 | 85.88 | 58.80 | 20.75 |

Table 3: Closed-set and open-set identification performance of AdaFace models trained on different synthetic sketch datasets.

| Model & Dataset | Closed-set Rank-1 (%) | Open-set FNIR (%) @ FPIR = 2% |
| --- | --- | --- |
| Pretrained AdaFace | 72 | 51 |
| AdaFace finetuned on Face-Sketch-SCG [5] | 14 | 97 |
| AdaFace finetuned on InformativeDrawings [4] | 59 | 88 |
| AdaFace finetuned on our dataset | 85 | 33 |

4.2 Realism of Generated Sketches

To validate the realism and quality of our CLIP4Sketch generated sketches, we conducted a comprehensive comparative analysis between the generated sketches and real forensic sketch images. This analysis focused on two key aspects: the distribution of similarity scores and feature space representation. Using a pre-trained AdaFace [18] network, we extracted features from both sets of sketches and calculated scores for genuine matches (sketches matched with their corresponding mugshots) and imposter matches (sketches matched with non-corresponding mugshots). Our analysis revealed that the score distributions of the generated sketches closely resemble those of the real forensic sketches, indicating a high level of realism in the synthetic images.
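The score computation itself is straightforward; a small sketch is given below, where the inputs are feature vectors from a pretrained recognition network and integer identity labels (an illustration of the analysis, not our exact evaluation code).

```python
import numpy as np

def genuine_impostor_scores(sketch_feats, mug_feats, sketch_ids, mug_ids):
    """Cosine similarities for all sketch-vs-mugshot pairs, split into genuine
    (same identity) and impostor (different identity) scores."""
    sketch_feats = sketch_feats / np.linalg.norm(sketch_feats, axis=1, keepdims=True)
    mug_feats = mug_feats / np.linalg.norm(mug_feats, axis=1, keepdims=True)
    scores = sketch_feats @ mug_feats.T
    same = sketch_ids[:, None] == mug_ids[None, :]
    return scores[same], scores[~same]
```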
Additionally, we use t-SNE [42] to visualize the similarity between generated and real sketches in the VGG latent space. Our t-SNE plots demonstrate that the clusters of generated sketches are highly similar to those of real sketches. This clustering behaviour indicates that our CLIP4Sketch-generated sketches share similar feature characteristics with real forensic sketches, further reinforcing our quantitative findings from the similarity score analysis. As shown in Figure 3, the t-SNE visualization effectively validates the realism of our DDPM-generated sketches, supporting their potential use in improving sketch-to-mugshot matching performance.
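The feature-space comparison can be reproduced along the following lines; the perplexity and plotting details are arbitrary choices for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(real_feats, gen_feats, out_path="tsne_sketches.png"):
    """Project real and generated sketch features to 2-D and plot them together."""
    feats = np.concatenate([real_feats, gen_feats], axis=0)
    coords = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(feats)
    n = len(real_feats)
    plt.scatter(coords[:n, 0], coords[:n, 1], s=8, label="real sketches")
    plt.scatter(coords[n:, 0], coords[n:, 1], s=8, label="generated sketches")
    plt.legend()
    plt.savefig(out_path, dpi=200)
```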

4.3 Effect of Size of Synthetic Dataset

To investigate the impact of increasing synthetic data on sketch-to-face matching performance, we conducted an experiment varying the proportion of synthetic data in our training dataset. We formed four training compositions: 25%, 50%, 75%, and 100% of the synthetic data, corresponding to 6,816, 13,632, 20,448, and 27,264 unique identities, respectively. We evaluated the impact of increasing synthetic data on both sketch-to-face matching and face-to-face matching performance. For face-to-face matching, we used the CASIA-WebFace test set [46] to assess how the increase in synthetic data affects the model's performance on traditional face recognition tasks.
Our quantitative analysis revealed interesting trends. As the proportion of synthetic data increased, we observed a general improvement in sketch-to-face matching performance across multiple test datasets (WildSketch, IIIT-D Sketches, and TUFTS). This suggests that the synthetic data effectively helps the model learn features that bridge the gap between the sketch and photo domains. However, we also noted a decline in face-to-face matching scores on the CASIA dataset as the proportion of synthetic data increased. This decrease highlights a trade-off: while finetuning with more synthetic data improves sketch-to-face matching, it may degrade the model's ability to recognize real faces in traditional face recognition tasks. These findings show the complex relationship between the amount of cross-modality data used during finetuning and general model performance. While synthetic sketch data is crucial for improving performance on specialized tasks like sketch-to-face matching, increasing its proportion may impact general face recognition capabilities. This suggests that in practical applications, careful consideration should be given to the balance of synthetic and real data in the training set, depending on the specific requirements of the task at hand.
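For reference, the TAR @ FAR = 0.1% figures in Table 2 can be computed from genuine and impostor score arrays roughly as follows; the thresholding convention is our assumption.

```python
import numpy as np

def tar_at_far(genuine, impostor, far=1e-3):
    """True accept rate at the score threshold yielding the requested false accept rate."""
    threshold = np.quantile(impostor, 1.0 - far)   # accept a pair if score >= threshold
    return float(np.mean(genuine >= threshold))
```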

[Figure 6]

[Figure 7]

[Figure 8]

4.4 Comparison with GAN-Generated Datasets

In this experiment, we compare the performance of the face recognition model trained on our dataset generated using the proposed CLIP4Sketch with models trained on datasets generated by InformativeDrawings [4] and Face-Sketch SCG [5]. We use their Anime-Sketch style for our evaluations. Specifically, we evaluate the models in both open-set and closed-set scenarios to assess their performance comprehensively. Our gallery consists of 10,000 mugshots, and the probes consist of the test datasets listed in Table 1. As shown in Table 3, our model trained on the DDPM-generated dataset outperformed the models trained on the GAN-generated datasets in both open-set and closed-set scenarios. Specifically, our model achieved an open-set FNIR of 33% and a closed-set Rank-1 accuracy of 85%, compared to 88% and 59%, respectively, for InformativeDrawings [4], and 97% and 14% for Face-Sketch SCG [5]. Though there is much scope for improvement, these results show the potential of synthetic datasets in cross-modal face recognition.
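For clarity, a hedged sketch of the two identification metrics in Table 3 is shown below; the scoring conventions, in particular the simplified FNIR definition that thresholds only the true-mate score, are our assumptions rather than the exact evaluation protocol.

```python
import numpy as np

def rank1_accuracy(probe_gallery_scores, probe_ids, gallery_ids):
    """Closed-set: fraction of probes whose top-scoring gallery entry shares their identity."""
    best = gallery_ids[np.argmax(probe_gallery_scores, axis=1)]
    return float(np.mean(best == probe_ids))

def fnir_at_fpir(mated_scores, nonmated_max_scores, fpir=0.02):
    """Open-set: choose the threshold giving the requested FPIR on non-mated probes
    (their maximum gallery score), then report the fraction of mated probes whose
    true-mate score falls below that threshold."""
    threshold = np.quantile(nonmated_max_scores, 1.0 - fpir)
    return float(np.mean(mated_scores < threshold))
```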

4.5 Failure Cases

Despite the significant improvements achieved by CLIP4Sketch in enhancing sketch-to-mugshot matching performance, there remain certain cases, as shown in Figure 6, where our face recognition model, trained on the synthetic sketch dataset, struggles to accurately match faces with forensic sketches. These failure cases highlight the inherent challenges posed by the substantial modality gap between photographs and forensic sketches, especially in the IIIT-D [2] and TUFTS [31] datasets. This suggests that simply increasing the dataset size may not fully bridge this divide. Forensic sketches, particularly those created by witnesses or forensic artists based on recollections and descriptions, can deviate significantly from the subject's actual facial features and proportions. These deviations introduce complexities that our model may not always be able to overcome, even with the aid of the synthetic data. In contrast, our model performs better when matching artistic sketches, which often adhere more closely to the actual facial structures and proportions. However, the real challenge lies in the forensic domain, where sketches can be highly subjective and prone to distortions.

[Figure 9]

5 Conclusion

In this work, we introduced CLIP4Sketch, a novel approach leveraging Denoising Diffusion Probabilistic Models (DDPMs) tailored to generate diverse sketches from mugshot images. By combining the strengths of CLIP and AdaFace embeddings, our method ensures that the generated sketches retain the identity of the input image while allowing for diverse stylistic variations guided by textual captions. Our extensive experimental evaluations demonstrate that incorporating synthetic sketches generated by CLIP4Sketch into sketch training datasets significantly enhances the performance of face recognition models in sketch-to-mugshot matching tasks. The utilization of synthetic data in this context not only addresses the quantitative needs of modern FR evaluation but also alleviates privacy concerns associated with the large-scale collection and use of authentic facial images. Moving forward, this approach can be extended to other domains requiring cross-modal image generation and matching, paving the way for more versatile and practical solutions in facial recognition technology.

References

  • [1] S. Bae, N. U. Din, H. J. Park, and J. Yi. Exploiting an intermediate latent space between photo and sketch for face photo-sketch recognition. Sensors (Basel, Switzerland), 22, 2022.
  • [2] H. S. Bhatt, S. Bharadwaj, R. Singh, and M. Vatsa. Memetically optimized MCWLD for matching sketches with digital face images. IEEE Transactions on Information Forensics and Security, 7:1522–1535, 2012.
  • [3] F. Boutros, V. Struc, J. Fierrez, and N. Damer. Synthetic data for face recognition: Current state and future prospects. Image and Vision Computing, 135:104688, 2023.
  • [4] C. Chan, F. Durand, and P. Isola. Learning to generate line drawings that convey geometry and semantics. In CVPR, 2022.
  • [5] C. Chen, W. Liu, X. Tan, and K. Wong. Semi-supervised cycle-GAN for face photo-sketch translation in the wild. In Computer Vision and Image Understanding (CVIU), 2023.
  • [6] T. Chugh, M. Singh, S. Nagpal, R. Singh, and M. Vatsa. Transfer learning based evolutionary algorithm for composite face sketch recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 117–125, 2017.
  • [7] J. Deng, J. Guo, J. Yang, N. Xue, I. Kotsia, and S. Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44:5962–5979, 2018.
  • [8] P. Dhariwal and A. Q. Nichol. Diffusion models beat GANs on image synthesis. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, 2021.
  • [9] D.-P. Fan, Z. Huang, P. Zheng, H. Liu, X. Qin, and L. V. Gool. Facial-sketch synthesis: A new challenge. Machine Intelligence Research, 19:257–287, 2021.
  • [10] R. Gal, Y. Alaluf, Y. Atzmon, O. Patashnik, A. H. Bermano, G. Chechik, and D. Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. ArXiv, abs/2208.01618, 2022.
  • [11] C. Galea and R. A. Farrugia. Forensic face photo-sketch recognition using a deep learning-based architecture. IEEE Signal Processing Letters, 24:1586–1590, 2017.
  • [12] F. Gao, Y. Zhu, C. Jiang, and N. Wang. Human-inspired facial sketch synthesis with dynamic adaptation. In Proceedings of the International Conference on Computer Vision (ICCV), 2023.
  • [13] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial nets. In Neural Information Processing Systems, 2014.
  • [14] H. Han, B. Klare, K. Bonnen, and A. K. Jain. Matching composite sketches to face photos: A component-based approach. IEEE Transactions on Information Forensics and Security, 8:191–204, 2013.
  • [15] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. ArXiv, abs/2006.11239, 2020.
  • [16] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. CVPR, 2017.
  • [17] H. Kazemi, S. Soleymani, A. Dabouei, S. M. Iranmanesh, and N. M. Nasrabadi. Attribute-centered loss for soft-biometrics guided face sketch-photo recognition. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 612–6128, 2018.
  • [18] M. Kim, A. K. Jain, and X. Liu. AdaFace: Quality adaptive margin for face recognition. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18729–18738, 2022.
  • [19] M. Kim, F. Liu, A. Jain, and X. Liu. DCFace: Synthetic face generation with dual condition diffusion model. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12715–12725, 2023.
  • [20] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
  • [21] B. Klare, Z. Li, and A. K. Jain. Matching forensic sketches to mug shot photos. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33:639–646, 2011.
  • [22] S. J. Klum, H. Han, B. F. Klare, and A. K. Jain. The FaceSketchID system: Matching facial composites to mugshots. IEEE Transactions on Information Forensics and Security, 9(12):2248–2263, 2014.
  • [23] Z. Li, M. Cao, X. Wang, Z. Qi, M.-M. Cheng, and Y. Shan. PhotoMaker: Customizing realistic human photos via stacked ID embedding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
  • [24] Q. Liu, X. Tang, H. Jin, H. Lu, and S. Ma. A nonlinear approach for face sketch synthesis and recognition. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 1:1005–1010, 2005.
  • [25] M. Mirza and S. Osindero. Conditional generative adversarial nets. ArXiv, abs/1411.1784, 2014.
  • [26] P. Mittal, A. Jain, G. Goswami, R. Singh, and M. Vatsa. Recognizing composite sketches with digital face images via SSD dictionary. IEEE International Joint Conference on Biometrics, pages 1–6, 2014.
  • [27] S. Nagpal, M. Singh, R. Singh, M. Vatsa, A. Noore, and A. Majumdar. Face sketch matching via coupled deep transform learning. 2017 IEEE International Conference on Computer Vision (ICCV), pages 5429–5438, 2017.
  • [28] L. Nie, L. Liu, Z. Wu, and W. Kang. Unconstrained face sketch synthesis via perception-adaptive network and a new benchmark. Neurocomputing, 2022.
  • [29] S. Ouyang, T. M. Hospedales, Y.-Z. Song, and X. Li. Cross-modal face matching: Beyond viewed sketches. In Asian Conference on Computer Vision, 2014.
  • [30] S. Ouyang, T. M. Hospedales, Y.-Z. Song, and X. Li. ForgetMeNot: Memory-aware forensic facial sketch matching. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5571–5579, 2016.
  • [31] K. Panetta, Q. Wan, S. Agaian, S. Rajeev, S. Kamath, R. Rajendran, S. P. Rao, A. Kaszowska, H. A. Taylor, A. Samani, and X. Yuan. A comprehensive database for benchmarking imaging systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(3):509–520, 2020.
  • [32] C. Peng, X. Gao, N. Wang, and J. Li. Graphical representation for heterogeneous face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:301–312, 2015.
  • [33] C. Peng, N. Wang, J. Li, and X. Gao. DLFace: Deep local descriptor for cross-modality face recognition. Pattern Recognition, 90:161–171, 2019.
  • [34] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021.
  • [35] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen. Hierarchical text-conditional image generation with CLIP latents. ArXiv, abs/2204.06125, 2022.
  • [36] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image synthesis with latent diffusion models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674–10685, 2022.
  • [37] N. Ruiz, Y. Li, V. Jampani, Y. Pritch, M. Rubinstein, and K. Aberman. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 22500–22510, 2023.
  • [38] C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. L. Denton, S. K. S. Ghasemipour, B. K. Ayan, S. S. Mahdavi, R. G. Lopes, T. Salimans, J. Ho, D. J. Fleet, and M. Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. ArXiv, abs/2205.11487, 2022.
  • [39] A. Sharma and D. W. Jacobs. Bypassing synthesis: PLS for face recognition with pose, low-resolution and sketch. CVPR 2011, pages 593–600, 2011.
  • [40] J. Song, C. Meng, and S. Ermon. Denoising diffusion implicit models. ArXiv, abs/2010.02502, 2020.
  • [41] X. Tang and X. Wang. Face sketch synthesis and recognition. In Proceedings Ninth IEEE International Conference on Computer Vision, pages 687–694, 2003.
  • [42] L. van der Maaten and G. E. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
  • [43] X. Wang and X. Tang. Face photo-sketch synthesis and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11):1955–1967, 2009.
  • [44] J. Xu, X. Xue, Y. Wu, and X. Mao. Matching a composite sketch to a photographed face using fused HOG and deep feature models. The Visual Computer, 37:765–776, 2020.
  • [45] H. Ye, J. Zhang, S. Liu, X. Han, and W. Yang. IP-Adapter: Text compatible image prompt adapter for text-to-image diffusion models. ArXiv, abs/2308.06721, 2023.
  • [46] D. Yi, Z. Lei, S. Liao, and S. Li. Learning face representation from scratch. ArXiv, abs/1411.7923, 2014.
  • [47] R. Yi, Y.-J. Liu, Y.-K. Lai, and P. L. Rosin. APDrawingGAN: Generating artistic portrait drawings from face photos with hierarchical GANs. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10735–10744, 2019.
  • [48] L. Zhang, A. Rao, and M. Agrawala. Adding conditional control to text-to-image diffusion models. IEEE International Conference on Computer Vision (ICCV), 2023.
  • [49] W. Zhang, X. Wang, and X. Tang. Coupled information-theoretic encoding for face photo-sketch recognition. CVPR 2011, pages 513–520, 2011.
  • [50] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. 2017 IEEE International Conference on Computer Vision (ICCV), pages 2242–2251, 2017.