Signatured Fingermark Recognition Based on Deep Residual Network
Published in CCBR, 2021
Traditional fingerprint recognition methods based on minutiae have shown great success for high-quality fingerprint images. However, their accuracy drops significantly for signatured fingermarks on contracts. This paper proposes a signatured fingermark recognition method based on deep learning. First, the method uses deep learning combined with domain knowledge to extract the minutiae of the fingermark. Second, it searches for and calibrates the texture region of interest (ROI). Finally, it builds a deep neural network based on residual blocks and trains the model with a triplet loss. The proposed method achieves an equal error rate (EER) of 0.0779 on a self-built database, far lower than traditional methods, and shows that the approach can effectively reduce the labor and time cost of minutiae extraction.
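For readers who want a concrete picture of the recognition stage, below is a minimal sketch, assuming PyTorch, of an embedding network built from residual blocks and trained with a triplet loss, in the spirit of the method the abstract describes. The block layout, embedding dimension, input size, and margin are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (assumption: PyTorch) of a residual embedding network
# trained with a triplet loss; hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # Project the shortcut when stride or channel count changes.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))


class FingermarkEmbedder(nn.Module):
    """Maps a grayscale ROI patch to an L2-normalized embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(
            ResidualBlock(32, 64, stride=2),
            ResidualBlock(64, 128, stride=2),
            ResidualBlock(128, 256, stride=2),
        )
        self.head = nn.Linear(256, embed_dim)

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return F.normalize(self.head(x), dim=1)


if __name__ == "__main__":
    model = FingermarkEmbedder()
    criterion = nn.TripletMarginLoss(margin=0.2)
    # Anchor/positive come from the same finger, negative from another finger.
    anchor = torch.randn(8, 1, 128, 128)
    positive = torch.randn(8, 1, 128, 128)
    negative = torch.randn(8, 1, 128, 128)
    loss = criterion(model(anchor), model(positive), model(negative))
    loss.backward()
    print(loss.item())
```

At verification time, two fingermarks would be compared by the Euclidean distance between their embeddings against a threshold chosen to balance false accepts and rejects (which is where an EER figure comes from).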
Please refer to the paper for more details. If citation is needed, please use the following BibTeX entry:
@InProceedings{10.1007/978-3-030-86608-2_24,
author="Zhang, Yongliang
and Zhang, Qiuyi
and Zou, Jiali
and Zhang, Weize
and Li, Xiang
and Chen, Mengting
and Lv, Yufan",
editor="Feng, Jianjiang
and Zhang, Junping
and Liu, Manhua
and Fang, Yuchun",
title="Signatured Fingermark Recognition Based on Deep Residual Network",
booktitle="Biometric Recognition",
year="2021",
publisher="Springer International Publishing",
address="Cham",
pages="213--220",
abstract="Traditional fingerprint recognition methods based on minutiae have shown great success on for high-quality fingerprint images. However, the accuracy rates are significantly reduced for signatured fingermark on the contract. This paper proposes a signatured fingermark recognition method based on deep learning. Firstly, the proposed method uses deep learning combined with domain knowledge to extract the minutiae of fingermark. Secondly, it searches and calibrates the texture region of interest (ROI). Finally, it builds a deep neural network based on residual blocks, and trains the model through Triplet Loss. The proposed method achieved an equal error rate (EER) of 0.0779 on the self-built database, which is far lower than the traditional methods. It also proves that this method can effectively reduce the labor and time costs during minutiae extraction.",
isbn="978-3-030-86608-2"
}
Recommended citation: Yongliang Zhang, Qiuyi Zhang, Jiali Zou, Weize Zhang, Xiang Li, Mengting Chen, Yufan Lv. (2021). "Signatured Fingermark Recognition Based on Deep Residual Network." Biometric Recognition: 15th Chinese Conference (CCBR 2021), pp. 213-220.
Download Paper