BézierPalm: A Free Lunch for Palmprint Recognition

1. Tencent Youtu Lab, 2. UCLA, 3. Hefei University of Technology, 4. Shanghai Jiao Tong University.


Palmprints are private and stable biometric traits. In the deep learning era, progress in palmprint recognition has been limited by the lack of sufficient training data. In this paper, observing that palmar creases are the key information for deep-learning-based palmprint recognition, we propose synthesizing training data by manipulating palmar creases. Concretely, we introduce an intuitive geometric model that represents palmar creases with parameterized Bézier curves. By randomly sampling Bézier parameters, we can synthesize massive numbers of training samples of diverse identities, which enables us to pretrain large-scale palmprint recognition models. Experimental results demonstrate that such synthetically pretrained models have very strong generalization ability: they can be efficiently transferred to real datasets, leading to significant performance improvements in palmprint recognition. For example, under the open-set protocol, our method improves the strong ArcFace baseline by more than 10% in terms of TAR@1e-6, and under the closed-set protocol, it reduces the equal error rate (EER) by an order of magnitude.
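The parameterized-curve idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's synthesis pipeline: the function names, the choice of cubic curves, and the uniform control-point sampling are all assumptions made here for clarity.

```python
import numpy as np

def bezier(ctrl, n=64):
    """Evaluate a cubic Bezier curve at n points (Bernstein form).

    ctrl: (4, 2) array of 2D control points.
    Returns an (n, 2) array of points along the curve.
    """
    t = np.linspace(0.0, 1.0, n)[:, None]          # (n, 1) parameter values
    return ((1 - t) ** 3 * ctrl[0]
            + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2]
            + t ** 3 * ctrl[3])

def sample_identity(rng, n_creases=3, size=224):
    """Sample one synthetic 'identity': a random set of crease curves,
    each defined by 4 uniformly sampled control points on a size x size canvas."""
    return [rng.uniform(0, size, size=(4, 2)) for _ in range(n_creases)]

rng = np.random.default_rng(0)
creases = sample_identity(rng)
points = bezier(creases[0])
print(points.shape)  # (64, 2)
```

Each random draw of control points defines a new identity; rasterizing the resulting polylines onto a blank canvas would yield a crease-like training image.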


ECCV2022 slides and presentation:

What's new

Synthesized Examples:

Synthesize in 2D:

The figure below provides some synthesized samples; each row contains samples of the same identity.
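One way to obtain several samples of the same synthetic identity, as in the rows above, is to fix an identity's base curve parameters and add small perturbations per sample. The sketch below is an assumption about how such intra-class variation could be generated; the jitter scheme and magnitudes are illustrative, not the paper's exact procedure.

```python
import numpy as np

def make_samples(base_ctrl, n_samples, jitter=3.0, rng=None):
    """Generate samples of one synthetic identity by adding small Gaussian
    jitter to the identity's base Bezier control points.

    base_ctrl: (k, 4, 2) control points of the identity's k creases.
    Returns an (n_samples, k, 4, 2) array. A larger jitter gives more
    intra-class variation, while the identity (base_ctrl) stays fixed.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, jitter, size=(n_samples,) + base_ctrl.shape)
    return base_ctrl[None] + noise

rng = np.random.default_rng(0)
base = rng.uniform(0, 224, size=(3, 4, 2))   # one identity: 3 creases
row = make_samples(base, n_samples=5, rng=rng)
print(row.shape)  # (5, 3, 4, 2)
```

Identities are separated by their base parameters, while jitter controls the within-class spread the recognition model must tolerate.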

Synthesize in 3D:

Coming soon.

Experimental Results

TAR@FAR curve on public datasets.

TAR@FAR curves of different methods. AF, MF, and R50 denote ArcFace, MobileFaceNet, and ResNet-50, respectively. Hover (desktop) or click (mobile) to see the numbers.

Our method consistently outperforms the baseline by a substantial margin.
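TAR@FAR and EER, the metrics reported above, are computed from genuine (same-identity) and impostor (different-identity) match scores. A minimal NumPy sketch follows; the score distributions in the demo are simulated purely for illustration.

```python
import numpy as np

def tar_at_far(genuine, impostor, far=1e-3):
    """TAR@FAR: fraction of genuine scores accepted at the threshold
    where at most `far` of impostor scores are falsely accepted."""
    thr = np.quantile(impostor, 1.0 - far)   # accept scores above thr
    return float(np.mean(genuine > thr))

def eer(genuine, impostor):
    """Equal error rate: the point where FAR equals FRR
    along a sweep of decision thresholds."""
    thrs = np.unique(np.concatenate([genuine, impostor]))
    imp_sorted = np.sort(impostor)
    gen_sorted = np.sort(genuine)
    # FAR(t) = P(impostor > t); FRR(t) = P(genuine <= t)
    fars = 1.0 - np.searchsorted(imp_sorted, thrs, side="right") / impostor.size
    frrs = np.searchsorted(gen_sorted, thrs, side="right") / genuine.size
    i = int(np.argmin(np.abs(fars - frrs)))
    return float((fars[i] + frrs[i]) / 2)

# Demo with simulated, well-separated score distributions.
rng = np.random.default_rng(0)
gen = rng.normal(0.8, 0.1, 2000)
imp = rng.normal(0.2, 0.1, 2000)
print(f"TAR@1e-3 = {tar_at_far(gen, imp):.3f}, EER = {eer(gen, imp):.4f}")
```

Note that estimating TAR at very low FAR operating points (e.g. 1e-6, as in the abstract) requires at least on the order of a million impostor comparisons.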

ImageNet pretraining vs. our synthetic pretraining.

TAR@FAR curves of models pretrained on the ImageNet dataset and on our synthesized samples. Hover (desktop) or click (mobile) to see the numbers.

Compared to ImageNet pretraining, our synthetically pretrained models generalize better when finetuned on real palmprint recognition datasets.


If our methods are helpful to your research, please consider citing:

    @inproceedings{zhao2022bezierpalm,
        author    = {Zhao, Kai and Shen, Lei and Zhang, Yingyi and Zhou, Chuhan and Wang, Tao and Zhang, Ruixin and Ding, Shouhong and Jia, Wei and Shen, Wei},
        title     = {B\'{e}zier{P}alm: A Free Lunch for Palmprint Recognition},
        booktitle = {European Conference on Computer Vision (ECCV)},
        month     = {Oct},
        year      = {2022},
    }

    @article{shen_distribution_alignment,
        title  = {Distribution Alignment for Cross-device Palmprint Recognition},
        author = {Shen, Lei and Zhang, Yingyi and Zhao, Kai and Zhang, Ruixin and Shen, Wei},
    }


We would like to thank Haitao Wang and Huikai Shao for their assistance in processing the experimental results. This work is supported in part by NSFC grants 62076086 and 62176159, and by Shanghai Municipal Science and Technology Major Project 2021SHZDZX0102.