Face Image Quality Assessment Toolkit (fiqat)

This toolkit is intended to facilitate face image quality assessment and face recognition experiments through a simple API. While parts of it, e.g. the face detection and image preprocessing utilities, could help with the training of new models, the toolkit does not provide any training-specific utilities.

Toolkit features include:

  • Face detection (fd/… methods)

  • Primary face estimation, i.e. selecting one face among multiple detected faces (pfe/… methods)

  • Face image preprocessing, i.e. cropping and alignment (prep/… methods)

  • Face image quality assessment (fiqa/… methods)

  • Face recognition feature extraction and comparison score computation (fr/… and csc/… methods)

  • Creation of “Error versus Discard Characteristic” (EDC) plots (see the create_edc_plot.py example)

Setup

The setup isn’t completely automatic, but the default setup without download times should take less than ten minutes.

Download and default configuration file setup

  1. Download this repository.

  2. Download and extract the external dependency package from https://cloud.h-da.de/s/a7pHZqBHptHaHRc (password 7W4Ei1FjhlV), which mainly consists of the model files for various included methods; see the External dependency package locations section. You can omit these dependencies if you don’t want to use the corresponding methods.

  3. In the repository, create a copy of fiqat_example.toml at local/fiqat.toml. If you want to store this config file at another location, see fiqat.config.load_config_data().

  4. Edit local/fiqat.toml by changing the models = "/path/to/fiqat/dependencies/" path to your local path of the external method dependency directory.
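For example, if you extracted the dependency package to /data/fiqat-dependencies/ (a placeholder path), the line in local/fiqat.toml would read:

models = "/data/fiqat-dependencies/"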

Python setup using an Anaconda environment

First create a new Anaconda environment. The following example names it fiqae for “Face Image Quality Assessment Environment”:

conda create -n fiqae python=3.9
conda activate fiqae

Then install the default Python requirements used by the toolkit’s included methods (run this in the fiqat repository directory):

pip install -r requirements.txt

Note: The default requirements.txt installs dependencies for CPU method execution. If you want to run certain methods on a GPU, you may have to modify the package installation. You can also omit certain dependencies (e.g. the insightface and onnxruntime packages) if you don’t need the corresponding methods. The requirements.txt also contains dependencies for some of the toolkit’s examples, which is why these dependencies are currently specified in this separate requirements.txt file instead of being part of the fiqat package setup (in pyproject.toml).

Finally install fiqat as an “editable” mode package in the new environment (run this in the fiqat repository directory):

pip install --editable .

The fiqat package should now be available within the fiqae environment:

import fiqat

# ...

Special fd/retinaface setup step

One additional setup step is required if you want to use the fd/retinaface face detector method: Run make in insightface-f89ecaaa54/detection/RetinaFace of the external dependency package, using your fiqae Python environment.
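For example, assuming the dependency package was extracted to /path/to/fiqat/dependencies/:

conda activate fiqae
cd /path/to/fiqat/dependencies/insightface-f89ecaaa54/detection/RetinaFace
make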

Documentation

You can build the documentation by running make html in docs/, using your previously created Python environment; this should create HTML documentation files in docs/_build/html/. The build uses the Sphinx Python package, which is specified in the requirements.txt.
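For example:

conda activate fiqae
cd docs
make html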

Examples

These standalone example scripts can be found in the repository’s examples directory:

  • load_all_methods.py: This example tries to initialize all methods included in the toolkit. If you successfully installed all dependencies, then all methods should be shown as available.

  • check_methods.py: This example will run all available face detector, face image quality assessment, and face recognition feature extractor methods. It needs an example image as input for the test runs. With the default configuration, the output of each method will be tested for consistency across multiple runs.

  • face_detection.py: This example will run the fd/scrfd face detector for a given input image, and save a new output image with drawn face detector information (such as the facial landmarks).

  • create_edc_plot.py: This example goes through all the steps necessary to create a simple “Error versus Discard Characteristic” (EDC) plot that compares multiple face image quality assessment algorithms with respect to face recognition performance. That is, this example uses all method types of the fiqat.main_api (but not every included method).
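For orientation, here is a minimal sketch of how these method types could chain together in such a pipeline. The main-API function names used below (detect_faces, estimate_primary_faces, preprocess_images, assess_quality) are hypothetical illustrations rather than confirmed toolkit API; see the example scripts for the actual calls.

# Hypothetical pipeline sketch — the fiqat.main_api function names are
# assumptions for illustration; consult the examples directory for real usage.
import fiqat

input_images = ['image1.png', 'image2.png']

# Face detection (fiqat.types.MethodType.FACE_DETECTOR):
detector_output = fiqat.main_api.detect_faces('fd/scrfd', input_images)

# Primary face estimation (fiqat.types.MethodType.PRIMARY_FACE_ESTIMATOR):
primary_faces = fiqat.main_api.estimate_primary_faces('pfe/sccpfe', detector_output)

# Preprocessing (fiqat.types.MethodType.PREPROCESSOR):
preprocessed = fiqat.main_api.preprocess_images('prep/simt', primary_faces, image_size=(112, 112))

# Quality assessment (fiqat.types.MethodType.FACE_IMAGE_QUALITY_ASSESSMENT_ALGORITHM):
quality_scores = fiqat.main_api.assess_quality('fiqa/crfiqa', preprocessed, model_type='CR-FIQA(S)')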

Included methods

The toolkit currently includes these methods:

  • fiqat.types.MethodType.FACE_DETECTOR:

    • fd/dlib: Face detector based on the dlib library (using its shape_predictor_68_face_landmarks.dat landmark model).

    • fd/mtcnn: MTCNN face detector from the insightface project.

    • fd/retinaface: RetinaFace face detector from the insightface project (requires the special setup step described above).

    • fd/scrfd: SCRFD face detector from the insightface project (buffalo_l model).

  • fiqat.types.MethodType.PRIMARY_FACE_ESTIMATOR:

    • pfe/sccpfe: “Size- and Center- and Confidence-based Primary Face Estimation”, which selects the primary face based on the size of the face ROI (fiqat.types.DetectedFace.roi), the position of the ROI relative to the image center, and the face detector’s confidence values (see the sketch after this list).

      • Configuration option use_roi: bool: If True, the roi data of the fiqat.types.FaceDetectorOutput.detected_faces will be used for the estimation. The first roi estimation score factor for each candidate face is the minimum of the ROI’s width and height. The second factor is only computed if input_image_size information is available in the fiqat.types.FaceDetectorOutput, and favors ROIs that are closer to the image center. This second factor is meant to help with cases where multiple face ROIs with similar sizes and confidence values are detected. True by default.

      • Configuration option use_landmarks: bool: If True, the bounding box of the landmarks of the fiqat.types.FaceDetectorOutput.detected_faces will be used as ROI information, with score factor computation as described for use_roi. If both use_roi and use_landmarks are True, then roi data will be used whenever available, and landmarks-based ROIs are used as a fallback. True by default.

      • Configuration option use_confidence: bool: If True, the stored confidence values are used as an estimation score factor, normalized relative to the maximum value among the fiqat.types.FaceDetectorOutput.detected_faces. If either use_roi or use_landmarks is True as well, all factors are combined by multiplication. True by default.

  • fiqat.types.MethodType.PREPROCESSOR:

    • prep/crop: Simple preprocessing method that crops the image to the fiqat.types.DetectedFace.roi, then resizes the cropped region to the output size (if specified).

      • Configuration option image_size: Optional[ImageSize]: The size of the output image. If this is None, the cropped region will not be resized. None by default.

    • prep/simt: “Similarity transformation” face image preprocessing/alignment. It crops and aligns the facial image to five facial landmarks, two for the eyes, one for the tip of the nose, and two for the mouth corners, as produced e.g. by fd/retinaface (see the sketch after this list). This approach has been used e.g. in “ArcFace: Additive Angular Margin Loss for Deep Face Recognition”, “CosFace: Large Margin Cosine Loss for Deep Face Recognition”, and “SphereFace: Deep Hypersphere Embedding for Face Recognition”.

      • Configuration option image_size: Optional[ImageSize]: The size of the output image. If this is None, the fiqat.types.DetectedFace.roi width/height will be used. None by default.

  • fiqat.types.MethodType.FACE_IMAGE_QUALITY_ASSESSMENT_ALGORITHM:

    • fiqa/crfiqa: CR-FIQA from https://github.com/fdbtrs/CR-FIQA using the “CR-FIQA(S)” or “CR-FIQA(L)” model.

      • Configuration option device_config: DeviceConfig: Described below.

      • Configuration option batch_size: int: Described below.

      • Configuration option model_type: str: Specifies the model that should be used, which must be either 'CR-FIQA(S)' or 'CR-FIQA(L)'. This must be set explicitly.

      • The model image input size is 112x112. Images are resized via cv2.resize(image, (112, 112), interpolation=cv2.INTER_LINEAR).

    • fiqa/faceqnet: FaceQnet from https://github.com/javier-hernandezo/FaceQnet using the “FaceQnet v0” or “FaceQnet v1” model.

      • Configuration option device_config: DeviceConfig: Described below.

      • Configuration option batch_size: int: Described below.

      • Configuration option model_type: str: Specifies the model that should be used, which must be either 'FaceQnet-v0' or 'FaceQnet-v1'. This must be set explicitly.

      • The model image input size is 224x224. Images are resized via cv2.resize(image, (224, 224), interpolation=cv2.INTER_LINEAR).

    • fiqa/magface: MagFace for quality assessment from https://github.com/IrvingMeng/MagFace using the iResNet100-MS1MV2 model (283MB magface_epoch_00025.pth with sha256sum cfeba792dada6f1f30d1e118aff077d493dd95dd76c77c30f57f90fd0164ad58).

      • Configuration option device_config: DeviceConfig: Described below.

      • Configuration option batch_size: int: Described below.

      • Configuration option return_features_and_quality_score: bool: If True, the method will output dictionaries with a ‘features’ (fiqat.types.FeatureVector) and a ‘quality_score’ entry each, instead of only returning a fiqat.types.QualityScore per input. This is an “experimental” option; proper MagFace face recognition method support will be added to the toolkit in a future version. False by default.

      • The model image input size is 112x112. Images are resized via cv2.resize(image, (112, 112), interpolation=cv2.INTER_LINEAR).

  • fiqat.types.MethodType.FACE_RECOGNITION_FEATURE_EXTRACTOR:

    • fr/arcface: ArcFace face recognition feature extractor from the insightface project, which computes a fiqat.types.FeatureVector per input face image.

  • fiqat.types.MethodType.COMPARISON_SCORE_COMPUTATION:

    • csc/arcface: Computes fiqat.types.SimilarityScore output for features computed by fr/arcface (i.e. the cosine score in the range [-1, +1]).
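The following is a minimal illustrative sketch of the pfe/sccpfe score combination described above, with all options enabled. Only the factors and their combination by multiplication are stated in this README; the concrete center-distance formula and all names below are assumptions.

import math

def sccpfe_score(roi, confidence, max_confidence, image_size=None):
    # roi is (x, y, width, height); all names in this sketch are illustrative.
    # First ROI factor: the minimum of the ROI's width and height.
    score = min(roi[2], roi[3])
    if image_size is not None:
        # Second ROI factor: favor ROIs closer to the image center
        # (hypothetical formula — a linear falloff with center distance).
        roi_center = (roi[0] + roi[2] / 2, roi[1] + roi[3] / 2)
        image_center = (image_size[0] / 2, image_size[1] / 2)
        distance = math.dist(roi_center, image_center)
        max_distance = math.dist((0, 0), image_center)
        score *= 1 - distance / max_distance
    # Confidence factor, normalized relative to the maximum among the candidates.
    score *= confidence / max_confidence
    return score

# The candidate face with the highest combined score is selected as the primary face.

Similarly, here is a sketch of the similarity-transformation alignment idea behind prep/simt, using the five-landmark reference template commonly used in the ArcFace line of work for 112x112 images. Whether prep/simt uses exactly these reference coordinates is not stated in this README.

import cv2
import numpy as np
from skimage import transform

# ArcFace-style reference landmarks for a 112x112 aligned image (left eye,
# right eye, nose tip, left mouth corner, right mouth corner) — an assumption,
# not fiqat's confirmed values.
REFERENCE_LANDMARKS = np.array([
    [38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
    [41.5493, 92.3655], [70.7299, 92.2041]], dtype=np.float32)

def align_face(image, landmarks):
    # Estimate the similarity transform that maps the five detected landmarks
    # onto the reference template, then warp the image accordingly.
    tform = transform.SimilarityTransform()
    tform.estimate(np.asarray(landmarks, dtype=np.float32), REFERENCE_LANDMARKS)
    return cv2.warpAffine(image, tform.params[:2], (112, 112))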

Configuration options can be passed as keyword arguments to the fiqat.main_api functions.

Common options are:

  • resize_to: Optional[fiqat.types.ImageSize], for fiqat.types.MethodType.FACE_DETECTOR methods: If set, the input images are resized to this size using cv2.resize(..., interpolation=cv2.INTER_LINEAR), prior to detection.

  • device_config: fiqat.types.DeviceConfig: The method supports both CPU and GPU execution. Note that you may need to install Python packages that differ from the requirements.txt to enable GPU support. DeviceConfig('cpu', 0) by default.

  • batch_size: int: The method supports processing input in batches. Larger batch sizes may accelerate processing, but may also require more memory (especially for GPU execution). 1 by default.
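As a concrete illustration of passing such options (again with a hypothetical main-API function name; the option names and values themselves are from this README):

quality_scores = fiqat.main_api.assess_quality(  # hypothetical entry point
    'fiqa/magface',
    preprocessed_images,
    device_config=fiqat.types.DeviceConfig('cpu', 0),  # the default device
    batch_size=4,  # process four inputs per batch
)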

License information

This “Face Image Quality Assessment Toolkit (fiqat)” itself is released under the MIT License (see the LICENSE file).

Please note that many of the included methods are based on external projects with their own licenses, e.g. insightface (fd/mtcnn, fd/retinaface, fd/scrfd, fr/arcface), dlib (fd/dlib), CR-FIQA (fiqa/crfiqa), FaceQnet (fiqa/faceqnet), and MagFace (fiqa/magface).

Directly used Python packages that are available on https://pypi.org/ include e.g. insightface, onnxruntime, numpy, opencv-python (cv2), and Sphinx; see requirements.txt for the complete list.

Known issues and limitations

  • The toolkit currently patches a few deprecated numpy type aliases in fiqat.patch, due to the dependencies of some of the included methods (see the sketch after this list). This probably should not be an issue for any experiment code.

  • As noted above, the default requirements.txt only installs dependencies for CPU execution of the included methods. The dependencies need to be adjusted manually if you want to run methods on a GPU.

  • Included methods may print internal information as they run.

  • The documentation currently does not provide recommendations on which methods may be preferable.
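For reference, patching deprecated numpy type aliases typically amounts to something along these lines (a sketch of the general technique, not necessarily the exact code in fiqat.patch):

import numpy as np

# Restore deprecated built-in type aliases (deprecated in NumPy 1.20,
# removed in 1.24) that some older dependencies may still reference.
for _name, _type in {'bool': bool, 'int': int, 'float': float,
                     'object': object, 'str': str}.items():
    if not hasattr(np, _name):
        setattr(np, _name, _type)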

External dependency package locations

For the included methods, the relevant files are located as follows within the external dependency package:

  • fd/dlib: dlib/shape_predictor_68_face_landmarks.dat

  • fd/mtcnn: insightface-60bb5829b1/deploy

  • fd/retinaface: insightface-f89ecaaa54/detection/RetinaFace & insightface-f89ecaaa54/models/retinaface-R50

  • fd/scrfd: insightface/models/buffalo_l

  • fiqa/crfiqa: crfiqa

  • fiqa/faceqnet: faceqnet

  • fiqa/magface: MagFace

  • fr/arcface: insightface-60bb5829b1/models