API: Face verification

Introduction
Call
Response


Introduction

Face verification is the procedure of confirming that a person is who they claim to be, by comparing their face to the enrolled faceprint of the claimed identity.

During this procedure, you will send both the enrolled faceprint (obtained during face enrolment or during one of the KYC procedures with implicit enrolment) and a current selfie of the person attempting to identify, together with some options.
As a result, you will receive a response with a similarity score (how similar the two faceprints are) and a liveness score.
The similarity score is expressed in two ways: as a "distance" from the enrolled faceprint (the lower the number, the more similar the two faceprints are), and as a "confidence score" in the form of a percentage (the higher the number, the more similar the two faceprints are).

A MachineSense customer/partner will initially send a selfie image (or images) from their website or mobile app to their own servers (this step is completely independent of MachineSense), and then call the MachineSense API, including that image and some parameters.
In your call to the MachineSense API, you will include both the enrolled vector and the image of the person (end user) attempting to identify.

Note: It is important that the image sent for verification is in the same format (JPEG, PNG, ...) in which the person was enrolled. Thus, if a person was enrolled with a JPEG image, the verification image should also be a JPEG. This is because JPEG images have 3 channels while PNG images have 4, so comparisons will not be as accurate as when both images are supplied in the same format.
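
To illustrate, here is a minimal sketch for checking that a verification selfie matches the enrolment format, based on the standard magic-byte prefixes of Base64-encoded JPEG and PNG data (the helper names are ours, not part of the API):

            // Minimal sketch: infer the image format from the magic-byte
            // prefix of a Base64 string, so the verification selfie can be
            // checked against the format used at enrolment.
            // (Helper names are illustrative, not part of the MachineSense API.)
            function base64ImageFormat(b64) {
              if (b64.startsWith("/9j/")) return "jpeg"; // JPEG begins with bytes FF D8 FF
              if (b64.startsWith("iVBOR")) return "png"; // PNG begins with bytes 89 50 4E 47
              return "unknown";
            }

            // Example: refuse to verify if the formats differ; enrolledFormat
            // would be stored by you alongside the enrolled vector.
            function sameFormatAsEnrolment(b64, enrolledFormat) {
              return base64ImageFormat(b64) === enrolledFormat;
            }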

In order to help customers start quickly with such a client-side (web-based) implementation, MachineSense offers a set of examples and code, ready to be copied/pasted into your applications and customized/modified. Basic operations such as capturing the image, setting up parameters, etc. are already present in those examples.
Examples are written in vanilla JavaScript and can be used in any web-based application.
You can find them on our Demo page as well as our GitHub repository.
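
For orientation only (this is not the published example code itself), a minimal selfie-capture sketch in vanilla JavaScript might look like this:

            // Sketch of client-side selfie capture: grab one frame from the
            // camera and encode it as raw Base64 JPEG, ready for the API call.
            async function captureSelfie() {
              const stream = await navigator.mediaDevices.getUserMedia({ video: true });
              const video = document.createElement("video");
              video.srcObject = stream;
              await video.play();
              const canvas = document.createElement("canvas");
              canvas.width = video.videoWidth;
              canvas.height = video.videoHeight;
              canvas.getContext("2d").drawImage(video, 0, 0);
              stream.getTracks().forEach((t) => t.stop()); // release the camera
              // Strip the "data:image/jpeg;base64," prefix, keep the raw Base64
              return canvas.toDataURL("image/jpeg").split(",")[1];
            }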

The customer creates their own client-side page or app, including capture of the user's selfie image.
An exception might be when the customer uses the MachineSense whitelabel client side (in which case this is already done for them) or the MachineSense WASM component. The latter, however, is a two-step process and is related to the pre-built / ready-to-use modules.

More details about single-step and two-step processes.

Note that you can use selfies obtained during the verification process to re-enrol the user, if you wish. This is a common practice, called "implicit (or periodical) enrolment": the same image (the last selfie) is also sent to the enrolment API, and the result is stored alongside, or overwrites, the previous enrolment(s).
Re-enrolment might be used periodically (say, once a year or once in three years), simply to have the latest facial information on your user. When verifying, you can send either that last enrolled vector to be checked against the current selfie, or even all enrolled (re-enrolled) vectors to be checked against the current selfie, and then compare the results.
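
As a sketch of that comparison (assuming one /faceapi/v1/verify call per stored vector, with the responses collected into an array), picking the best of several results could look like this:

            // Sketch: given verification responses for the same selfie checked
            // against several enrolled vectors (one /faceapi/v1/verify call
            // per vector), keep the best match, i.e. the lowest "distance".
            function pickBestMatch(responses) {
              const ok = responses.filter((r) => r.result === "Ok");
              if (ok.length === 0) return null; // every call failed
              return ok.reduce((best, r) =>
                r.data.match_score < best.data.match_score ? r : best);
            }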

Call

(Call from your server to our API.)

POST /faceapi/v1/verify

Parameters / body:

            {
                "images": [
                  "string"
                ],
                "api_key": "string",
                "ref": "string",
                "vector": [
                  0
                ],
                "liveness": false,
                "extra_images": [
                  "string"
                ],
                "save_image_mode": 0
            }

Parameters explained:

  • "images" = (mandatory) Array of images (selfies) encoded as b64 (hence strings). Most commonly, a single image is sent.
  • "api_key" = (mandatory) Your developer key found in your Settings
  • "ref" = (optional, default="") Any string you wish to send back to yourself, that you will receive with the later response to your webhook.
  • "vector" = (mandatory) Array of vectors saved previously during the enrolment process. Most commonly, a single (last) vector is sent.
  • "liveness" = (optional, default=false) Liveness detection enabled: true|false.
  • "extra_images" = (optional, default="") Extra array of images in case more images required for liveness detection.
  • "save_image_mode" = (mandatory) 0 = don't send back verification images with the response, 1 = send back verification images with response.

Response

Code: 200

Default response:

            {
                "result": "Ok",
                "code": 0,
                "message": "string",
                "data": {
                  "ref": "string",
                  "vector": [
                    0
                  ],
                  "liveness_score": 1,
                  "liveness_result": "string",
                  "match_score": 1,
                  "confidence_score": 100
                }
            }

Response explained:

  • "result" = "Ok" or "Err" (error)
  • "code" = 0 or error code (int)
  • "message" = If result "Err" - textual description (string)
  • "data" = JSON object with data
    • "ref" = Referential free-form string sent in either single-step- or two-step process (on session init).
    • "vector" = Array of vectors responding to each image sent on call/request (response to "images" array from the call).
    • "liveness_score" = Liveness detection score (int), can have values 0-100. Higher the number - better the score.
    • "liveness_result" = Liveness detection result (string), with descriptive values.
    • "match_score" = Distance from the enroled faceprint, value between 0 and 1. Smaller the "distance", better the match.
    • "confidence_score" = Confidence score, value between 0 and 100. Higher the number, better the match. Theis metric (confidence score) is secondary and is used more as human-readable comparison results.
