Face-verification is the procedure of confirming a claimed identity by comparing the person's current face to the enrolled faceprint of that identity.
During this procedure, you send both the enrolled faceprint (obtained during
face-enrolment or during one of the KYC procedures with implicit enrolment) and a current selfie of the person attempting to verify their identity,
together with some options.
As a result, you receive a response with a similarity score (how similar the two faceprints are) and a liveness score.
The similarity score is expressed in two ways: as a "distance" from the enrolled faceprint (the lower the number, the more similar the two
faceprints are), and as a "confidence score" in the form of a percentage (the higher the number, the more similar the two faceprints are).
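For illustration, here is a minimal sketch of acting on the two representations. The 0.6 distance and 80% confidence thresholds are illustrative placeholders, not official MachineSense values, and should be calibrated against your own risk requirements.

// Illustrative sketch only: the 0.6 / 80 thresholds are placeholders,
// not official MachineSense values.
function isLikelyMatch(data) {
  // match_score is a distance: lower means more similar.
  // confidence_score is a percentage: higher means more similar.
  return data.match_score <= 0.6 && data.confidence_score >= 80;
}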
The MachineSense customer/partner first sends a selfie image (or images) from their
website or mobile app to their own servers (this step is completely independent of MachineSense), and then calls the MachineSense
API, including that image and some parameters.
In your call to the MachineSense API, you include both the enrolled vector and the image of the person (end-user) attempting to verify (a complete call example appears at the end of this section).
Note: It is important that the image sent for verification is in the same format (JPEG, PNG...) in which the person was enrolled. Thus, if the person was enrolled with a JPEG image, the verification image should also be a JPEG. This is because JPEG images have 3 channels, while PNG images have 4 (an alpha channel in addition to RGB), so comparisons across formats will not be as accurate as when both images are supplied in the same format.
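If your capture flow produces a PNG but the person was enrolled with a JPEG, you can re-encode on the client before sending. A minimal sketch, assuming a browser environment; the canvas re-encode discards the PNG alpha channel, yielding a 3-channel JPEG:

function toJpegDataUrl(imageElement, quality = 0.92) {
  const canvas = document.createElement('canvas');
  canvas.width = imageElement.naturalWidth;
  canvas.height = imageElement.naturalHeight;
  canvas.getContext('2d').drawImage(imageElement, 0, 0);
  // toDataURL('image/jpeg') discards any alpha channel.
  return canvas.toDataURL('image/jpeg', quality);
}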
To help customers start quickly with such a client-side (web-based) implementation, MachineSense offers a set of
examples and code, ready to be copied into your applications and customized. Basic operations such as capturing the
image, setting up parameters, etc. are already present in those examples.
The examples are written in vanilla JavaScript and can be used in any web-based application.
You can find them on our Demo page as well as in our
GitHub repository.
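For orientation, here is a minimal capture sketch in vanilla JavaScript, assuming a <video> element is present on the page; the examples in the repository are more complete (camera selection, retries, UI).

async function captureSelfie(videoElement) {
  // Request the front-facing camera and show a live preview.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'user' },
  });
  videoElement.srcObject = stream;
  await videoElement.play();
  // Grab a single frame onto a canvas.
  const canvas = document.createElement('canvas');
  canvas.width = videoElement.videoWidth;
  canvas.height = videoElement.videoHeight;
  canvas.getContext('2d').drawImage(videoElement, 0, 0);
  // Release the camera, then return the frame as a base64 JPEG.
  stream.getTracks().forEach((track) => track.stop());
  return canvas.toDataURL('image/jpeg');
}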
The customer creates their own client-side page or app, including the capture of the user's selfie image.
An exception is when the customer uses the MachineSense whitelabel client-side, in which case this is already done for them, or the MachineSense
WASM component. The latter, however, involves a two-step process and is related to the pre-built / ready-to-use modules.
More details about single-step and two-step processes.
Note that you can use selfies obtained during verification to re-enrol the user, if you wish. This is a common
practice called "implicit (or periodical) enrolment": the same image (the last selfie) is also sent to the enrolment API, and
the result is stored alongside, or overwrites, the previous enrolment(s).
Re-enrolment might be done periodically (say, once a year or once every three years), simply to keep the latest facial information on
your user. When verifying, you can send either that last enrolled vector to be checked against the current selfie, or
all enrolled (and re-enrolled) vectors to be checked against the current selfie, and then compare the results (see the sketch below).
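A minimal re-enrolment sketch, assuming a Node.js 18+ runtime (for the global fetch), a placeholder host, and a hypothetical enrolment endpoint named /faceapi/v1/enrol with a body modelled on the verify schema below — consult the enrolment API reference for the actual contract:

async function reEnrol(apiKey, ref, lastSelfieBase64) {
  // Hypothetical endpoint name and host; see the enrolment API reference.
  const response = await fetch('https://api.example.com/faceapi/v1/enrol', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      api_key: apiKey,
      ref: ref,
      images: [lastSelfieBase64], // the last selfie from verification
    }),
  });
  // Store the result alongside, or in place of, the previous enrolment(s).
  return response.json();
}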
(Call from your server to our API.)
POST /faceapi/v1/verify
Parameters / body:
{ "images": [ "string" ], "api_key": "string", "ref": "string", "vector": [ 0 ], "liveness": boolean, "extra_images": [ "string" ] }
Parameters explained:
Code: 200
Default response:
{ "result": "Ok", "code": 0, "message": "string", "data": { "ref": "string", "liveness_score": 1, "liveness_result": "string", "match_score": 1, "confidence_score": 100 } }
Response explained:
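Finally, to tie the pieces together, here is a minimal sketch of the call from your server, assuming a Node.js 18+ runtime (for the global fetch), a placeholder host (api.example.com), and base64-encoded image strings; the field names mirror the schema above.

async function verify(apiKey, enrolledVector, selfieBase64) {
  const response = await fetch('https://api.example.com/faceapi/v1/verify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      images: [selfieBase64],  // current selfie, base64-encoded (assumption)
      api_key: apiKey,
      ref: 'user-1234',        // your own reference for this end-user
      vector: enrolledVector,  // faceprint obtained at enrolment
      liveness: true,          // request the liveness check as well
    }),
  });
  const body = await response.json();
  if (body.result === 'Ok') {
    const { match_score, confidence_score, liveness_score } = body.data;
    // Lower match_score (distance) and higher confidence_score mean a closer match.
    console.log(match_score, confidence_score, liveness_score);
  }
  return body;
}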