
On-Premises Liveness Detection

This document provides a comprehensive guide to deploying the on-premises liveness detection solution, enabling users to perform liveness verification within their own infrastructure. Unlike cloud-based solutions, this on-prem deployment offers enhanced security, data privacy, and complete control over the environment.

The following sections outline the step-by-step process for installation, configuration, and integration to ensure a smooth setup. Additionally, this guide includes detailed instructions for setting up and running the FACIA application using Docker containers. Please follow the outlined guidelines carefully for successful deployment.

Prerequisites

Below are the essential prerequisites to ensure the seamless operation of the FACIA services:

Requirement | Specification
OS | Ubuntu 20.04
CPU | 128 cores
RAM | 250 GB
Disk | 1 TB
Server | Freshly installed and upgraded
Docker | Latest version of Docker installed
Network | The Docker network / FACIA containers must have internet access

Setting up the Updated Docker Images From Docker Hub

Pulling the updated images from Docker Hub:

You need to pull two images from Docker Hub, using the following commands:

docker pull faciaai/cache:latest
docker pull faciaai/liveness-detection:latest

Run the Containers:

  1. (Optional) To run the FACIA application on a different IP address or port, modify the -p flag in the docker run command for the ML services container.
  2. Run the containers with the following commands:
docker run -d --name mongodb_local_container -e USER=mongoAdmin -e PASS=TBbuaxROrspF8K6ugQJ29s8ZMqc --network=bridge faciaai/cache:latest
docker run -d --name ml_services_container_updated_1 --network=bridge -p 127.0.0.1:5001:5001 --link mongodb_local_container:mongodb-local faciaai/liveness-detection:latest

Wait for Initialization:

Allow 5-10 minutes for the services to initialize before proceeding. Check the status of the ML services container with the following command:

curl localhost:5001/status_check

If the response is "Service is live", the service is ready to accept requests; otherwise, wait until it becomes ready.
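
The readiness check above can be automated with a short polling loop. A minimal sketch, assuming the status endpoint returns a body containing "Service is live" once the container is ready (the function name, timeout, and interval are illustrative):

```python
import time
import urllib.request


def wait_for_service(url="http://localhost:5001/status_check",
                     timeout=600, interval=15):
    """Poll the status endpoint until the service reports it is live."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read().decode("utf-8")
            if "Service is live" in body:
                return True
        except OSError:
            pass  # container still initializing; retry after a pause
        time.sleep(interval)
    return False
```

Call wait_for_service() right after starting the containers; it returns True once the endpoint responds that the service is live, or False after the timeout.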

Note

Images must be in PNG, JPG, or JPEG format.

Please sign up to obtain your hash_id; it is required to use the Docker containers. Once logged in, you can retrieve the Hash ID from the settings page.

User Authentication and Processing

Endpoint Details

  • Method: POST
  • URL: /liveness
  • Content-Type: application/json
  • Request Body:
json_data = {
    "hash_id": "your_hash_id",
    "selfie_image": "base64_encoded_selfie_image"
}
  • Server Response:
Response Code: 201
{
    "liveness_result": {
        "is_live": 0/1,
        "liveness_score": 0.0-1.0
    },
    "message": "Success"
}

Interpretation of Response

  • is_live:
    • 1 if the image is not a spoof
    • 0 if it is a spoof attack
  • liveness_score: A score representing how confident the system is that the image is not a spoof.
  • message: Indicates the success of the process.
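
As a sketch of how a client might turn these fields into a human-readable verdict (the helper name and output format are our own, not part of the API):

```python
def interpret_liveness(response_json):
    """Summarize a successful /liveness response."""
    result = response_json["liveness_result"]
    verdict = "bonafide" if result["is_live"] == 1 else "spoof"
    return f"{verdict} (confidence that image is live: {result['liveness_score']:.2%})"


example = {
    "liveness_result": {"is_live": 1, "liveness_score": 0.9862353},
    "message": "Success",
}
print(interpret_liveness(example))  # → bonafide (confidence that image is live: 98.62%)
```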

Use Cases

1. Wrong hash_id

Response

{
    "message": "Invalid credentials"
}

2. Corrupted Image

Response

{
    "error": "Invalid or corrupt selfie image. Images must be in PNG, JPG, or JPEG format."
}

3. Original/Bonafide Image

Response

{
    "liveness_result": {
        "is_live": 1,
        "liveness_score": 0.9862353
    },
    "message": "Success"
}

4. Spoofed Attack

Response

Response Code: 201
{
    "liveness_result": {
        "is_live": 0,
        "liveness_score": 0.16925619588358065
    },
    "message": "Success"
}

5. Missing Image

Response

{
    "error": "Selfie_image is required and must be a JPEG, JPG or PNG."
}

6. Demo Limit Reached

Response

{
    "message": "Your request limit has been reached."
}
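
The responses above can be handled with a single dispatch helper. A minimal sketch based on the documented bodies (the exception class and function name are our own):

```python
class LivenessError(Exception):
    """Raised for any of the documented error responses."""


def parse_liveness_response(payload):
    """Return liveness_result on success; raise LivenessError otherwise."""
    if "error" in payload:
        # Use cases 2 and 5: corrupted or missing image
        raise LivenessError(payload["error"])
    message = payload.get("message", "")
    if message == "Invalid credentials":
        raise LivenessError("Wrong hash_id")            # use case 1
    if message == "Your request limit has been reached.":
        raise LivenessError("Demo limit reached")       # use case 6
    return payload["liveness_result"]
```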

Testing Script in Python:

Note

Set the value of the image1_path variable (and hash_id) to match your environment.

import requests
import base64
import mimetypes
import json

image1_path = "selfi.png"
hash_id = "hash_id"

def encode_image_with_mime(image_path):
    """
    Reads an image file and encodes it in base64, prefixed with a
    data-URI validation string that includes the MIME type.
    """
    mime_type, _ = mimetypes.guess_type(image_path)  # Get MIME type from the file extension
    if not mime_type:
        raise ValueError(f"Could not determine MIME type for {image_path}")

    with open(image_path, "rb") as img_file:
        base64_string = f"data:{mime_type};base64,{base64.b64encode(img_file.read()).decode('utf-8')}"
    return base64_string

# Encode selfie image with validation string
selfie_image_base64 = encode_image_with_mime(image1_path)

# Prepare JSON data with the hash_id and the encoded image
json_data = {
    'hash_id': hash_id,
    'selfie_image': selfie_image_base64,
}

r = requests.post("http://127.0.0.1:5001/liveness", json=json_data)
print(f"Response Code: {r.status_code}")
print(json.loads(r.text))

Summary

This report provides a detailed analysis of the performance testing conducted on the system, focusing on response times, success rates, failure rates, and other key performance indicators (KPIs).

Server Specification

Component | Specification
Model | 2x Intel(R) Xeon(R) Gold 6438Y+
CPU | 128 vCPUs
RAM | 256 GB
Drives | 1 × 2 TB SSD

Test Overview

Metric | Value
Total Requests Made | 501
Total Success | 501
Total Failures | 0
Success Rate | 100.00%
Error Rate | 0.00%
Total Data Received | 384 kB
Total Data Sent | 88 MB
Total Duration | ~5m7s
Requests per Minute | ~98-100 req/min
Average Requests per Second | 1.63 req/sec

Request Duration Metrics

Metric | Value
Average Duration | 7.58s
Minimum Duration | 3.65s
Median Duration | 7.78s
Maximum Duration | 14.47s
90th Percentile | 8.94s
95th Percentile | 9.62s

The system handled 501 requests in 5 minutes, with an average request duration of 7.58s. Since requests were processed concurrently, the system achieved ~100 requests per minute.
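
The reported figures are mutually consistent: by Little's law, the average number of requests in flight equals the arrival rate times the average request duration. A quick check of the numbers above:

```python
total_requests = 501
duration_s = 5 * 60 + 7     # ~5m7s total test duration
avg_latency_s = 7.58        # average request duration

rate = total_requests / duration_s   # arrival rate (req/s)
concurrency = rate * avg_latency_s   # Little's law: L = lambda * W

print(f"{rate:.2f} req/s, ~{concurrency:.0f} requests in flight")  # → 1.63 req/s, ~12 requests in flight
```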

Request Breakdown

Metric | Avg | Min | Median | Max
Request Blocked | 3.35ms | 0s | 1µs | 567.28ms
Connecting | 967.82µs | 0s | 0s | 27.84ms
Request Receiving | 173.35µs | 45µs | 78µs | 25.89ms
Request Sending | 40.44ms | 23.6ms | 29.71ms | 118.46ms
TLS Handshaking | 1.37ms | 0s | 0s | 40.35ms
Request Waiting | 7.54s | 3.58s | 7.73s | 14.44s

Conclusion

The test results indicate that the system handled all requests successfully.