Liveness Estimation and Mask Detection
In this tutorial, you'll learn how to detect faces in an image, estimate liveness, and detect a medical mask using the Face Machine Server API.
Prerequisites
We assume that you’re familiar with at least one programming language and able to send HTTP requests, read JSON, and manipulate files. In this tutorial, we’re using Python, but the presented information can be applied to different programming languages as well.
Setting Up the System
- To complete the tutorial, you need an active account on https://facemachine.3divi.com. You can start a free trial or contact us at face@3divi.com to discuss other options.
- Python 3.6 or higher should also be installed on your machine.
- The Requests library is required to perform HTTP requests. You can install it using pip:
pip install requests
Preparing the Environment
First, prepare an image that you want to process. You need to know the absolute or relative path to this file to open it in your program.
Open your favorite text editor or IDE and create a new script tutorial_1.py with the following content:

print('Face Machine Tutorial #1')
Open the console, go to the directory with the tutorial_1.py file, and run python tutorial_1.py. As a result, Face Machine Tutorial #1 should be printed to the console.
Reading the Image File
Read the image from the local file and convert it to a base64 string.
from base64 import b64encode

# Read an image from the file and convert it to a base64 string
file_path = './masked_woman.jpg'  # Specify the path to your image

with open(file_path, 'rb') as f:
    image_bytes = f.read()  # Read file data as a byte array

b64_image_bytes = b64encode(image_bytes)  # Encode the byte array to base64
b64_image_string = b64_image_bytes.decode()  # Convert the base64 buffer to a string

# b64_image_string = '/9j/4QAY...'
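Before moving on, it can be worth sanity-checking the encoding: decoding the base64 string should give back exactly the original bytes. The following sketch uses a few stand-in bytes instead of reading a real file:

```python
from base64 import b64encode, b64decode

# Stand-in bytes; in the tutorial these come from f.read()
# (the leading bytes mimic a JPEG header, which is why real JPEGs encode to '/9j/...')
image_bytes = b'\xff\xd8\xff\xe0fake-jpeg-data'

b64_image_string = b64encode(image_bytes).decode()

# Decoding the string must reproduce the original byte array
assert b64decode(b64_image_string) == image_bytes
print(b64_image_string[:4])  # /9j/
```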
Building a GraphQL Query
Follow the steps from the script below to build a GraphQL query.
from base64 import b64encode
import json
# ...
# Create a GraphQL mutation
# The b64_image_string variable will be substituted into the mutation string
# Literal braces must be escaped in a Python f-string by doubling them
mutation = f"""
mutation {{
  createSamples(image: "{b64_image_string}") {{
    samples {{
      liveness {{value, confidence}}
      mask {{value, confidence}}
    }}
  }}
}}
"""
# mutation {
#   createSamples(image: "/9j/4QAY...") {
#     samples {
#       liveness {value, confidence}
#       mask {value, confidence}
#     }
#   }
# }
# Create a GraphQL query string
query = {
    "query": mutation
}
query_string = json.dumps(query)
# query_string = '{"query": "\n mutation {\n createSamples(image: \"/9j/4QAYRX...'
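As an alternative to f-string interpolation, GraphQL conventionally supports a separate variables object, which avoids brace escaping and keeps the image payload out of the query text. This is only a sketch: whether the Face Machine endpoint accepts variables, and the String! type of the image argument, are assumptions not taken from the API docs.

```python
import json

# Standard GraphQL variables (assumption: the server supports them
# and the image argument is typed String!)
mutation = """
mutation CreateSamples($image: String!) {
  createSamples(image: $image) {
    samples {
      liveness {value, confidence}
      mask {value, confidence}
    }
  }
}
"""

b64_image_string = '/9j/4QAY...'  # placeholder for the string built earlier

query = {
    "query": mutation,
    "variables": {"image": b64_image_string}
}
query_string = json.dumps(query)
```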
Sending the Request
Specify the API token and content type in the HTTP headers and send the request. Your API token is displayed in your personal account (see the API section on the dashboard).
from base64 import b64encode
import requests
import json
# ...
# Specify the API token and request content type
headers = {
    "Token": "<your_api_token>",
    "Content-Type": "application/json"
}
# Send the request
response = requests.post('https://facemachine.3divi.com/api/v2/', headers=headers, data=query_string)
# Print the JSON response
print(json.dumps(response.json(), indent=2))
# {
#   "data": {
#     "createSamples": {
#       "samples": [
#         {
#           "liveness": {
#             "value": "REAL",
#             "confidence": 0.9713881611824036
#           },
#           "mask": {
#             "value": true,
#             "confidence": 1.0
#           }
#         }
#       ]
#     }
#   }
# }
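The request above assumes everything goes well. In practice it helps to check for errors before reading data. The helper below is a sketch based on the response shape shown in this tutorial; the top-level errors list is the usual GraphQL convention and an assumption for this API:

```python
def extract_samples(payload):
    """Return the samples list from a response dict, or raise on errors."""
    if "errors" in payload:  # conventional GraphQL error list (assumption for this API)
        raise RuntimeError(payload["errors"])
    return payload["data"]["createSamples"]["samples"]

# Example with the response shape shown above
payload = {
    "data": {
        "createSamples": {
            "samples": [
                {"liveness": {"value": "REAL", "confidence": 0.97},
                 "mask": {"value": True, "confidence": 1.0}}
            ]
        }
    }
}
samples = extract_samples(payload)
print(samples[0]["liveness"]["value"])  # REAL
```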
Putting It All Together
Here you can see the full script source code:
from base64 import b64encode
import requests
import json

# Read an image from the file and convert it to a base64 string
file_path = './masked_woman.jpg'  # Specify the path to your image

with open(file_path, 'rb') as f:
    image_bytes = f.read()  # Read file data as a byte array

b64_image_bytes = b64encode(image_bytes)  # Encode the byte array to base64
b64_image_string = b64_image_bytes.decode()  # Convert the base64 buffer to a string

# Create a GraphQL mutation
# The b64_image_string variable will be substituted into the mutation string
# Literal braces must be escaped in a Python f-string by doubling them
mutation = f"""
mutation {{
  createSamples(image: "{b64_image_string}") {{
    samples {{
      liveness {{value, confidence}}
      mask {{value, confidence}}
    }}
  }}
}}
"""

# Create a GraphQL query string
query = {
    "query": mutation
}
query_string = json.dumps(query)

# Specify the API token and request content type
headers = {
    "Token": "<your_api_token>",
    "Content-Type": "application/json"
}

# Send the request
response = requests.post('https://facemachine.3divi.com/api/v2/', headers=headers, data=query_string)

# Print the JSON response
print(json.dumps(response.json(), indent=2))
Run the python tutorial_1.py script in the console. The resulting JSON with information about the liveness and mask attributes should be printed to the console, for example:
{
  "data": {
    "createSamples": {
      "samples": [
        {
          "liveness": {
            "value": "REAL",
            "confidence": 0.9713881611824036
          },
          "mask": {
            "value": true,
            "confidence": 1.0
          }
        }
      ]
    }
  }
}
Where:
- liveness.value – the result of liveness estimation. The possible values are: REAL – the observed face belongs to a real person; FAKE – the observed face is taken from a photo.
- liveness.confidence – the confidence level that the prediction is correct. The value is a number in the range [0, 1].
- mask.value – the result of mask detection. The possible values are true/false.
- mask.confidence – the confidence level that the prediction is correct. The value is a number in the range [0, 1].
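To turn these fields into a single human-readable verdict, you can combine value with confidence. The 0.5 threshold and the function below are illustrative choices, not values recommended by the API:

```python
def describe_sample(sample, min_confidence=0.5):
    """Summarize one sample; the 0.5 threshold is an illustrative choice."""
    liveness = sample["liveness"]
    mask = sample["mask"]
    live = liveness["value"] if liveness["confidence"] >= min_confidence else "UNCERTAIN"
    masked = "masked" if mask["value"] and mask["confidence"] >= min_confidence else "unmasked"
    return f"{live}, {masked}"

# Using the values from the example response above
sample = {
    "liveness": {"value": "REAL", "confidence": 0.9713881611824036},
    "mask": {"value": True, "confidence": 1.0}
}
print(describe_sample(sample))  # REAL, masked
```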
That's it! Now you know how to work with the Face Machine API using Python. To explore more API methods, check out the Face Machine Server API reference.