Understanding Liveness Detection

With most acquaintances and in most situations, simply stating who you are is enough to be accepted as that identity. In the remaining situations, you have to prove it. That proof requirement exists for your own good, so that no one else can assume your identity.

Assuming someone else’s identity has always been difficult, requiring careful preparation to mislead the verifier. Lately, however, it has in some cases become as simple as having that person’s picture.

Even if you know who the person is, do you know if the person is actually present?

It is at times like these that technology has to become more resilient and robust against fraud, while remaining no more tedious for genuine users.

What is Liveness Detection?

In simple words, Liveness Detection is the ability to determine whether a biometric sample, such as a face, is real (captured from a live person at the point of capture) or fake (produced by a spoof artifact or a lifeless body part).

Before the current era, where everything is done virtually, liveness detection was performed by a human physically present to verify identity. Even in remote onboarding it can be done with the user connected over a video call; however, this is an extremely time-consuming process prone to errors. That is why remote onboarding moved to being completely virtual.

While this was convenient for the service providers onboarding the customers, it opened them up to new liabilities in terms of identification fraud. Anyone could pose as someone they’re not by merely uploading their photo and details.

This led to the urgent need for technology that can mitigate such fraud. AI was the obvious choice for distinguishing real images and videos from fake ones, and thereby preventing such presentation attacks.

There are two kinds of liveness detection, based on when the detection happens:

  • Active - Performed at the time of capture: the user is challenged to perform a particular task, and their behaviour during that task is used to determine liveness.
  • Passive - Performed after the point of capture, without any user involvement. Usually a single image is analysed to determine liveness.

Active v/s Passive Liveness Detection

  • Active detection is often easier to develop, since it relies on prominent, observable tasks performed by the user: for instance, whether the user has blinked or can move their face in a particular way.
  • Active detection may require more infrastructural support and may introduce a time delay, since it has to run in real time on the user’s end.
  • Active detection may also dampen the user experience, since the user has to comply with a series of instructions while capturing a photo or a video.
Active Liveness Detection evaluates the user's behaviour and task performance
  • Passive detection, on the other hand, happens without the user even being aware of being tested.
  • With passive detection, no additional effort is required from the user.
  • Passive detection carries little resource overhead and requires no changes at the user or application level.
Passive Liveness Detection works on the image at time of capture

How does Arya’s Liveness Detection work?

Active Liveness Detection

In order to completely understand the complexity and nuances of liveness detection, we started simple.

Because active liveness detection works on a continuous input stream from the user's camera, we were able to detect various movements and also assess the user's ability to follow instructions.

We ask the user to place their face in a predefined area highlighted on the camera frame. From the movements made to adjust the face, we detect actions such as a blink of the eyes, a parting of the lips, or an opening of the mouth.
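The article does not specify how each action is detected, but a common technique for the blink action is the eye aspect ratio (EAR): the ratio of the eye's vertical openings to its width, computed from facial landmarks, which collapses toward zero when the eye closes. A minimal sketch, assuming six (x, y) eye-contour landmarks per frame as produced by typical facial-landmark detectors:

```python
# Illustrative blink detection via eye aspect ratio (EAR).
# The landmark ordering and the 0.2 threshold are common conventions,
# not Arya's actual parameters.
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour
    (corner, top-left, top-right, corner, bottom-right, bottom-left)."""
    v1 = dist(eye[1], eye[5])   # vertical eyelid distance (left side)
    v2 = dist(eye[2], eye[4])   # vertical eyelid distance (right side)
    h = dist(eye[0], eye[3])    # horizontal distance between eye corners
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks: frames where EAR drops below the threshold
    and then recovers above it."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks
```

Similar landmark-distance heuristics (mouth aspect ratio, lip separation) can cover the other checklist actions.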

Based on the number of actions detected from the liveness checklist, each user can be given a score indicating the degree of liveness detected.
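The checklist-based scoring can be sketched as a weighted sum over detected actions. The action names and weights below are hypothetical, purely for illustration, not Arya's actual checklist:

```python
# Hypothetical liveness checklist: each detected action contributes
# a weight toward the overall liveness score in [0, 1].
CHECKLIST = {"blink": 0.4, "lip_parting": 0.3, "mouth_opening": 0.3}

def liveness_score(detected_actions):
    """Return a liveness score in [0, 1] from the set of detected actions."""
    return sum(w for action, w in CHECKLIST.items() if action in detected_actions)
```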

Passive Liveness Detection

Given the obvious advantages of passive detection over active detection, we then focused our research on the former. With deep neural networks as our niche area of expertise, we developed a custom model for passive liveness detection.

The major challenge in developing this use case was making the model robust enough to identify all possible types of fake images and spoofs, including 2D/3D masks and replay attacks, in which a video or image is replayed in front of the camera at the time of capture.


We’ve achieved an accuracy of 99% with our model, not just on the test data but also when our team members voluntarily tested it themselves.

Where can Arya’s Passive Face Liveness Detection be used?

Any enterprise or organization can incorporate our model into processes such as Customer Onboarding and Digital KYC. Being a passive detection module, it requires no changes on the application front. All that needs to be done is to send the image captured during the onboarding process to the API endpoint, and voila! Our module will detect whether it is a live image or a spoof and provide a confidence score for its prediction.
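An integration would look roughly like the sketch below, which builds an HTTP request carrying the captured image. The request field names, header names, and response format here are assumptions for illustration; consult the Arya APIs documentation for the actual contract:

```python
# Hedged sketch of calling a liveness-detection API with a captured image.
# Field names ("doc_base64", "req_id") and the "token" header are
# hypothetical; the exact endpoint path is documented on the platform.
import base64
import json
from urllib import request

API_URL = "https://api.arya.ai"  # platform URL; exact endpoint path may differ

def build_liveness_request(image_bytes, token):
    """Build a JSON POST request carrying the base64-encoded image."""
    body = {
        "doc_base64": base64.b64encode(image_bytes).decode("ascii"),
        "req_id": "onboarding-001",  # hypothetical request identifier
    }
    headers = {"Content-Type": "application/json", "token": token}
    return request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers=headers,
        method="POST",
    )
```

The response would then carry the live/spoof verdict together with the model's confidence score.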

Our API endpoint is already tested and ready for integration on our platform, Arya APIs. You can avail of a free trial to test any data of your choice, directly on the platform or via the API. Explore more at https://api.arya.ai

Happy Pinging!


Mansi Shah

Sr. Research Scientist | UCLA Graduate | Keeping up with the age of AI
Deepak Labh

Sr. Research Scientist