Use case #0001

Liveness detection vs face match: how Fraud AI uses both together

Face match and liveness detection are not the same check. A system that does only face match can be defeated with a printed photograph. A system that does only liveness detection can be passed by a live person holding someone else's identity documents. The Fraud Detection Agent AI runs both — simultaneously — because only the combination catches the full range of identity fraud attacks at onboarding.

Why one check is always insufficient

The evolution of identity fraud at lending onboarding follows a predictable pattern: it attacks whichever check is weakest. When face match was the only biometric gate, fraudsters printed high-resolution photographs and held them up to cameras. When basic liveness detection was added — asking the user to blink or turn their head — fraudsters moved to deepfakes: AI-generated video of a real person's face performing the requested gestures. When deepfake detection improved, the attack shifted to synthetic identities that have no real face to detect against at all.

Each of these attack vectors requires a different defensive mechanism. A photograph attack is defeated by liveness detection. A real-person-wrong-documents attack is defeated by face match against the identity document. A deepfake attack requires passive liveness detection using texture analysis and micro-movement patterns. A synthetic identity attack requires signal analysis beyond biometrics entirely. The Fraud Detection Agent AI runs layered checks because the threat landscape is layered.

"A lender that asks a fraudster to blink is running an anti-photograph check, not an anti-fraud check. Liveness detection and face match answer different questions — and both questions must be answered."

What liveness detection actually checks — and what it does not

Liveness detection answers one question: is this a live human being, present at this camera, right now? It does not ask whether that human being is who they claim to be. The check layers several forms of evidence, most of them gathered passively from the video feed itself.

Liveness Detection — what it checks
  • Passive texture analysis: Live skin has sub-surface micro-textures, pore patterns, and specular reflections that differ from printed photographs and rendered deepfakes. Analysed from a single frame — no user action required.
  • Micro-movement patterns: A live face exhibits imperceptible involuntary movements — micro-saccades, breathing-linked head movements, and blink patterns — that deepfake video renders inconsistently at the frame level.
  • Depth estimation: A flat photograph has no depth variation across the face plane. A live face has measurable depth difference between nose tip, cheek plane, and ear plane. Estimated from standard 2D camera without dedicated hardware.
  • Challenge-response (active): Where passive confidence is below threshold, a specific gesture challenge is issued — not a standard blink (which can be replayed) but a randomised multi-gesture sequence that cannot be pre-recorded.
  • Environmental consistency: Confirms that lighting direction, shadow angle, and background are consistent with a real video feed rather than a composited or screened one.
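The passive layers above can be fused into a single liveness score before any active challenge is issued. A minimal sketch of that fusion; the layer weights, the [0, 100] score scale, and the challenge threshold are illustrative assumptions, not the product's calibration:

```python
# Hypothetical fusion of passive liveness layers into one score.
# Layer names mirror the checks above; weights and threshold are illustrative.
PASSIVE_WEIGHTS = {
    "texture": 0.35,         # sub-surface micro-texture analysis
    "micro_movement": 0.30,  # involuntary movement consistency
    "depth": 0.20,           # estimated depth variation across the face plane
    "environment": 0.15,     # lighting / shadow / background consistency
}
ACTIVE_CHALLENGE_THRESHOLD = 85  # below this, escalate to an active challenge

def passive_liveness_score(layer_scores: dict[str, float]) -> float:
    """Weighted mean of per-layer scores, each in [0, 100]."""
    return sum(PASSIVE_WEIGHTS[name] * layer_scores[name] for name in PASSIVE_WEIGHTS)

def needs_active_challenge(layer_scores: dict[str, float]) -> bool:
    """True when passive confidence alone is insufficient."""
    return passive_liveness_score(layer_scores) < ACTIVE_CHALLENGE_THRESHOLD
```

The design point is the escalation: the passive score is computed with no user action at all, and only a below-threshold result triggers the active challenge described in the last bullet.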
Face Match — what it checks
  • Document face extraction: The identity document (Aadhaar, PAN, passport) is captured and the photograph extracted — handling variable document quality, print aging, and reflective laminates.
  • Facial geometry comparison: 128-point facial landmark mapping compared between the extracted document photo and the live capture frame. Match threshold calibrated to the identity document quality for that document type.
  • Ageing adjustment: Match algorithm applies estimated ageing adjustment based on document issue date — a 10-year-old Aadhaar photograph of a 22-year-old face will not match a 32-year-old face at the same threshold.
  • Cross-document consistency: Where multiple documents are presented (Aadhaar + PAN), the face is matched across all documents — a mismatch between document photographs (not just live vs document) is an independent fraud signal.
  • CKYC photo comparison: If a CKYC record exists, the stored photograph is retrieved and compared — detecting cases where a legitimate CKYC record is being used by a different person.
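The ageing adjustment can be pictured as a match threshold that relaxes with document age. A hedged sketch; the base threshold, relaxation rate, and floor below are illustrative assumptions rather than the product's calibration:

```python
from datetime import date

BASE_MATCH_THRESHOLD = 90.0  # required match score for a freshly issued document
RELAXATION_PER_YEAR = 0.8    # illustrative: tolerance grows as the photo ages
THRESHOLD_FLOOR = 75.0       # never relax below this, regardless of document age

def age_adjusted_threshold(issue_date: date, today: date) -> float:
    """Lower the required match score for older document photographs."""
    years_old = (today - issue_date).days / 365.25
    return max(THRESHOLD_FLOOR, BASE_MATCH_THRESHOLD - RELAXATION_PER_YEAR * years_old)

def face_match_passes(score: float, issue_date: date, today: date) -> bool:
    """Same raw score can pass against an old document and fail against a new one."""
    return score >= age_adjusted_threshold(issue_date, today)
```

This is why, as noted above, a decade-old Aadhaar photograph is not held to the same threshold as a recently issued passport: the same raw similarity score is read against a document-age-aware bar.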

The combined score: what triggers each routing decision

Liveness 97 · Face match 94 · Combined 95
Auto-pass · Proceed to underwriting

Both liveness and face match above threshold — strong biometric confidence

Application proceeds automatically. Biometric check logged with scores and timestamp. No manual review. Accounts for 84.2% of all onboarding biometric checks in the current week.

Liveness 88 · Face match 72 · Combined 78
Step-up · Active challenge issued

Passive liveness confident, face match borderline — possible document quality issue

Active challenge-response issued: randomised multi-gesture sequence. If passed, application proceeds with enhanced monitoring flag. If failed, escalated to V-KYC video agent. Common cause: aged Aadhaar photo, low-light document capture.

Liveness 41 · Face match 89 · Combined 54
Deepfake alert · Manual review + device check

Face match strong but liveness low — probable deepfake or screen replay attack

High face match + low liveness is the deepfake signature: the face is recognisably correct (real person's photograph/video) but the liveness texture and micro-movement analysis detects synthetic generation. Application flagged. Device fingerprint check initiated. Fraud team alert.

Liveness 92 · Face match 28 · Combined 42
Identity mismatch · Application suspended

Real live person — wrong face — wrong identity documents

High liveness + low face match indicates a real person present with another person's documents — impersonation fraud. Application suspended immediately. Fraud team notified. AML check triggered. If the documents belong to a third party, that party's record is also flagged for monitoring.

Liveness 34 · Face match 31 · Combined 32
Both low · Fraud ring indicator · Escalate

Neither liveness nor face match passing — synthetic identity attempt or coordinated attack

Both scores low simultaneously indicates a systematic attack: likely a synthetic identity with no real biometric anchor, or a coordinated submission using poor-quality fraudulent documents. Application rejected. Device fingerprint and network graph analysis triggered. Pattern shared with fraud monitoring system across institution's entire pipeline.
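The five routing outcomes above can be expressed as a pattern over the two scores, read together rather than averaged. A minimal sketch; the two cut-offs are illustrative assumptions chosen to reproduce the example cards, not published thresholds:

```python
PASS_THRESHOLD = 85        # illustrative: a score at or above this is "high"
BORDERLINE_THRESHOLD = 60  # illustrative: below this is "low", between is "borderline"

def route(liveness: int, face_match: int) -> str:
    """Map a (liveness, face match) score pair to a routing decision.

    Mirrors the five cards above: it is the *pattern* of the two scores,
    not their average, that identifies the attack type.
    """
    live_high = liveness >= PASS_THRESHOLD
    face_high = face_match >= PASS_THRESHOLD
    live_low = liveness < BORDERLINE_THRESHOLD
    face_low = face_match < BORDERLINE_THRESHOLD

    if live_high and face_high:
        return "auto-pass"          # proceed to underwriting
    if live_low and face_low:
        return "reject-fraud-ring"  # synthetic identity / coordinated attack
    if face_high and live_low:
        return "deepfake-alert"     # right face, synthetic capture
    if live_high and face_low:
        return "identity-mismatch"  # live person, someone else's documents
    return "step-up"                # borderline: active challenge or V-KYC
```

Note that a single combined number cannot make these distinctions: the deepfake card (41/89) and the impersonation card (92/28) sit on opposite sides of the same mid-range average, yet trigger entirely different responses.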

The deepfake threat: why passive liveness detection is now necessary

Until 2022, the dominant biometric fraud attack in Indian digital lending was the photograph replay: a fraudster held a printed or screen-displayed photograph of the victim's face to the camera. Active challenge-response — asking the user to blink or turn their head — defeated this attack reliably because a photograph cannot move.

From 2023 onward, the attack vector shifted. Generative AI tools capable of producing face-swapped video — where any face can be placed onto a moving video with photorealistic quality — became widely accessible. Active challenge-response is no longer sufficient: a deepfake video can perform the requested gesture. Passive liveness detection that analyses the frame-level texture and micro-movement signatures of live versus generated video is now the necessary baseline, with active challenge-response as a secondary confirmatory layer where passive confidence is insufficient.
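The move from fixed to randomised challenges can be sketched simply: because the gesture sequence is sampled fresh per session from an unpredictable source, a pre-recorded deepfake cannot anticipate it. The gesture vocabulary and sequence length below are illustrative assumptions:

```python
import secrets

# Illustrative gesture vocabulary; a real deployment would choose gestures
# that the passive layers can also verify frame-by-frame while performed.
GESTURES = ["turn-left", "turn-right", "look-up", "look-down", "open-mouth", "smile"]

def make_challenge(length: int = 3) -> list[str]:
    """Sample a fresh, unpredictable gesture sequence for one session.

    Uses a cryptographic RNG so the sequence cannot be predicted, and
    avoids immediate repeats so the sequence is easy for a live user
    to perform but impossible to satisfy with a pre-recorded clip.
    """
    sequence: list[str] = []
    while len(sequence) < length:
        gesture = secrets.choice(GESTURES)
        if not sequence or gesture != sequence[-1]:
            sequence.append(gesture)
    return sequence
```

A fixed "blink now" prompt is equivalent to a replayable password; the randomised sequence is a one-time challenge, which is why it survives as the confirmatory layer even after passive detection became the baseline.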

84.2% · Auto-pass rate — both checks above threshold, no manual intervention
12.6% · Step-up triggered — challenge issued or V-KYC required; most resolve and proceed
2.1% · Deepfake / impersonation alerts — fraud team review + device check triggered
1.1% · Both scores low — synthetic identity or coordinated attack pattern, application rejected

The combination is not redundancy — it is triangulation

Liveness detection and face match are not doing the same job twice. Liveness answers: is this a live human? Face match answers: is this the right human? Each check has a fraud pattern it catches and a fraud pattern it cannot catch alone. Running both simultaneously — and reading the pattern of both scores together, not just the combined average — is what allows the Fraud Detection Agent AI to distinguish a deepfake from a borderline face match, an impersonation attempt from a low-quality photograph, and a synthetic identity from a genuine application with a poor camera.
