
AI Agent Profile · LendingIQ · Bengaluru

Fraud Detection Agent AI

Function: Onboarding Fraud Detection
Runtime: AWS Bedrock · ap-south-1
Model: Claude Sonnet 4
Context window: 200K tokens

Division: Lending Operations


What this agent does

The Fraud Detection Agent AI (Onboarding) analyses the biometric and device signals captured during the onboarding selfie and liveness step — detecting spoofing attacks, assessing the face match between the selfie and the submitted ID document, flagging synthetic identity indicators in the applicant's demographic profile, and identifying device-level anomalies that suggest fraudulent onboarding behaviour. It produces a structured fraud signal report that is combined with the Fraud Risk Agent AI's application-data signals before the underwriting decision. Every flag requires human fraud analyst review — the agent detects, the human determines.

Primary functions

Liveness Detection

Every applicant — selfie capture step

Invoked when: applicant completes the selfie/liveness capture in the onboarding journey and the SDK returns liveness signals for interpretation

  • Reads the liveness signals from the capture SDK — active liveness challenge results (blink, turn head, smile — actions that a printed photograph or replayed video cannot perform), passive liveness score (texture and depth analysis of the captured frame that distinguishes a live face from a flat image), and anti-spoofing signals (screen replay detection, printed mask detection, 3D mask detection where the SDK supports it).
  • Interprets the combination of active and passive signals: a high passive liveness score with a failed active challenge is more likely a technical failure (poor lighting, connectivity issue) than a spoof attack; a low passive score with a borderline active challenge is more likely a presentation attack. The interpretation uses the signal combination, not any single signal in isolation.
  • Classifies the liveness outcome on a three-tier scale: Pass (both active and passive above configured thresholds — proceed), Marginal (one signal borderline — refer to VKYC for human-moderated confirmation), and Fail (spoof attack indicators present — flag for human fraud analyst review and block onboarding until resolved). The thresholds are configured by the fraud team and applied by the agent — the agent does not set the thresholds.
  • Cannot detect highly sophisticated deepfake injection attacks that operate at the camera API level — where a fraudster injects a pre-recorded deepfake video directly into the camera stream rather than presenting a physical artifact to the camera. These attacks bypass standard active liveness challenges. The agent flags any signal anomalies that may indicate injection (frame rate inconsistency, metadata mismatch, SDK integrity signals) but cannot definitively detect all injection methodologies.
Output: Liveness signal report — active liveness result per challenge, passive liveness score, anti-spoofing signals detected, combined liveness verdict (Pass / Marginal / Fail), and a technical notes field for any environmental factors (poor lighting, connectivity) that may explain a marginal result.
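The combined-signal interpretation above can be sketched as a small classifier. The threshold values, field names, and flag labels below are illustrative assumptions, not the fraud team's actual configuration (which, as noted, the agent applies but does not set):

```python
from dataclasses import dataclass

# Hypothetical thresholds -- configured by the fraud team in practice,
# never hard-coded by the agent.
PASSIVE_PASS = 0.85
PASSIVE_MARGINAL = 0.60

@dataclass
class LivenessSignals:
    active_passed: int      # active challenges passed (blink, turn, smile)
    active_total: int       # active challenges attempted
    passive_score: float    # 0.0-1.0 texture/depth liveness score
    spoof_flags: list[str]  # e.g. ["screen_replay", "printed_mask"]

def classify_liveness(s: LivenessSignals) -> str:
    """Three-tier verdict from the combination of signals, not any one alone."""
    # Any anti-spoofing hit fails outright: block onboarding and refer
    # to a human fraud analyst.
    if s.spoof_flags:
        return "Fail"
    active_ok = s.active_passed == s.active_total
    if active_ok and s.passive_score >= PASSIVE_PASS:
        return "Pass"
    # High passive score with a failed active challenge is more likely an
    # environmental/technical issue than a spoof, so it refers to VKYC
    # (Marginal) rather than failing.
    if s.passive_score >= PASSIVE_MARGINAL:
        return "Marginal"
    return "Fail"
```

A clean capture (`LivenessSignals(3, 3, 0.92, [])`) passes; a failed challenge with a strong passive score refers to VKYC as Marginal.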

Face Match

Every applicant — selfie vs ID document

Invoked when: liveness capture is complete and the selfie is available for comparison against the ID document photograph

  • Compares the live selfie captured during onboarding against the photograph on the submitted ID document (Aadhaar card, PAN card, or passport) using the face match engine — producing a similarity score and a match confidence classification. The match confidence is not a binary pass/fail but a confidence band: High Match, Probable Match, Possible Match, Low Match, No Match.
  • Applies different similarity thresholds for different risk contexts: a selfie matched against an Aadhaar card (which uses a compressed JPEG and may be decades old) accepts a lower similarity threshold than a selfie matched against a recent passport photograph. The threshold configuration accounts for the expected quality variation in different ID document types.
  • Flags specific face match anomalies beyond the similarity score: a photograph on the ID document that shows a significantly different age from the selfie (suggesting a borrowed ID card), a low-resolution or clearly scanned photograph on the document that prevents reliable matching, or a selfie taken in lighting conditions that significantly degrade matching accuracy (for example, strong backlighting). These are technical quality flags, not fraud determinations.
  • Does not match the selfie against a database of known individuals or a watch-list. Face matching is one-to-one (selfie to submitted document) and not one-to-many (selfie to database). One-to-many facial recognition is not performed in the onboarding flow.
Output: Face match report — similarity score, match confidence classification, threshold applied and why, document quality assessment, age consistency observation, and any technical quality flags that affected match reliability. Explicitly notes whether the result is a liveness-to-document match (high confidence) or a document-only match (lower confidence).
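The per-document thresholding can be illustrated as a lookup from document type to confidence bands. The numeric thresholds here are invented for illustration; the actual values vary by document type exactly because photo quality varies (an old compressed Aadhaar photo accepts a lower bar than a recent passport photo):

```python
# Illustrative thresholds only -- not LendingIQ's configuration.
# Lower bars for Aadhaar reflect older, heavily compressed card photos.
DOC_THRESHOLDS = {
    "aadhaar":  {"high": 0.80, "probable": 0.70, "possible": 0.55, "low": 0.40},
    "pan":      {"high": 0.85, "probable": 0.75, "possible": 0.60, "low": 0.45},
    "passport": {"high": 0.90, "probable": 0.80, "possible": 0.65, "low": 0.50},
}

def match_band(similarity: float, doc_type: str) -> str:
    """Map a one-to-one similarity score to a confidence band for the
    given document type. Not a binary pass/fail."""
    t = DOC_THRESHOLDS[doc_type]
    if similarity >= t["high"]:
        return "High Match"
    if similarity >= t["probable"]:
        return "Probable Match"
    if similarity >= t["possible"]:
        return "Possible Match"
    if similarity >= t["low"]:
        return "Low Match"
    return "No Match"
```

Note how the same score lands in different bands: 0.82 is a High Match against an Aadhaar card but only a Probable Match against a passport.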

Synthetic Identity Flags

Every applicant — demographic and behavioural analysis

Invoked when: applicant profile data is available for analysis alongside the biometric signals

  • Analyses the applicant's identity profile for synthetic identity indicators — patterns where a real PAN or Aadhaar number has been combined with fabricated or inconsistent demographic attributes. Common patterns: a very recent Aadhaar number combined with a stated age suggesting the Aadhaar was obtained recently for an older individual; a PAN creation date significantly after the date of birth (genuine PANs are typically created when an individual first needs them, usually in their 20s for employment); or a mobile number registered far more recently than the stated age of the borrower.
  • Cross-checks the demographic consistency of the submitted profile: date of birth on Aadhaar vs PAN vs stated age on the application, city of residence vs language of the Aadhaar (regional language on the card should be broadly consistent with the stated state of residence), and employer location vs stated home address (a Mumbai employer combined with a rural address in a different state is not inherently suspicious but is a consistency signal worth noting).
  • Matches the device fingerprint and session metadata against the synthetic identity pattern corpus — known synthetic identity operations often use the same device, IP subnet, or session timing pattern across multiple fraudulent applications, creating a network signal that individual application analysis cannot see. Flags are produced where current session metadata matches known synthetic identity ring patterns.
Output: Synthetic identity signal report — each indicator identified with the specific data fields that triggered it, severity classification per indicator, pattern corpus match status, and a composite synthetic identity risk score (Low / Elevated / High) based on the combination and severity of indicators.
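A few of the demographic consistency checks above can be sketched as rules over the applicant profile. The field names, severity labels, and the 40-year cut-off for a late-issued PAN are assumptions chosen for illustration, not the production rule set:

```python
from datetime import date

def synthetic_identity_flags(profile: dict) -> list[tuple[str, str]]:
    """Return (indicator, severity) pairs for a subset of the checks.
    Profile keys are hypothetical."""
    flags = []
    dob = profile["dob"]
    # PAN issued long after DOB: genuine PANs are typically obtained in
    # one's 20s, so issuance decades later in life is a signal.
    pan_age_at_issue = (profile["pan_issue_date"] - dob).days / 365.25
    if pan_age_at_issue > 40:
        flags.append(("pan_issued_late_in_life", "Elevated"))
    # Date of birth should agree across submitted documents.
    if profile["dob_on_pan"] != profile["dob_on_aadhaar"]:
        flags.append(("dob_mismatch_across_documents", "High"))
    # Mobile number registered only weeks before the application.
    mobile_tenure_years = (date.today() - profile["mobile_registered"]).days / 365.25
    if mobile_tenure_years < 0.25:
        flags.append(("very_recent_mobile_number", "Low"))
    return flags

def composite_risk(flags: list[tuple[str, str]]) -> str:
    """Composite Low / Elevated / High score from indicator combination."""
    severities = [s for _, s in flags]
    if "High" in severities or len(flags) >= 3:
        return "High"
    if "Elevated" in severities or len(flags) == 2:
        return "Elevated"
    return "Low"
```

The composite score deliberately escalates on the combination of indicators, mirroring the report's "combination and severity" language, rather than on any single field.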

Device Fingerprint Analysis

Every applicant — session and device signals

Invoked when: applicant initiates onboarding session and device/session metadata is available from the digital journey platform

  • Reads device and session signals from the onboarding platform: device ID, device model, operating system version, IP address and geolocation, VPN / proxy detection, emulator detection flag, rooted/jailbroken device flag, and session behaviour metadata (time-to-complete each step, interaction pattern, number of retries).
  • Checks the current device fingerprint against the device fingerprint store — a persistent record of devices used in prior onboarding sessions, flagged for fraudulent activity. A device fingerprint match to a device used in a prior declined or fraud-confirmed application is an immediate Red flag regardless of how clean the current applicant's profile appears.
  • Flags device-level anomalies associated with fraudulent onboarding: emulator environments (where a fraudster is running a virtual device to conceal their real device identity), VPN masking of geolocation (hiding the actual application location), rooted devices that may allow bypassing of SDK security controls, and abnormal session behaviour patterns (completing the entire onboarding in under 30 seconds, or using precisely scripted interaction patterns that no human would produce naturally).
Output: Device fingerprint report — device signals summary, known fraud device match status, anomaly flags with specific signal that triggered each flag, and a device risk classification (Clean / Anomalous / Known Fraud) that feeds directly into the composite onboarding fraud risk score.
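The device risk classification can be sketched as follows. The signal keys and the 30-second session floor come from the description above, but the exact shape of the signals dictionary is an assumption:

```python
def classify_device(signals: dict, known_fraud_devices: set[str]) -> tuple[str, list[str]]:
    """Device risk classification (Clean / Anomalous / Known Fraud) plus
    the anomaly flags that triggered it. Signal keys are hypothetical."""
    # A fingerprint-store match is an immediate red flag, regardless of
    # how clean the rest of the current session looks.
    if signals["device_id"] in known_fraud_devices:
        return "Known Fraud", ["fingerprint_store_match"]
    flags = []
    if signals.get("emulator"):
        flags.append("emulator_environment")
    if signals.get("vpn"):
        flags.append("vpn_or_proxy")
    if signals.get("rooted"):
        flags.append("rooted_device")
    # Completing the whole journey implausibly fast suggests scripting.
    if signals.get("session_seconds", 9999) < 30:
        flags.append("abnormally_fast_session")
    return ("Anomalous" if flags else "Clean"), flags
```

The early return for the fingerprint-store match mirrors the rule above: a known fraud device overrides everything else, which is also why the store must be kept current (see Known limitations).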

Hard guardrails

  • Will not decline an application autonomously on a fraud signal. All Red-rated signals trigger a mandatory human fraud analyst review. The underwriting pipeline is paused — not automatically rejected — pending human review and determination.
  • Will not perform one-to-many facial recognition against any person database, watch-list, or law enforcement database. Face matching is one-to-one: live selfie to submitted ID document only.
  • Will not store raw biometric data (selfie images, liveness video frames) beyond the DPDP-compliant retention period for the KYC record. Biometric signal scores (liveness score, match score) are retained; the underlying biometric data is not stored for model training or any secondary purpose.
  • Will not make a definitive fraud determination. It produces signals — "this session shows characteristics consistent with a spoof attack" — not conclusions. The human fraud analyst determines whether the signals constitute fraud.

Known limitations

Deepfake injection attacks at the camera API level are the most significant capability gap. Sophisticated adversaries can inject pre-recorded deepfake video directly into the camera stream, bypassing active liveness challenges entirely. The agent detects environmental anomalies associated with injection (frame rate inconsistencies, SDK integrity signals) but cannot reliably detect all injection techniques, particularly as the technology evolves rapidly.
Mitigation: Require the liveness SDK vendor to maintain active countermeasures against injection attacks and provide version updates as new injection techniques are detected. Include injection attack countermeasures as a mandatory SLA requirement in all VKYC and liveness vendor contracts, with a defined incident response timeline when new attack vectors are identified.
Face match accuracy degrades with poor-quality ID document photographs. Aadhaar cards printed with low-resolution photographs, cards that are worn, laminated with glare, or photographed at an angle will produce low face match scores regardless of whether the applicant is genuine. These false positives send genuine applicants to VKYC unnecessarily, increasing abandonment at the VKYC referral step.
Mitigation: Implement a document image quality pre-check before running face matching — if the ID document photograph does not meet minimum quality standards for reliable matching, route directly to VKYC rather than running a match that will produce an unreliable result. A known-unreliable result is worse than acknowledging the limitation upfront.
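The recommended quality pre-check amounts to a simple routing gate ahead of the match engine. The quality metrics and threshold values here are illustrative assumptions, not a specified standard:

```python
# Hypothetical minimum-quality gate: if the document photo cannot support
# a reliable match, route straight to VKYC instead of emitting a
# known-unreliable score. All thresholds are illustrative.
MIN_FACE_PX = 96      # minimum face-crop edge length, pixels
MAX_GLARE = 0.30      # max fraction of face region blown out by glare
MIN_SHARPNESS = 0.50  # normalised 0-1 sharpness score

def route_face_match(face_px: int, glare_ratio: float, sharpness: float) -> str:
    """Decide whether the document photo is good enough to attempt a match."""
    if face_px < MIN_FACE_PX or glare_ratio > MAX_GLARE or sharpness < MIN_SHARPNESS:
        return "route_to_vkyc"   # skip the match entirely
    return "run_face_match"
```
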
Device fingerprint matching requires the device fingerprint store to be current and consistent. If the fraud team does not consistently log confirmed fraud device IDs into the store after each confirmed case, the historical coverage degrades over time. A known fraud device that was never logged will pass a device match check as clean.
Mitigation: Build confirmed fraud case closure into the device fingerprint store update process — every confirmed onboarding fraud case must result in the device fingerprint(s) associated with it being logged in the store before the case is closed. This is a process discipline requirement, not a technical one.
Agent Profile · Fraud Detection Agent AI (Onboarding) · LendingIQ · Bengaluru
Last updated April 2026 · For internal use
