AI Agent Profile · LendingIQ · Bengaluru
Fraud Detection Agent AI
Division: Lending Operations
What this agent does
The Fraud Detection Agent AI (Onboarding) analyses the biometric and device signals captured during the onboarding selfie and liveness step — detecting spoofing attacks, assessing the face match between the selfie and the submitted ID document, flagging synthetic identity indicators in the applicant's demographic profile, and identifying device-level anomalies that suggest fraudulent onboarding behaviour. It produces a structured fraud signal report that is combined with the Fraud Risk Agent AI's application-data signals before the underwriting decision. Every flag requires human fraud analyst review — the agent detects, the human determines.
Primary functions
Liveness Detection
Every applicant — selfie capture step
Invoked when: applicant completes the selfie/liveness capture in the onboarding journey and the SDK returns liveness signals for interpretation
- Reads the liveness signals from the capture SDK — active liveness challenge results (blink, turn head, smile — actions that a printed photograph or replayed video cannot perform), passive liveness score (texture and depth analysis of the captured frame that distinguishes a live face from a flat image), and anti-spoofing signals (screen replay detection, printed mask detection, 3D mask detection where the SDK supports it).
- Interprets the combination of active and passive signals: a high passive liveness score with a failed active challenge is more likely a technical failure (poor lighting, connectivity issue) than a spoof attack; a low passive score with a borderline active challenge is more likely a presentation attack. The interpretation uses the signal combination, not any single signal in isolation.
- Classifies the liveness outcome on a three-tier scale: Pass (both active and passive above configured thresholds — proceed), Marginal (one signal borderline — refer to VKYC for human-moderated confirmation), and Fail (spoof attack indicators present — flag for human fraud analyst review and block onboarding until resolved). The thresholds are configured by the fraud team and applied by the agent — the agent does not set the thresholds.
- Cannot detect highly sophisticated deepfake injection attacks that operate at the camera API level — where a fraudster injects a pre-recorded deepfake video directly into the camera stream rather than presenting a physical artifact to the camera. These attacks bypass standard active liveness challenges. The agent flags any signal anomalies that may indicate injection (frame rate inconsistency, metadata mismatch, SDK integrity signals) but cannot definitively detect all injection methodologies.
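The three-tier interpretation described above can be sketched in a few lines. This is a minimal illustration, not the production logic: the threshold values, field names, and `LivenessSignals` structure are assumptions for the sketch, and in practice the thresholds come from the fraud team's configuration, not from code defaults.

```python
from dataclasses import dataclass, field
from enum import Enum

class LivenessOutcome(Enum):
    PASS = "pass"          # proceed with onboarding
    MARGINAL = "marginal"  # refer to VKYC for human-moderated confirmation
    FAIL = "fail"          # block onboarding pending fraud analyst review

@dataclass
class LivenessSignals:
    active_challenge_passed: bool       # blink / head-turn / smile result
    passive_score: float                # 0.0-1.0 texture/depth score from SDK
    spoof_flags: list = field(default_factory=list)  # e.g. ["screen_replay"]

def classify_liveness(sig: LivenessSignals,
                      passive_pass: float = 0.85,
                      passive_marginal: float = 0.60) -> LivenessOutcome:
    # Any explicit anti-spoofing signal overrides everything else.
    if sig.spoof_flags:
        return LivenessOutcome.FAIL
    if sig.active_challenge_passed and sig.passive_score >= passive_pass:
        return LivenessOutcome.PASS
    # A high passive score with a failed (or borderline) active challenge
    # reads as a likely technical failure (lighting, connectivity), so it
    # is routed to VKYC rather than failed outright.
    if sig.passive_score >= passive_marginal:
        return LivenessOutcome.MARGINAL
    # Low passive score + weak active result: likely presentation attack.
    return LivenessOutcome.FAIL
```

Note the ordering: the decision uses the combination of signals, never a single signal in isolation, which is the property the section above emphasises.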
Face Match
Every applicant — selfie vs ID document
Invoked when: liveness capture is complete and the selfie is available for comparison against the ID document photograph
- Compares the live selfie captured during onboarding against the photograph on the submitted ID document (Aadhaar card, PAN card, or passport) using the face match engine — producing a similarity score and a match confidence classification. The match confidence is not a binary pass/fail but a confidence band: High Match, Probable Match, Possible Match, Low Match, No Match.
- Applies different similarity thresholds for different risk contexts: a selfie matched against an Aadhaar card (which uses a compressed JPEG and may be decades old) accepts a lower similarity threshold than a selfie matched against a recent passport photograph. The threshold configuration accounts for the expected quality variation in different ID document types.
- Flags specific face match anomalies beyond the similarity score: a photograph on the ID document that shows a significantly different age from the selfie (suggesting a borrowed ID card), a low-resolution or clearly scanned photograph on the document that prevents reliable matching, or a selfie taken in lighting conditions that significantly degrade matching accuracy (for example, strong backlighting). These are technical quality flags, not fraud determinations.
- Does not match the selfie against a database of known individuals or a watch-list. Face matching is one-to-one (selfie to submitted document) and not one-to-many (selfie to database). One-to-many facial recognition is not performed in the onboarding flow.
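The per-document thresholding and confidence banding described above might look like the sketch below. The numeric thresholds here are illustrative placeholders only; the actual values are tuned by the fraud team per document type, reflecting that an Aadhaar photograph (compressed, possibly decades old) warrants a lower similarity floor than a recent passport photograph.

```python
# Illustrative per-document-type thresholds (not production values).
# Keys map a similarity floor to each confidence band.
DOC_THRESHOLDS = {
    "aadhaar":  {"high": 0.80, "probable": 0.70, "possible": 0.55, "low": 0.40},
    "pan":      {"high": 0.85, "probable": 0.75, "possible": 0.60, "low": 0.45},
    "passport": {"high": 0.90, "probable": 0.80, "possible": 0.65, "low": 0.50},
}

def match_band(similarity: float, doc_type: str) -> str:
    """Map a 0.0-1.0 similarity score to a confidence band for one doc type."""
    t = DOC_THRESHOLDS[doc_type]
    if similarity >= t["high"]:
        return "High Match"
    if similarity >= t["probable"]:
        return "Probable Match"
    if similarity >= t["possible"]:
        return "Possible Match"
    if similarity >= t["low"]:
        return "Low Match"
    return "No Match"
```

The same raw similarity score can therefore land in different bands depending on the document type, which is exactly the behaviour the section describes: banding is a function of (score, document context), not of the score alone.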
Synthetic Identity Flags
Every applicant — demographic and behavioural analysis
Invoked when: applicant profile data is available for analysis alongside the biometric signals
- Analyses the applicant's identity profile for synthetic identity indicators — patterns where a real PAN or Aadhaar number has been combined with fabricated or inconsistent demographic attributes. Common patterns: a very recent Aadhaar number combined with a stated age suggesting the Aadhaar was obtained recently for an older individual; a PAN creation date significantly after the date of birth (genuine PANs are typically created when an individual first needs them, usually in their 20s for employment); or a mobile number registered far more recently than the stated age of the borrower.
- Cross-checks the demographic consistency of the submitted profile: date of birth on Aadhaar vs PAN vs stated age on the application, city of residence vs language of the Aadhaar (regional language on the card should be broadly consistent with the stated state of residence), and employer location vs stated home address (a Mumbai employer combined with a rural address in a different state is not inherently suspicious but is a consistency signal worth noting).
- Matches the device fingerprint and session metadata against the synthetic identity pattern corpus — known synthetic identity operations often use the same device, IP subnet, or session timing pattern across multiple fraudulent applications, creating a network signal that individual application analysis cannot see. Flags are produced where current session metadata matches known synthetic identity ring patterns.
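A few of the cross-reference checks above can be sketched as simple consistency rules over the applicant profile. The field names, the 45-year PAN-creation cutoff, and the 90-day mobile-registration window are all assumptions made for this illustration; they are not the production schema or thresholds.

```python
from datetime import date

def synthetic_identity_flags(profile: dict) -> list:
    """Return consistency flags for a profile dict (illustrative fields)."""
    flags = []
    dob = profile["dob_aadhaar"]
    # Genuine PANs are typically created in the holder's 20s (first job or
    # first tax filing); a PAN allotted when the holder was already
    # middle-aged is a consistency signal, not a fraud determination.
    age_at_pan = (profile["pan_created"] - dob).days / 365.25
    if age_at_pan > 45:
        flags.append("pan_created_late_in_life")
    # Date of birth should agree across Aadhaar and PAN.
    if profile["dob_pan"] != dob:
        flags.append("dob_mismatch_across_documents")
    # A mobile number registered only weeks ago for a long-established
    # identity is another signal worth noting.
    if (date.today() - profile["mobile_registered"]).days < 90:
        flags.append("recently_registered_mobile")
    return flags
```

Each rule on its own is weak evidence; the section's point is that a synthetic identity tends to fail several such cross-references at once even when every individual KYC check passes.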
Device Fingerprint Analysis
Every applicant — session and device signals
Invoked when: applicant initiates onboarding session and device/session metadata is available from the digital journey platform
- Reads device and session signals from the onboarding platform: device ID, device model, operating system version, IP address and geolocation, VPN / proxy detection, emulator detection flag, rooted/jailbroken device flag, and session behaviour metadata (time-to-complete each step, interaction pattern, number of retries).
- Checks the current device fingerprint against the device fingerprint store — a persistent record of devices used in prior onboarding sessions, flagged for fraudulent activity. A device fingerprint match to a device used in a prior declined or fraud-confirmed application is an immediate Red flag regardless of how clean the current applicant's profile appears.
- Flags device-level anomalies associated with fraudulent onboarding: emulator environments (where a fraudster is running a virtual device to conceal their real device identity), VPN masking of geolocation (hiding the actual application location), rooted devices that may allow bypassing of SDK security controls, and abnormal session behaviour patterns (completing the entire onboarding in under 30 seconds, or scripted interaction patterns too uniform for any human to produce naturally).
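The device checks above can be sketched as a single pass over the session metadata. The session field names and the fraud-device store being a simple set of device IDs are assumptions for this illustration; the one rule taken directly from the section is that a match to a known-fraud device is an immediate Red flag regardless of how clean the current profile looks.

```python
def device_risk_flags(session: dict, fraud_device_store: set) -> list:
    """Return risk flags for one onboarding session (illustrative fields)."""
    flags = []
    # A device seen in a prior declined or fraud-confirmed application
    # is an immediate Red flag, independent of the applicant's profile.
    if session["device_id"] in fraud_device_store:
        flags.append("RED:known_fraud_device")
    if session.get("emulator"):
        flags.append("emulator_environment")
    if session.get("vpn_or_proxy"):
        flags.append("geolocation_masked")
    if session.get("rooted"):
        flags.append("rooted_device")
    # Completing the entire journey in under 30 seconds suggests scripting.
    if session["total_seconds"] < 30:
        flags.append("abnormally_fast_session")
    return flags
```

Because the device fingerprint store persists across applications, the same `device_id` appearing under many different identities surfaces here as a network signal that no single-application review could see.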
Hard guardrails
Known limitations
Important Reads
Learn more about how to deploy the Fraud Detection Agent AI in your lending workflow.
- Use case #0001 · Liveness detection vs face match: how Fraud AI uses both together. Face match and liveness detection are not the same check. A system that does only face match can be defeated with a printed photograph. A system that does only liveness detection can be passed by a live person holding someone else's identity documents. The Fraud Detection Agent AI runs both — simultaneously — because only the combination catches the full range of identity fraud attacks at onboarding. Read article →
- Use case #0002 · Synthetic identity fraud: the 7 signals Fraud Detection AI checks at onboarding. A synthetic identity is not a stolen identity — it is an invented one. It may use a real PAN number combined with a fabricated name and address, or a real Aadhaar number whose photograph has been replaced, or a completely manufactured identity that passes every individual KYC check in isolation but fails when its signals are cross-referenced. The Fraud Detection Agent AI was built to find the cross-reference failures that individual checks cannot catch. Read article →
- Use case #0003 · Device fingerprinting: how Fraud AI links multiple applications to one fraudster. A fraudster who submits ten applications using ten different identities believes they are invisible — because each identity, in isolation, may look legitimate. What they have not changed is the device. Device fingerprinting does not identify a person. It identifies a machine — and a machine that has submitted ten applications with ten different names, PANs, and Aadhaar numbers is not ten borrowers. It is one fraud operation. Read article →
