The value of the Document Verification Agent AI is not just in what it rejects — it is in what it approves without escalation. Every document that is auto-approved is a document that does not consume underwriter review time. Every document that is correctly escalated is a document that reaches the right human at the right level with the right context. The exception framework determines both outcomes simultaneously.
Why the escalation boundary is the most important design decision
Set the auto-approval threshold too high and most documents escalate, defeating the purpose of automation. Set it too low and the system approves documents that should have been reviewed, creating downstream risk. The exception framework in the Document Verification Agent AI sets the boundary based on a principle: the AI auto-approves when it has high confidence that a human reviewer would reach the same conclusion, and escalates when the decision requires context, judgment, or authority that the AI cannot provide.
This means the auto-approval threshold is calibrated against actual human reviewer outcomes on a validated dataset — not set arbitrarily. When the AI produces an authenticity score of 90 or above with no policy flags, human reviewers in validation have agreed with the auto-approval result 99.2% of the time. When the authenticity score is between 75 and 89, human reviewer outcomes diverge more significantly, and escalation is appropriate. Below 75, the forgery probability is high enough that the decision requires both human judgment and a documented rationale.
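The calibration idea can be sketched in a few lines. This is a minimal illustration, not the production system: the record shape, function names, and synthetic threshold sweep are assumptions, but the logic (pick the lowest score threshold at which validated human reviewers agree with auto-approval at the target rate) follows the description above.

```python
# Hypothetical calibration sketch. Data shape and function names are
# illustrative assumptions, not the Document Verification Agent's API.
from dataclasses import dataclass

@dataclass
class ValidationRecord:
    authenticity_score: int   # 0-100 score produced by the AI
    human_approved: bool      # did the human reviewer independently approve?

def agreement_rate(records, threshold):
    """Share of auto-approvable documents (score >= threshold) where the
    human reviewer reached the same approve decision."""
    eligible = [r for r in records if r.authenticity_score >= threshold]
    if not eligible:
        return None
    return sum(1 for r in eligible if r.human_approved) / len(eligible)

def calibrate(records, target=0.99):
    """Return the lowest threshold that still meets the target agreement
    level, so automation coverage is maximised without losing accuracy."""
    for threshold in range(60, 101):
        rate = agreement_rate(records, threshold)
        if rate is not None and rate >= target:
            return threshold
    return None
```

The design point is that the threshold is an output of validation data, not an input chosen by fiat.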
The four-tier exception framework
Auto-Pass
Document is authentic and policy-compliant — auto-approved, no human review required
All 11 forgery signals return pass or low-severity amber. Authenticity score 90 or above. All applicable income policy checks pass. No cross-document discrepancies above tolerance. The document is written to the LOS as verified with no flags. The credit underwriter receives the file with all documents marked as verified — they do not need to review the raw documents unless they choose to.
Today's volume: 74.2% of all document submissions · Zero human review time required
Soft Flag
Document is likely authentic but has one or more signals that require credit team awareness
One or two amber signals are present — not forgery indicators, but data anomalies that the credit team should be aware of. Examples: a minor bank credit discrepancy (within 5%, likely variable reimbursements), an employment letter 2 days outside the 90-day window, a Form 16 vs salary slip variance of 11% (below the 15% fail threshold but above the 10% context note threshold). The document proceeds to the credit queue but is marked with a soft flag — the credit team sees the flag and the explanation, and decides whether to request clarification or proceed on the basis of the document as-is.
Today's volume: 18.4% of submissions · Credit team sees flag and decides — typically 3–5 minutes of review per flagged document
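The soft-flag tolerances above (a bank-credit variance within 5%, an employment letter just outside the 90-day window, a Form 16 vs salary slip variance between the 10% context-note threshold and the 15% fail threshold) can be expressed as simple checks. The threshold values come from the text; the function names are illustrative assumptions.

```python
# Illustrative sketch of the soft-flag tolerance checks described above.
# Thresholds (5% / 10% / 15% / 90 days) come from the text; names are assumed.

def pct_variance(stated, observed):
    """Absolute percentage variance of an observed figure vs the stated one."""
    return abs(stated - observed) / stated * 100

def form16_vs_slip_check(variance_pct):
    """15%+ fails outright; 10-15% passes with a context note (soft flag);
    under 10% is clean."""
    if variance_pct >= 15:
        return "fail"
    if variance_pct >= 10:
        return "soft_flag"
    return "pass"

def letter_age_check(age_days, window_days=90):
    """A letter slightly outside the 90-day window is a soft flag, not a block."""
    return "soft_flag" if age_days > window_days else "pass"

# An 11% Form 16 vs slip variance lands in the context-note band.
print(form16_vs_slip_check(pct_variance(100_000, 89_000)))  # → soft_flag
print(letter_age_check(92))  # 2 days outside the window → soft_flag
```

In each case the document still proceeds to the credit queue; the check only determines what annotation travels with it.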
Hard Flag
Document has a material anomaly — processing held pending specific resolution
One red signal is present, or the authenticity score has dropped into the 60–74 range, or a material policy gap exists (e.g., GST returns covering only 6 of the required 8 quarters even though the business has operated for all 8). Processing on this file is held — the credit underwriter cannot access the file for credit review until the document exception is resolved. An exception brief is generated specifying exactly what is anomalous, why processing is held, and what specific document or information would resolve the hold. The brief goes to the origination team, not the credit team — this is an origination problem, not a credit problem.
Today's volume: 5.8% of submissions · File held · Origination team resolves · Average resolution: 18 hours
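The exception brief described above has three essential parts: what is anomalous, why the file is held, and what would clear the hold. A minimal sketch of that structure, with hypothetical field names (not the production schema):

```python
# Hypothetical shape of a hard-flag exception brief; field names are
# illustrative assumptions, not the production system's schema.
from dataclasses import dataclass

@dataclass
class ExceptionBrief:
    anomaly: str                      # exactly what was detected
    hold_reason: str                  # why processing is held
    resolution: str                   # what document/information clears the hold
    routed_to: str = "origination"    # hard flags go to origination, not credit

brief = ExceptionBrief(
    anomaly="GST returns cover only 6 of the required 8 quarters",
    hold_reason="Material policy gap: income history incomplete",
    resolution="Obtain GSTR filings for the two missing quarters",
)
print(brief.routed_to)  # → origination
```

Forcing every hold to carry a concrete resolution step is what keeps the 18-hour average resolution time from drifting upward.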
Fraud Alert
Document exhibits multiple strong forgery signals — fraud team alert, application suspended
Two or more red signals are present, or the authenticity score is below 60. The application is suspended immediately — not returned to origination for correction, but flagged as a potential fraud case. The fraud alert goes to the fraud team and the CCO, not to the origination team. The borrower is not notified of the reason for the application suspension in specific terms (to avoid alerting a fraudster to which signals failed). The device fingerprint, IP address, and identity data are cross-checked against the fraud consortium database. A detailed forensic report is generated for the fraud team's review.
Today's volume: 1.6% of submissions · Application suspended · Fraud team alerted · Consortium cross-check triggered
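The four tiers above reduce to a small routing function. This is a minimal sketch under stated assumptions: the function name and signal representation are illustrative, the score bands follow the text (90+ auto-pass, 60–74 hard flag, below 60 fraud alert), and low-severity ambers are treated as compatible with auto-pass while only notable ambers trigger a soft flag, as the tier descriptions indicate.

```python
# Minimal sketch of the four-tier routing rules described above.
# Names and signal representation are assumptions; thresholds come from the text.

def route(score, red_signals, notable_ambers=0, policy_gap=False):
    """Map a verified document's signals to one of the four exception tiers."""
    if red_signals >= 2 or score < 60:
        return "fraud_alert"     # suspend application; fraud team + CCO
    if red_signals == 1 or score <= 74 or policy_gap:
        return "hard_flag"       # hold file; origination resolves
    if notable_ambers >= 1 or score < 90:
        return "soft_flag"       # proceed with a note to the credit team
    return "auto_pass"           # written to LOS as verified; no human review

print(route(score=94, red_signals=0))                     # → auto_pass
print(route(score=87, red_signals=0, notable_ambers=1))   # → soft_flag
print(route(score=68, red_signals=1))                     # → hard_flag
print(route(score=31, red_signals=3))                     # → fraud_alert
```

Note the ordering: fraud conditions are checked first, so two red signals suspend the application even if the score alone would only warrant a hold.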
Today's exception distribution across 1,284 document sets
The 12 exception scenarios and their routing
All 11 signals pass · FOIR 38% · All policy checks clear
Standard genuine salaried application. No anomalies. All policy requirements met. Document auto-approved. LOS record updated with verified status. Credit team receives pre-cleared file.
→ Auto-pass · 0 minutes human review
EPFO prior month rounding discrepancy · 87% score
PF shown on slip ₹6,240 vs EPFO passbook ₹6,000 in the prior month. Likely a rounding adjustment at the payroll system level. Score: 87 — below the auto-pass threshold but comfortably above the hard-flag range. Soft flag with explanation. Credit team decision: proceed; the variance is within normal payroll processing tolerance.
→ Soft flag · 3 min credit review · Typically proceeds
Employment letter 92 days old · 2 days outside 90-day window
Genuine employment letter, all content verified — but dated 92 days before application vs 90-day policy requirement. Soft flag: credit team may accept with documented rationale (minimal exposure, genuine document) or request fresh letter. Neither a hard block nor auto-approval.
→ Soft flag · credit team decision on policy exception
Secondary salary credit in bank statement — 3 of 18 months
Bank statement shows a second credit in 3 non-consecutive months (₹12,000–₹18,000). Likely freelance income or family transfer. Not an income inflation red flag but warrants a credit team note. Soft flag with explanation: secondary credit detected, not included in eligible income computation.
→ Soft flag · credit team to classify source
ITR submitted but no acknowledgement number — unfiled ITR
Form ITR-3 submitted as income proof — document appears genuine but acknowledgement number search on income tax portal returns no match. ITR prepared but not filed. Cannot be used as income proof under credit policy. File held. Origination briefed to request filed ITR with acknowledgement.
→ Hard flag · file held · origination action required
Salary slip vs bank credit discrepancy — 18% for 2 months
Stated net salary on slip: ₹84,000. Bank credit for same months: ₹68,800 and ₹71,200 respectively — divergence of 15–18%, well above 10% tolerance. One genuine document has the correct figure; the other may have been inflated. File held. Origination asked to obtain explanation and supporting documentation.
→ Hard flag · both documents held · explanation required
Employer GST not found · Company not in MCA registry
Employer on salary slip: "InfoSystems Pvt Ltd, Pune." GSTIN database: no match. MCA registry: no match. No digital footprint of employer exists. This may be a legitimate small company operating below GST threshold, or a shell employer. File held. Origination to obtain employer registration certificate and banking letter confirming employment relationship.
→ Hard flag · file held · employer verification required
PDF created in Canva · Math inconsistency · TDS implausible
Three red signals simultaneously: creator metadata shows Canva, gross salary components do not sum to the stated gross, and TDS of ₹400 on a gross of ₹1,05,000 is implausibly low under any tax scenario. Score: 31. Application suspended. Fraud team alerted. Device fingerprint cross-checked against the fraud consortium. Three concurrent red signals indicate coordinated forgery, not data errors.
→ Fraud alert · application suspended · fraud team + CCO notified
Bank statement template match · Known fraudster forum source
Submitted bank statement compression patterns and formatting markers match a template circulating in the AI's fraud pattern library — a template previously identified from confirmed forgery cases. Score: 28. Even though the document "looks" like a genuine HDFC statement visually, the digital signatures match the fraudulent template. Application suspended. Forensic report generated for fraud team.
→ Fraud alert · pattern library match · forensic report generated
Self-employed — only 6 quarters GST submitted vs 8 required
Business registered 20 months ago — only 6 quarters of GSTR data available (business did not exist for the required 8 quarters). Not a forgery; a business vintage limitation. Soft flag with eligibility note: current product requires 24-month GST history — borrower may be eligible for a different product tier or thin-file pathway. Credit team informed to discuss product alternatives with borrower.
→ Soft flag · product eligibility discussion required
Slight name variation between Aadhaar and PAN — abbreviation
Aadhaar: "Priya Ramachandran Sharma." PAN: "P. R. Sharma." Name reconciliation confidence: 96% — recognised abbreviation pattern. Score unaffected. Auto-resolved with reconciliation note. Not flagged to credit team — this is a known and common Indian name rendering variation handled entirely within the Origination AI's name reconciliation layer.
→ Auto-resolved · no flag · reconciliation note logged
Form 16 employer TAN not found in TRACES filing database
Form 16 submitted with employer TAN AAACT12345A. TRACES database check: no TDS filings found under this TAN for the stated assessment year. Either the TAN is incorrect, the employer has not filed TDS returns, or the Form 16 is fraudulent. File held. Origination to obtain employer TDS filing confirmation or alternative income proof. Escalated as a hard flag (Tier 3) rather than a fraud alert (Tier 4) because the evidence is inconclusive — employer error is possible.
→ Hard flag · TRACES non-match · origination to resolve
The 74.2% is the ROI — the 1.6% is the protection
The Document Verification Agent AI's value proposition is not primarily in catching fraud — it is in clearing the 74% of documents that do not need human attention so that human attention is available for the 26% that do. Manually reviewing all 1,284 daily document sets at 15 minutes each would consume roughly 320 person-hours; reviewing only the ~330 flagged documents takes 27.5 person-hours, spent on the documents that actually require judgment. The 74.2% auto-pass rate is not a risk — it is a design achievement. The system auto-approves exactly what experienced reviewers would approve, at a fraction of the time cost, with a documented verification trail for every file.
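The time-savings arithmetic above can be reproduced directly; the figures are the ones stated in the text, and the exact full-review total works out to about 321 hours, which the text rounds to 320.

```python
# Reproducing the review-time arithmetic from the paragraph above.
total_docs = 1284          # document sets per day
full_review_min = 15       # minutes per document under full manual review
flagged_docs = 330         # soft + hard + fraud flags (~25.8% of volume)
flagged_review_min = 5     # minutes per flagged document

full_hours = total_docs * full_review_min / 60       # ≈ 321 h (text rounds to 320)
flagged_hours = flagged_docs * flagged_review_min / 60   # 27.5 h
print(full_hours, flagged_hours)  # → 321.0 27.5
```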
