Use case #0003

Underwriting AI Compliance: Audit Trail for Every Decision

An underwriting decision made by an AI model must be as explainable and as defensible as one made by a human underwriter — arguably more so, because the regulator will inspect it systematically rather than selectively. The Credit Underwriting AI generates a complete, immutable audit trail for every application: every signal read, every weight applied, every threshold checked, every override considered, and the final decision with the exact factors that drove it. This record exists from the moment the application is received to the moment the decision is communicated.

What Regulators Actually Look For in Automated Underwriting

An RBI inspection team reviewing an institution's automated credit decisioning is asking a specific set of questions. Can the institution explain the basis of any individual credit decision — including the specific signals that caused an approval or rejection? Does the model discriminate, directly or indirectly, against protected categories of borrowers? Is the model validated against a test population before deployment? Is there a human override pathway, and is it used consistently? Can a borrower who disputes a decision receive a meaningful explanation?

These questions are not hypothetical — they are the supervisory reality that the RBI and the incoming DPDP enforcement architecture will impose on any institution that uses algorithmic credit decisioning. An institution whose AI underwriting model cannot answer all five questions — for every decision, at any point in time — is operating with significant regulatory exposure.

The Credit Underwriting AI is built from the ground up to answer all five questions. The audit trail is not a reporting feature added on top of the model — it is a core architectural requirement that shapes how every decision is generated and stored.
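
The "immutable" property described above can be illustrated as an append-only, hash-chained event log, where each entry commits to the hash of its predecessor so no earlier record can be silently altered. This is a minimal Python sketch under assumed names: the `AuditTrail` class, event names, and field layout are hypothetical, not the product's actual schema.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only, hash-chained event log for one application.

    Hypothetical sketch: the class name, event names, and field
    layout are illustrative, not the product's actual schema.
    """

    def __init__(self, application_id: str):
        self.application_id = application_id
        self.events = []
        self._last_hash = "GENESIS"

    def record(self, actor: str, event: str, detail: dict) -> str:
        entry = {
            "application_id": self.application_id,
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "event": event,
            "detail": detail,
            "prev_hash": self._last_hash,  # links this entry to its predecessor
        }
        # The hash covers prev_hash, so altering any earlier entry
        # invalidates every later one.
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = entry_hash
        self.events.append(entry)
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Recompute the whole chain; any tampering breaks it."""
        prev = "GENESIS"
        for e in self.events:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail("LA25-8841")
trail.record("System", "intake", {"consent": "bureau+income+valuation"})
trail.record("UnderwritingAI", "decision", {"outcome": "APPROVE"})
```

Sealing the trail at communication time then amounts to publishing the final hash; re-running `verify()` later proves that no entry was edited, dropped, or reordered.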

"An AI underwriting model that cannot explain its own decisions is not an underwriting system — it is a liability with an approval rate." — Credit Underwriting AI · Compliance Architecture Documentation

The Four Compliance Frameworks the Audit Trail Addresses

RBI Regulatory Framework (Primary): Fair Practices Code & Prudential Norms

Every credit decision must be explainable to the borrower on request. The AI generates the explanation as a structural output of the decision process — not a post-hoc rationalisation. The decision rationale is stored with the application record for a minimum of five years, as required by RBI record-keeping norms.

RBI Master Direction — Fair Practices Code 4–6

SEBI / NBFC Governance (Model Risk): Model Validation & Governance Requirements

Credit models must be validated before deployment and periodically thereafter. The AI maintains model version history, validation test results, champion–challenger performance data, and model risk classification. Every decision records which model version produced it — enabling retroactive impact analysis if a model defect is discovered.

NBFC Model Risk Management Guidelines · RBI Internal Audit Framework

DPDP Act 2023 (Automated Decisions): Automated Decision Transparency

The DPDP Act requires data fiduciaries to inform individuals when decisions are made solely by automated means and to provide a meaningful explanation on request. The AI generates a borrower-facing explanation alongside every decision and maintains the right-to-review pathway that the Act mandates.

DPDP Act 6, 7, 11 — Automated Processing Obligations

Algorithmic Fairness (Bias Monitoring): Anti-Discrimination & Bias Detection

The audit trail enables ongoing bias monitoring: approval rates, interest rates, and credit limits are tracked by gender, geography, religion-correlated name patterns, and other protected proxies. Statistically significant disparities trigger model review. No protected characteristic is used as a direct feature, and proxy variables are regularly tested for discriminatory effect.

RBI Guidelines on Fair Lending · Internal AI Governance Framework
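
The borrower-facing explanation required under the DPDP Act can be derived directly from per-signal score contributions such as those recorded in the audit trail below. This is a hypothetical sketch: the function name, signal labels, and phrasing are illustrative, not the product's actual explanation generator.

```python
# Hypothetical sketch: rank signals by absolute contribution and surface
# the top drivers as the borrower-facing explanation. Signal names and
# output phrasing are illustrative assumptions.
def explain_decision(contributions: dict, outcome: str, top_n: int = 3) -> str:
    """Rank signals by absolute contribution and name the top drivers."""
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    drivers = ", ".join(f"{name} ({score:+.2f})"
                        for name, score in ranked[:top_n])
    return f"Decision: {outcome}. Main factors: {drivers}."

message = explain_decision(
    {"EMI track record (24m clean)": +0.31,
     "GST turnover trend (+33% YoY)": +0.24,
     "Average bank balance": +0.18,
     "CIBIL score 718": +0.16,
     "Late GST filings (2)": -0.06},
    outcome="Approved",
)
```

Because the explanation is computed from the same stored signal vector that produced the decision, it is a structural output rather than a post-hoc rationalisation.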

The Complete Per-Decision Audit Trail

Underwriting Decision Audit Trail — Application LA25-8841
Self-Employed LAP · ₹42L · Decision: Approved ₹38L

Nov 14, 09:12:04 · System — Application Intake [System]
Application received via App. Borrower identity verified: Aadhaar OTP + PAN match. Application checksum generated: [hash]. Data consent logged: bureau pull, income verification, property valuation. DPDP consent record: [ID].

Nov 14, 09:12:41 · Underwriting AI — Data Ingestion Layer [Auto]
Bureau pull: CIBIL API called; response: score 718. Experian API: 724. Using CIBIL as primary per model config v4.2. Bank statement parsed: 18 months, 186 transactions classified. GST API queried: 12 months of turnover data retrieved. All API calls logged with timestamps and response hashes.

Nov 14, 09:13:18 · Underwriting AI — Signal Computation [Auto]
42 signals computed from raw data. High-weight signals: GST turnover trend +33% YoY (score contribution: +0.24); EMI track record, 24 months clean (contribution: +0.31); average bank balance ₹3.8L (contribution: +0.18); CIBIL 718 (contribution: +0.16). Negative signals: 2 late GST filings (contribution: −0.06). Full signal vector stored: [reference].

Nov 14, 09:13:22 · Underwriting AI — Policy Rules Engine [Auto]
Credit policy version 4.8 applied. Checks: FOIR 38.2% (pass: limit <45%); LTV at ₹42L = 67.3% (fail: limit 65% for SE-LAP). LTV constraint applied: max sanction ₹38L = 60.8% LTV (pass). Segment: Self-Employed LAP, Tier 1 city, residential collateral. Policy gate: all mandatory checks passed at ₹38L. Pricing rule: 10.8% applicable.

Nov 14, 09:13:24 · Underwriting AI — Decision Engine (Model v4.2) [Auto]
ML model inference: risk band B+ (estimated default probability 1.8%). Rule-based policy gate: passed at ₹38L. Combined decision: APPROVE ₹38L at 10.8%. Confidence: 0.94. Alternate path considered: ₹42L at 10.9% with additional collateral — rejected (borrower did not offer). Decision rationale vector stored for borrower communication and audit.

Nov 14, 09:14:02 · System — Sanction Letter Generation [Auto]
Sanction letter generated. Borrower communication dispatched via App + WhatsApp. Audit trail sealed: [hash]. Record archived per the RBI five-year retention requirement. CIBIL reporting flagged for post-disbursement. Borrower data access log entry created per the DPDP Act.
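
The Policy Rules Engine step in the trail above (FOIR and LTV checks, with the sanction capped when the LTV ceiling is breached) can be sketched roughly as follows. The constants mirror the logged v4.8 checks; the function, dataclass, and cap computation are illustrative assumptions, and the production engine evidently applies further adjustments to arrive at its exact capped figure.

```python
import math
from dataclasses import dataclass

# Hypothetical constants loosely mirroring the v4.8 checks logged above;
# real limits would live in versioned policy configuration.
MAX_FOIR = 0.45        # fixed-obligation-to-income ratio ceiling
MAX_LTV_SE_LAP = 0.65  # loan-to-value ceiling for self-employed LAP

@dataclass
class PolicyResult:
    approved_amount: float  # in lakh, possibly capped below the request
    ltv: float
    foir_pass: bool
    notes: list

def apply_policy_gate(requested: float, property_value: float,
                      monthly_obligations: float,
                      monthly_income: float) -> PolicyResult:
    notes = []
    foir = monthly_obligations / monthly_income
    foir_pass = foir < MAX_FOIR
    amount = requested
    if requested / property_value > MAX_LTV_SE_LAP:
        # Cap the sanction (rounded down to 0.1L) so the LTV ceiling
        # holds; the real engine evidently applies further adjustments
        # before settling on its final figure.
        amount = math.floor(property_value * MAX_LTV_SE_LAP * 10) / 10
        notes.append(f"LTV cap applied: {requested}L requested exceeds "
                     f"{MAX_LTV_SE_LAP:.0%} of collateral value")
    return PolicyResult(amount, amount / property_value, foir_pass, notes)

# 42L requested against collateral worth ~62.4L puts LTV at 67.3%,
# over the 65% ceiling, so the gate caps the sanction.
result = apply_policy_gate(42.0, 62.4, 0.382, 1.0)
```

The key property for the audit trail is that the gate returns its reasoning (`notes`) alongside the figure, so the capped amount is never an unexplained output.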

Bias Monitoring: The Fairness Dashboard the AI Runs Monthly

| Dimension Monitored | Approval Rate | Avg Interest Rate | Avg Sanctioned Amount | vs Baseline | Disparity Threshold | Status |
|---|---|---|---|---|---|---|
| Salaried — Male | 68.4% | 9.12% | ₹58.4L | Baseline | ±5% approval, ±0.5% rate | Clean |
| Salaried — Female | 70.1% | 9.08% | ₹57.2L | +1.7pp approval | ±5% approval, ±0.5% rate | Clean |
| Self-Employed — Male | 54.2% | 10.84% | ₹42.1L | Segment baseline | ±5% approval, ±0.5% rate | Clean |
| Self-Employed — Female | 52.8% | 10.88% | ₹38.4L | −1.4pp approval | ±5% approval, ±0.5% rate | Clean |
| North-East Geography | 41.2% | 11.24% | ₹28.6L | −13.2pp approval | ±5% approval, ±0.5% rate | Under Review |
| Age 22–28 (First-time) | 58.4% | 9.84% | ₹34.8L | −10pp vs 30–45 segment | Age-based segment difference expected | Explainable |
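
The Status column's logic, comparing each dimension's deltas to the ±5-point approval and ±0.5% rate tolerances and marking documented structural causes as Explainable, can be sketched as follows. The segment deltas are taken from the table; the function name and classification rules are assumptions.

```python
# Sketch of the monthly disparity check behind the dashboard; the
# tolerances come from the table, the function name and classification
# rules are assumptions.
APPROVAL_TOLERANCE_PP = 5.0  # approval-rate delta vs baseline, in points
RATE_TOLERANCE_PCT = 0.5     # avg-interest-rate delta vs baseline, in points

def fairness_status(approval_delta_pp: float, rate_delta_pct: float,
                    explainable: bool = False) -> str:
    """Classify one monitored dimension against the thresholds."""
    within = (abs(approval_delta_pp) <= APPROVAL_TOLERANCE_PP
              and abs(rate_delta_pct) <= RATE_TOLERANCE_PCT)
    if within:
        return "Clean"
    # Breaches with a documented structural cause (e.g. the age-based
    # segment difference) are Explainable; everything else is reviewed.
    return "Explainable" if explainable else "Under Review"

rows = {
    "Salaried - Female":      fairness_status(+1.7, -0.04),
    "Self-Employed - Female": fairness_status(-1.4, +0.04),
    "North-East Geography":   fairness_status(-13.2, +0.40),
    "Age 22-28 (First-time)": fairness_status(-10.0, 0.0, explainable=True),
}
```

Running this over every monitored dimension each month reproduces the dashboard; only breaches without a documented cause escalate to model review.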

The Human Override Protocol: When AI Hands Off

Every automated credit decision includes a human override pathway — not as a formality but as a structurally required governance element. Borrowers can request a human review of any automated decision within 30 days. When a human underwriter reviews an AI-declined application and approves it, the override is logged with: the human underwriter's identity, the specific factors that led them to override the model, the risk classification of the override, and the decision rationale. This override data is fed back to the model team quarterly — systematic override patterns indicate model blind spots that need correction.

The override protocol also runs in the other direction: when the AI approves a high-value application above a defined threshold (currently ₹1.5Cr for unsecured, ₹5Cr for secured), a mandatory human underwriter review is triggered before the sanction letter is issued. The AI's recommendation is an input to the human decision, not a replacement for it at these exposure levels.
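
The two-way routing described above reduces to a small decision rule: borrower-requested reviews always go to a human, and high-value approvals above the stated thresholds trigger a mandatory pre-sanction review. A minimal sketch, where the ₹1.5Cr / ₹5Cr thresholds come from the text but the function name and signature are illustrative:

```python
# Sketch of the two-way override routing described above. The 1.5Cr
# (unsecured) and 5Cr (secured) thresholds are from the text; the
# function name and signature are illustrative assumptions.
MANDATORY_REVIEW_CR = {"unsecured": 1.5, "secured": 5.0}

def requires_human_review(decision: str, amount_cr: float,
                          secured: bool, borrower_requested: bool) -> bool:
    """True when the decision must route to a human underwriter."""
    if borrower_requested:
        # Borrower exercised the 30-day review right on any decision.
        return True
    if decision == "APPROVE":
        # High-value approvals get mandatory review before sanction.
        limit = MANDATORY_REVIEW_CR["secured" if secured else "unsecured"]
        return amount_cr >= limit
    return False
```

Each routed case would then be logged with the reviewer's identity and rationale, feeding the quarterly override-pattern analysis the text describes.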

100% — of decisions carry a complete, immutable audit trail from application to communication
5 years — audit trail retention, meeting the RBI record-keeping requirement by design
Monthly — bias monitoring across gender, geography, and segment dimensions
30 days — human review request window, the DPDP Act automated-decision dispute right

The Audit Trail Is Not the Burden — the Absence of It Is

Institutions that deploy AI underwriting without a complete decision audit trail are not saving compliance cost — they are deferring it. When the regulator asks for the basis of a specific credit decision, or when a borrower files a DPDP Act complaint about automated processing, or when a systemic bias claim requires case-by-case analysis, the institution without an audit trail has no defence. The Credit Underwriting AI's audit architecture is not overhead — it is the institutional protection that makes automated underwriting deployable at scale, with confidence, in a regulated environment.
