What Audit Trail AI Logs for Every Loan Decision — and Why the RBI Requires It

Every credit decision an AI system makes — approve, reject, refer, price — generates a moment of accountability. A human underwriter can be called to explain their reasoning. An AI model cannot be called into a meeting, but its decision can be reconstructed from its logs — if those logs are complete, accurate, timestamped, and tamper-proof. That is precisely what the RBI now expects, and precisely what most lending institutions cannot produce on demand.
The Accountability Gap in AI-Driven Lending

When a lending institution transitions from human underwriting to AI-assisted or fully automated credit decisioning, it gains speed and consistency. It also inherits a new obligation: the ability to explain, reconstruct, and defend every decision the AI made — not in general terms, but for any specific application, at any point in the loan's lifecycle, years after the decision was made.

A borrower who was declined in March 2024 and files a grievance in November 2025 is entitled, under the RBI's Fair Practices Code and the DPDP Act, to a meaningful explanation of why their application was rejected. The institution must be able to produce: the specific factors that drove the rejection, the data that was used in the decision, the model version that produced it, and the policy rules that were applied. If the institution cannot produce this within a reasonable timeframe, it has not just a customer service problem — it has a regulatory compliance exposure.

The Audit Trail AI solves this problem at the source. Rather than trying to reconstruct explanations from fragmented system logs after the fact, it creates a complete, structured, legally defensible record at the moment every decision is made — and preserves that record in a tamper-proof store for the full retention period.
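As an illustration of what capturing the record "at the source" buys, a grievance response can be assembled directly from one stored entry rather than reconstructed from fragmented system logs. This is a sketch, not the product's implementation; the field names mirror the sample log entry later in this article but are otherwise hypothetical:

```python
def grievance_response(entry: dict) -> dict:
    """Assemble the explanation owed to a borrower straight from a
    single sealed audit log entry: the decision, when it was made,
    which model and policy produced it, which data sources fed it,
    and the factors that drove it."""
    return {
        "decision": entry["decision"],
        "decided_at": entry["decision_timestamp"],
        "model_version": entry["model_id"],
        "policy_version": entry["policy_version"],
        "data_sources": entry["sources_consented"],
        # strongest drivers first, by absolute contribution
        "top_factors": sorted(entry["top_factors"],
                              key=lambda f: abs(f["contribution"]),
                              reverse=True),
    }
```

Because every field was populated at decision time, answering a November 2025 grievance about a March 2024 decline is a lookup, not an investigation.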

"The RBI does not ask whether you made good credit decisions. It asks whether you can prove it — for any specific borrower, on any specific date, at any time an inspector chooses to ask."

The Decision Log: What Gets Captured in Real Time

The following is a live audit log entry — exactly as generated by the Audit Trail AI at the moment of a home loan credit decision. Every field is populated at decision time, before the decision is communicated to the borrower. The record is sealed immediately after generation and cannot be modified.

AUDIT LOG ENTRY · APPLICATION LA-2025-8841 · GENERATED 14 NOV 2025 09:13:24.841 IST
// IDENTITY & CONSENT
application_id: "LA-2025-8841"
borrower_id_hash: sha256(salt + "PAN:ABCDE1234F") → "a7f3c9..." [PAN not stored in log]
identity_verified: "aadhaar_otp_confirmed" | timestamp: "2025-11-14T09:12:04.112Z"
data_consent_ref: "DPDP-CONSENT-2025-8841" | sources_consented: ["bureau","bank_stmt","gst","property_val"]
// DATA INPUTS (computed metrics only; raw source data referenced, not retained)
bureau_score: 718 | bureau_pull_ref: "CIBIL-API-20251114-8841"
income_monthly: 118400 | income_source: "bank_statement_aa"
property_value: 3410000 | valuation_ref: "VAL-2025-1184"
foir_computed: 0.382 | ltv_computed: 0.703
signals_vector_hash: sha256(all_42_signals) → "b2e8a1c4..."
// MODEL INFERENCE
model_id: "CREDIT_SCORECARD_V4.2" | model_version_hash: "f1a9b3..."
model_score: 724 | risk_band: "B+" | pd_estimate: 0.018
top_factors: [{factor:"gst_trend",contribution:+0.24},{factor:"emi_track",contribution:+0.31},{factor:"bureau_score",contribution:+0.16},{factor:"ltv",contribution:-0.09}]
// POLICY RULES ENGINE
policy_version: "CREDIT_POLICY_V4.8"
foir_check: PASS (0.382 < 0.450) | ltv_check: PASS (0.703 < 0.800)
bureau_minimum: PASS (718 >= 700) | sanctions_check: PASS (cleared all 8 lists)
// DECISION
decision: "APPROVE" | sanctioned_amount: 2400000
pricing_rule: "BAND_B_PLUS_SECURED" | rate: 0.114
decision_timestamp: "2025-11-14T09:13:24.841Z"
human_override: "none" | auto_decision: true
RECORD SEALED · sha256(full_entry) → "d4f7a2e1b9c3..." · IMMUTABLE FROM THIS TIMESTAMP · CHAIN LINK: entry_8840_hash prepended
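The sealing step at the bottom of the entry can be sketched in a few lines. This is an illustrative sketch under assumed conventions (the `chain_link` and `record_hash` field names and the canonical JSON serialisation are not taken from the product): each entry is hashed together with the previous entry's hash, so editing any field after sealing changes the hash and breaks the chain at the next entry.

```python
import hashlib
import json

def seal_entry(entry: dict, prev_hash: str) -> dict:
    """Seal a decision log entry: prepend the previous entry's hash as
    the chain link, then hash a canonical serialisation of the whole
    record. The result is tamper-evident: any later modification
    produces a different record_hash."""
    sealed = dict(entry, chain_link=prev_hash)
    canonical = json.dumps(sealed, sort_keys=True, separators=(",", ":"))
    sealed["record_hash"] = hashlib.sha256(canonical.encode()).hexdigest()
    return sealed

# Hypothetical neighbouring entries in the chain
e_8840 = seal_entry({"application_id": "LA-2025-8840"}, prev_hash="GENESIS")
e_8841 = seal_entry({"application_id": "LA-2025-8841",
                     "decision": "APPROVE"},
                    prev_hash=e_8840["record_hash"])
```

Chaining is what turns a set of individually hashed records into a log: deleting or reordering an entry is as detectable as editing one.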

The Regulatory Obligations That Require This Level of Detail

Regulatory requirement · Source · What it demands · What the log provides

Adverse action explanation · RBI Fair Practices Code, paras 4–6 · Rejected applicants must receive a specific, reason-coded explanation, not a generic decline · Factor contributions, policy rule outcomes, and rejection reason codes, all generated from the log at decision time

Automated decision transparency · DPDP Act 2023, §11 · Individuals must be informed when decisions are made by automated means and must be able to obtain an explanation · Model ID, version, factor contributions, and a plain-language explanation, all derivable from the sealed log entry

Data source disclosure · DPDP Act, §6 + RBI KYC MD · Borrowers must know which data sources were used in their credit assessment · Every data source with consent reference and API call reference, logged and retrievable per decision

Model version traceability · RBI Model Risk Management Circular, para 4 · The institution must be able to identify which model version produced any individual credit decision · Model ID, version hash, and credit policy version, logged per decision, immutable

Record retention · PMLA, s. 12(2) + RBI record-keeping norms · All credit decision records must be retained for a minimum of 5 years from account closure · Tamper-proof log with automated 5-year retention enforcement and retrieval reference

Override documentation · RBI Prudential Norms (credit policy deviation) · Any human override of an automated decision must be documented with the overriding officer and rationale · Override flag, overriding officer ID, and rationale, logged at override time with a human identity stamp

What the Log Does Not Store — and Why That Matters

The Audit Trail AI logs the evidence needed for governance accountability without storing sensitive personal data unnecessarily. The borrower's PAN is stored only as a salted hash — not in plain text. Raw bank statement transaction details are not retained in the log — only the computed income metrics derived from them. The Aadhaar number is never stored. Raw document images are referenced by a document management system reference number, not reproduced in the log.

This data minimisation architecture simultaneously satisfies two requirements that can appear to conflict: the DPDP Act's data minimisation obligation (collect and retain only what is necessary for the purpose) and the RBI's auditability requirement (retain sufficient evidence to reconstruct and defend the decision). The Audit Trail AI is designed from the ground up so that these requirements are complementary, not competing. The log contains everything needed for governance and nothing that should not be stored.
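The hashed-identifier scheme can be sketched as follows. This is an illustrative sketch, not the product's construction: the use of HMAC and the `PAN:` prefix are assumptions, and the salt would in practice live in an HSM or secrets manager rather than in source code.

```python
import hashlib
import hmac

def borrower_id_hash(pan: str, salt: bytes) -> str:
    """Derive a stable pseudonymous borrower reference. Keying the
    hash with an institution-held secret blocks dictionary attacks
    over the small PAN keyspace; the raw PAN never reaches the log."""
    return hmac.new(salt, f"PAN:{pan}".encode(), hashlib.sha256).hexdigest()

SALT = b"institution-secret"   # hypothetical; hold in an HSM in practice
ref = borrower_id_hash("ABCDE1234F", SALT)   # the log stores ref, not the PAN
```

The same PAN with the same salt always yields the same reference, so all decisions for one borrower remain linkable inside the institution while the log itself stays free of plain-text identifiers.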

100% · of AI credit decisions logged: approvals, rejections, referrals, overrides, and pricing decisions
Sealed · every log entry cryptographically sealed at the millisecond of decision, immutable from that point
6 · regulatory obligations satisfied by the audit log, from the Fair Practices Code to the DPDP Act to the PMLA
PAN-free · sensitive identifiers stored only as hashed references: DPDP data minimisation by design

The Audit Log Is Not a Compliance Feature — It Is the Foundation of Every Other Compliance Claim

Every borrower protection obligation, every fairness claim, every model governance assurance the institution makes rests on the same foundation: a complete and accurate record of what the AI actually did at the moment of each decision. Without that record, every other compliance effort is assertion without evidence. The Audit Trail AI creates that foundation automatically, for every decision, before the decision leaves the system. What the institution then builds on top of it — explainability, bias monitoring, model governance reporting — is only credible because the foundation is sound.