
AI Agent Profile · LendingIQ · Bengaluru

Credit Underwriting Agent AI

Invoked via: loan origination system API
Runtime: AWS Bedrock · ap-south-1
Model: Claude Sonnet 4
Context window: 200K tokens

Division: Risk division


What this agent does

The Credit Underwriting Agent AI reads a complete loan application file, applies the current credit policy, interprets the bureau report and alternate data signals, identifies contradictions between stated and verified financials, and produces a structured credit verdict with a plain-language explanation of the decision. It is the only agent in the LendingIQ AI workforce that produces a decision on an individual borrower. Within Level 1 parameters it decides autonomously. Above Level 1 it prepares the case and the recommendation — a human credit officer makes the final call.

Primary functions

Scorecard Execution

Every application — synchronous

Invoked when: application submitted to the loan origination system with a complete or near-complete document set

  • Applies the current credit policy scorecard — retrieved via RAG at invocation time, always the live version — to the application data: bureau score against the segment cut-off, FOIR (Fixed Obligation to Income Ratio) computed from verified income and declared obligations, LTV (Loan to Value) computed from declared or assessed collateral, leverage ratio for MSME borrowers, and sector exposure check against portfolio concentration limits.
  • Computes each scorecard parameter from the verified data, not the declared data — if the GST-verified turnover is lower than the stated turnover, the FOIR is computed on the GST figure. If the bank statement cash flow is lower than the ITR income, it flags the discrepancy, computes both versions, and presents both in the decision output so the human reviewer understands the spread.
  • Applies hard filters first — a borrower who is below the bureau cut-off or above the sector exposure limit gets a hard decline regardless of other scorecard dimensions. These are non-negotiable policy boundaries. The agent does not override hard filters, does not recommend exceptions to hard filters, and does not present arguments for why a hard-filter decline might be revisited.
  • Produces a scorecard summary that shows the applicant's position on every parameter — not just pass/fail, but the actual figure against the limit, so the human reviewer can immediately see which parameters are strong, which are marginal, and which are failing, without recalculating anything themselves.
Output: Scorecard summary — each parameter with applicant figure, policy limit, and pass/fail status; hard filter results; overall scorecard verdict (Approve / Refer / Decline); and a confidence level based on data completeness and margin above or below each limit.
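The scorecard pass above can be sketched as follows. This is a minimal illustration, not the live LendingIQ policy: the field names, cut-offs, and the single-parameter scorecard are all assumptions, and a real implementation would cover LTV, leverage, and the full parameter set.

```python
def run_scorecard(app, policy):
    """Hard filters first, then parameters computed from verified data."""
    # Hard filters are non-negotiable policy boundaries: breach -> Decline,
    # with no exception arguments presented.
    if app["bureau_score"] < policy["bureau_cutoff"]:
        return {"verdict": "Decline", "reason": "bureau score below hard cut-off"}
    if app["sector_exposure_pct"] > policy["sector_limit_pct"]:
        return {"verdict": "Decline", "reason": "sector exposure limit breached"}

    # FOIR is computed on the verified income figure, never the declared one.
    foir = app["monthly_obligations"] / app["verified_monthly_income"]
    flags = []
    if app["verified_monthly_income"] < app["declared_monthly_income"]:
        flags.append("declared income exceeds verified income")

    # Each parameter carries the actual figure against the limit, not just
    # pass/fail, so the reviewer sees the margin without recalculating.
    params = {"FOIR": {"value": round(foir, 2),
                       "limit": policy["foir_limit"],
                       "status": "pass" if foir <= policy["foir_limit"] else "fail"}}
    failing = [name for name, p in params.items() if p["status"] == "fail"]
    verdict = "Refer" if failing or flags else "Approve"
    return {"verdict": verdict, "parameters": params, "flags": flags}
```

A declared/verified income spread produces a Refer rather than an Approve, matching the principle that the agent does not approve a case it is uncertain about.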

Bureau Interpretation

Every application — synchronous

Invoked when: bureau report is retrieved as part of the application processing flow

  • Reads the full bureau report — not just the summary score — including the tradeline detail: every credit facility, its current status, payment history (DPD by month), outstanding balance, sanctioned amount, and the lender name. The summary score is a starting point; the tradeline tells the story the score does not.
  • Identifies bureau signals that the score alone does not capture: a borrower with a 720 score who has a single large DPD-60 on a secured loan 18 months ago is materially different from a 720-score borrower with a clean tradeline — the score is the same, the risk profile is not. The agent narrates what the tradeline means for this specific application.
  • Flags bureau-specific red signals regardless of score: a loan that was written off or settled (even if the score has partially recovered), a borrower who appears on multiple bureau reports under slightly different names or addresses (identity risk), or a tradeline that shows a new unsecured loan taken within the last 90 days that was not declared in the application (undisclosed liability).
  • For thin-file borrowers — those with limited or no bureau history — does not invent a risk assessment. It states explicitly that the bureau signal is insufficient for score-based decisioning, flags the case as requiring alternate data or L2 human review, and lists the additional information that would be needed to make a decision. A thin file is not a clean file; it is an unknown file.
Output: Bureau interpretation narrative — score in context, tradeline summary, red flag list with specific tradeline references, undisclosed liability check, thin-file flag if applicable, and a bureau risk assessment (Clean / Marginal / Concern / Red flag) separate from the score band.
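The tradeline red-flag scan can be sketched like this. The record shape (`id`, `status`, `type`, `age_days`, `dpd_history`) is an assumption about how a parser would normalise CIBIL, Experian, or CRIF tradelines; it is not a real bureau schema.

```python
def scan_tradelines(tradelines, declared_facility_ids):
    if not tradelines:
        # A thin file is an unknown file, not a clean one: no score-based
        # decision, route to alternate data or L2 review.
        return {"thin_file": True, "assessment": "insufficient bureau signal"}

    red_flags = []
    for tl in tradelines:
        # Write-offs and settlements flag regardless of the current score.
        if tl["status"] in ("written_off", "settled"):
            red_flags.append(f"{tl['id']}: facility {tl['status']}")
        # A recent unsecured loan absent from the application is an
        # undisclosed liability.
        if (tl["type"] == "unsecured" and tl["age_days"] <= 90
                and tl["id"] not in declared_facility_ids):
            red_flags.append(f"{tl['id']}: undisclosed unsecured loan, "
                             f"{tl['age_days']} days old")
        worst_dpd = max(tl["dpd_history"], default=0)
        if worst_dpd >= 60:
            red_flags.append(f"{tl['id']}: worst DPD {worst_dpd}")

    return {"thin_file": False,
            "red_flags": red_flags,
            "assessment": "Red flag" if red_flags else "Clean"}
```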

Alt Data Integration

Applied where bureau is thin or contradictions exist

Invoked when: thin-file borrower, bureau signal is inconclusive, or stated financials require independent verification via alternate data sources

  • Reads the alternate data signals available at invocation — Account Aggregator bank statement data (12-month cash flow, average monthly balance, income credits, EMI debits, bounce frequency), GST filing data (turnover, filing regularity, GST-to-income ratio), and ITR data (declared income, ITR filing history, tax paid vs income claimed) — and synthesises them into a financial health picture that either corroborates or contradicts the stated application data.
  • Applies a specific corroboration logic: for an MSME borrower claiming ₹50 lakh annual turnover, the agent checks whether the bank credits over 12 months are consistent with that claim, whether the GST-declared turnover is within a reasonable range of the bank credit figure, and whether the ITR income is consistent with both. Where all three corroborate, the confidence is high. Where they diverge, it states the specific figures and the direction of divergence.
  • Weights alt data signals differently by source reliability: bank statement data from an Account Aggregator pull is higher reliability than a self-submitted bank statement PDF because it cannot be tampered with. GST data from the GSTN API is authoritative. ITR data via AIS (Annual Information Statement) is authoritative. Self-submitted documents require manual verification before they can be treated as reliable inputs.
  • Does not use alt data to override a hard bureau filter. If the bureau score is below the hard cut-off, alt data showing strong cash flow is presented as contextual information for a potential exception review — it does not automatically lift the case above the cut-off. Policy exceptions require human authority regardless of how strong the alternate data is.
Output: Alt data integration summary — each signal source with the figure derived, corroboration or contradiction verdict against stated financials, confidence level per source based on data provenance (API-authoritative vs self-submitted), and overall financial health assessment based on the alt data picture.
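The corroboration logic for a turnover claim can be sketched as below. The 20% tolerance band and the signal-source names are assumptions for illustration; the sketch covers only API-authoritative sources, since self-submitted documents would carry a lower weight and require manual verification first.

```python
def corroborate_turnover(stated, signals, tolerance=0.20):
    """signals maps an API-authoritative source to its annualised figure."""
    per_source = {}
    for source, figure in signals.items():
        gap = (figure - stated) / stated
        if abs(gap) <= tolerance:
            verdict = "corroborates"
        else:
            # State the direction of divergence, not just the mismatch.
            verdict = "below claim" if gap < 0 else "above claim"
        per_source[source] = {"figure": figure,
                              "gap_pct": round(gap * 100, 1),
                              "verdict": verdict}
    all_agree = all(s["verdict"] == "corroborates" for s in per_source.values())
    return {"confidence": "high" if all_agree else "low",
            "per_source": per_source}

# e.g. an MSME borrower claiming Rs 50 lakh annual turnover:
result = corroborate_turnover(
    5_000_000,
    {"aa_bank_credits_12m": 4_600_000, "gstn_turnover": 4_750_000},
)
```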

Decision + Explanation

Every application — final output

Invoked when: scorecard, bureau interpretation, and alt data integration are complete and a final verdict is required

  • Synthesises the scorecard result, bureau interpretation, and alt data picture into a single structured credit verdict: Approve (within policy, autonomous), Refer to L2 (within policy but with flags or thin data requiring human review), Refer to L3 (policy exception or high-risk indicator requiring senior credit officer), or Decline (hard filter triggered or risk profile outside policy parameters).
  • Produces a plain-language explanation of the verdict designed for two audiences simultaneously: the credit officer (who needs the technical basis — which policy parameter, which data point, which bureau finding drove the decision) and the borrower (who needs a non-technical statement of the reason if the application is declined or conditioned, as required by RBI's guidelines on transparency in algorithmic lending).
  • For approved applications: states the approval conditions — any documentation still required before disbursement, any covenants or monitoring requirements attached to the approval, and the expiry date of the approval if the borrower does not proceed within the validity period.
  • For declined applications: states the specific reason — not a generic "does not meet credit criteria" but the specific parameter that failed (e.g., "FOIR of 58% exceeds the policy limit of 50% for this product segment based on GST-verified income of ₹X") — because this is the explanation the borrower is entitled to under RBI guidelines and the one the credit officer must be able to stand behind if questioned.
Output: Structured JSON verdict — decision (Approve/Refer L2/Refer L3/Decline), confidence level, decision drivers with data citations, approval conditions or decline reasons, two-audience explanation (technical for credit officer; plain-language for borrower), and the escalation routing if the decision is a Refer.
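An assumed shape for the structured verdict is shown below. The field names and figures are illustrative only; the live schema is whatever the loan origination system integration defines.

```python
import json

verdict = {
    "decision": "Refer L2",
    "confidence": "medium",
    "decision_drivers": [
        {"parameter": "FOIR", "value": "52%", "limit": "50%",
         "basis": "AA bank statement (API-authoritative)"},
        {"parameter": "bureau", "value": 731,
         "note": "single DPD-60 on a secured loan, 18 months ago"},
    ],
    "conditions_or_reasons": [
        "FOIR of 52% exceeds the 50% policy limit for this product segment"
    ],
    # One verdict, two audiences: technical basis for the credit officer,
    # plain language for the borrower.
    "explanation": {
        "credit_officer": "FOIR breach on AA-verified income; tradeline shows "
                          "one historic DPD-60. Within referable range.",
        "borrower": "Your existing monthly obligations are higher, relative to "
                    "your verified income, than this loan product allows.",
    },
    "escalation": {"route": "L2 credit officer queue"},
}

payload = json.dumps(verdict, indent=2)  # what the origination system receives
```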

Knowledge base

Credit Policy Corpus (RAG — live)

The current credit policy — every eligibility criterion, hard filter, soft guideline, sector limit, and product-specific rule. Retrieved at invocation, always the live version. A policy change takes effect immediately on the next application processed.
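A version guard on the invocation-time fetch could look like the sketch below. The corpus interface and version strings here are hypothetical stand-ins, not real LendingIQ APIs; the point is that a stale corpus copy should block underwriting rather than silently apply an old policy.

```python
class PolicyCorpus:
    """Stand-in for the RAG corpus holding the credit policy document."""
    def __init__(self, doc):
        self._doc = doc

    def get_latest(self):
        return self._doc

def fetch_live_policy(corpus, authorised_version):
    doc = corpus.get_latest()
    if doc["version"] != authorised_version:
        # A stale corpus means applications would be decided against an
        # outdated policy -- refuse to underwrite rather than proceed.
        raise RuntimeError("policy version mismatch; corpus update required")
    return doc

corpus = PolicyCorpus({"version": "2026-04", "clauses": ["..."]})
live_policy = fetch_live_policy(corpus, authorised_version="2026-04")
```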

Bureau Report (real-time pull)

CIBIL, Experian, or CRIF full report — score, tradeline detail, DPD history, enquiry log — pulled at invocation for each application. Not cached or reused across applications.

Account Aggregator Data

12-month bank statement data via AA framework — authoritative, tamper-proof, borrower-consented. The highest-reliability alternate data source for income and cash flow verification.

GSTN & ITR Data

GST return data from GSTN API and ITR data via AIS — authoritative government sources for turnover and income verification. Applied for MSME and self-employed borrowers.

Application Documents

KYC documents, financial statements, salary slips, property documents — uploaded by the borrower or collected by the sales team. Reliability is lower than API-sourced data; cross-verification is always applied.

Credit Underwriting Knowledge

Pre-training knowledge of Indian credit underwriting, NBFC lending products, financial ratio analysis, bureau interpretation, and MSME and retail credit assessment up to knowledge cutoff.

How decisions are formed

One application at a time
Each invocation processes a single application in isolation. The agent has no memory of prior applications — it cannot be influenced by the last 10 applications it processed or develop a "feel" for a particular segment. Every application starts from the same policy baseline.
Policy as the decision frame
The credit policy is the decision frame, not a reference document. Every decision element is anchored to a specific policy clause. The agent does not apply judgment that goes beyond the policy — it applies the policy and flags where judgment is required for a human.
Verified data over stated data
Where API-verified data conflicts with stated data, the verified figure is used for the decision calculation and the conflict is flagged explicitly. The agent does not average the two or use the more favourable figure for the borrower.
Confidence and uncertainty
Every verdict includes a confidence level. Low confidence always triggers a Refer — the agent does not approve a case it is uncertain about. Uncertainty can arise from missing data, conflicting signals, or an application type outside the policy's explicit scope.
Explainability is non-negotiable
Every decision — approve, refer, or decline — comes with a specific, cited explanation. The agent will not produce a verdict without an explanation. A decision the agent cannot explain in plain language, citing specific data and policy, is a decision the agent will not make autonomously — it will Refer to a human.

Hard guardrails

Will not: Sanction a loan or communicate a decision to the borrower. The verdict is a structured output to the loan origination system for the credit officer to review, approve, and communicate. No borrower receives a decision directly from this agent.
Will not: Override a hard policy filter. If the bureau score is below the hard cut-off, the sector exposure limit is breached, or a hard KYC requirement is unmet, the decision is a Decline regardless of other factors. Hard filter overrides require human credit authority and a documented exception process.
Will not: Process an application without a compliance-complete document set for the applicable KYC category. Missing mandatory KYC documents trigger an "Incomplete — return for completion" status, not a credit decision. The agent does not underwrite on an incomplete file.
Will not: Use self-submitted documents as the primary verification basis without flagging the lower reliability. Bank statements submitted as PDFs are always noted as self-submitted and lower reliability than AA-sourced data. The decision output states the verification basis for every material data point.
Will not: Produce a decision that discriminates on grounds not permitted in credit decisioning — religion, caste, gender, or region beyond what the credit policy explicitly permits for risk management reasons. Any input that functions as a proxy for a protected characteristic is excluded from the decision logic.

Known limitations

The agent is only as current as the policy retrieved at invocation. If a policy change is made but the RAG corpus has not been updated, the agent will apply the old policy. A policy change that is not reflected in the corpus within 24 hours creates a window where applications are decided against an outdated policy — which creates both credit risk and potential borrower treatment inconsistency.
Build an automated RAG corpus update trigger whenever the credit policy is amended — the policy document version in the corpus must match the authorised version in the policy management system. Never allow a policy version mismatch to persist beyond end of day.

The agent produces qualitative risk assessments, not calibrated default probabilities. The confidence level in the output reflects data completeness and margin above/below policy limits — it is not an actuarially validated PD estimate. Using the agent's confidence level as a substitute for a quantitative PD model in provisioning or capital calculations would be a misapplication of the output.
Maintain a separate statistical PD model for portfolio-level provisioning and capital calculations. The agent's per-application decision is the origination gate; the portfolio PD model is the measurement tool. These are different instruments serving different purposes.

Alt data interpretation is segment-dependent in ways the agent may not fully account for. A seasonal business with irregular bank credits — a mango trader, a wedding photographer, a construction contractor — will show a bank statement pattern that looks like cash flow stress to a general analysis but is normal for that business type. The agent applies general financial health logic; segment-specific cash flow patterns require human underwriter judgment for novel cases.
Build a library of segment-specific cash flow pattern notes into the credit policy corpus — what "normal" looks like for a list of high-frequency MSME segments. This gives the agent the context to distinguish seasonal variation from genuine distress rather than applying uniform cash flow standards across structurally different businesses.

Fraud detection is pattern-based, not forensic. The agent flags contradictions between documents and data sources, identifies undisclosed liabilities, and detects name/address inconsistencies across bureau reports. It cannot detect sophisticated document fabrication, collusion between borrower and intermediary, or coordinated fraud rings — these require forensic document analysis and human investigation capabilities the agent does not have.
Do not position this agent as the fraud detection layer. It is the first-line filter for obvious inconsistencies. A dedicated fraud detection tool, periodic manual sampling of approved applications, and a human fraud investigation team are required layers above what this agent provides.

The plain-language borrower explanation is produced in English by default. For borrowers whose primary language is Hindi, Tamil, Telugu, Kannada, or another regional language, the English explanation may not be accessible — which creates a regulatory risk under RBI's fair practice requirements that information be communicated in a language the borrower understands.
Build a language selection parameter into the invocation API so the explanation is produced in the borrower's preferred language at the time of the call. The agent can produce the explanation in all major Indian languages — this is a product configuration decision, not a capability limitation.
Agent Profile · Credit Underwriting Agent AI · LendingIQ · Bengaluru
Last updated April 2026 · For internal use
