AI Agent Profile · LendingIQ · Bengaluru
Model Risk Manager AI
Division: Risk division
What this agent does
The Model Risk Manager AI maintains the model risk inventory — keeping model cards current, running scheduled bias tests across all production models, triaging drift alerts from the monitoring pipeline, and updating the model risk register with findings that the Board Risk Committee and CRO AI need to make governance decisions. It is the continuous oversight layer of the model governance function. Validation Agent AI runs the deep-dive checks; this agent runs the ongoing risk management between those checks.
Primary functions
Model Card Updates
Invoked when: a model version changes, a material finding is made in monitoring, or the quarterly model card review cycle is due
- Reads the current model card alongside the new performance data, any validation findings since the last card update, and any changes to the model's inputs, training data, or deployment scope — and produces an updated model card that reflects the model's current state accurately.
- Tracks model card completeness against the model risk policy standard — required fields include: model purpose and scope, development methodology summary, training data description, input feature list with data sources, known limitations and failure modes, bias testing results, performance metrics with last-updated date, approval status and approver, and scheduled review date.
- Flags model cards that are outdated relative to the model's current production performance — a card that shows Gini 72% when the model is currently running at Gini 65% is misleading to any stakeholder who reads it and constitutes a model risk governance gap.
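The completeness tracking described above can be sketched as a required-field scan. This is a minimal illustration, not LendingIQ's implementation: the field names are condensed from the policy list in the bullet above, and the flat-dict model card shape is an assumption.

```python
# Required model card fields, condensed from the model risk policy standard.
REQUIRED_FIELDS = [
    "purpose_and_scope", "methodology_summary", "training_data",
    "input_features", "known_limitations", "bias_results",
    "performance_metrics", "approval_status", "review_date",
]

def card_completeness(model_card: dict) -> tuple[float, list[str]]:
    """Return (completeness ratio, missing required fields) for a model card."""
    missing = [f for f in REQUIRED_FIELDS if not model_card.get(f)]
    return 1 - len(missing) / len(REQUIRED_FIELDS), missing
```

A card missing any required field would be flagged for update before its scheduled review date.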
Bias Testing
Invoked when: the quarterly bias testing cycle runs, or a Fair Lending AI finding triggers an ad-hoc bias review for a specific model
- Runs the bias testing suite across all production models using the configured bias dataset — approval rate disparity by geography (state/district), income band, and any proxy variables identified by the Fair Lending AI. Computes adverse impact ratios and disparate impact metrics for each protected-characteristic proxy.
- Distinguishes between bias in the model (a feature that proxies for a protected characteristic and drives differential outcomes) and bias in the data (historical outcome data that reflects prior discriminatory practices). The former requires model remediation; the latter requires careful feature engineering decisions. The bias report labels which type is detected.
- For models where bias is detected above the policy threshold: produces a bias finding report with the specific feature, the proxy relationship, the magnitude of disparate impact, and the options for remediation — feature removal, fairness constraint, re-weighting, or model replacement. Does not remediate the model; documents the finding for the data science team to address.
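The adverse impact ratio computation above can be sketched as follows. The 0.8 cutoff is the classic four-fifths rule used here for illustration; the actual policy threshold, segment definitions, and data shapes are assumptions, not LendingIQ's configured values.

```python
def adverse_impact_ratio(approvals: dict[str, tuple[int, int]]) -> dict[str, float]:
    """approvals maps segment -> (approved, total).
    AIR = segment approval rate / highest segment approval rate."""
    rates = {seg: approved / total for seg, (approved, total) in approvals.items()}
    best = max(rates.values())
    return {seg: rate / best for seg, rate in rates.items()}

# Segments whose AIR falls below the illustrative four-fifths threshold
# become candidates for a bias finding report.
flags = {
    seg: air
    for seg, air in adverse_impact_ratio(
        {"urban": (800, 1000), "rural": (540, 900)}
    ).items()
    if air < 0.8
}
```

Here the rural segment approves at 60% against an urban 80%, giving an AIR of 0.75 and triggering a flag; whether the cause is model bias or data bias is the separate diagnostic step described above.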
Drift Alert Triage
Invoked when: monthly performance data triggers a drift alert, or an intra-month approval rate or NPA rate anomaly warrants a mid-cycle check
- Reads all active drift alerts from the monitoring pipeline — PSI, CSI, Gini degradation, NPA rate divergence — and triages them by severity, urgency, and likely cause. A PSI of 0.12 that has been stable for 3 months is a "monitor" triage; a PSI of 0.12 that jumped from 0.04 in a single month is an "investigate immediately" triage.
- Aggregates drift signals across models to identify systemic patterns: if three models are simultaneously showing population drift in the same direction, the common cause is likely a portfolio composition change or a macro shift, not individual model failure — and the response is different from addressing three separate model-specific issues.
- Updates the model risk register with each triage outcome, the evidence, and the recommended next action — so the governance committee always has a current picture of the model risk landscape, not a point-in-time snapshot from the last quarterly review.
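The PSI triage rule in the first bullet, combining the current level with the month-over-month jump, can be sketched like this. The thresholds are illustrative stand-ins, not the drift trigger levels from the model risk policy.

```python
def triage_psi(history: list[float],
               monitor_level: float = 0.10,
               jump_factor: float = 2.0) -> str:
    """Triage a drift alert from a monthly PSI series (oldest first).
    A high but stable PSI is 'monitor'; a PSI that jumped sharply in the
    latest month is 'investigate immediately'."""
    current, previous = history[-1], history[-2]
    if current >= monitor_level and current >= jump_factor * previous:
        return "investigate immediately"
    if current >= monitor_level:
        return "monitor"
    return "ok"
```

This reproduces the example above: a PSI of 0.12 that has held for three months triages as "monitor", while the same 0.12 arriving as a jump from 0.04 triages as "investigate immediately".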
Knowledge base
Model Registry (full inventory)
All production, development, and retired models — version history, approval status, owner, deployment scope, and scheduled review dates. The master record of the model estate.
Model Risk Policy (RAG)
Risk tiers, documentation standards, bias thresholds, drift trigger levels, end-of-life criteria. The policy framework applied in every model card, bias test, and drift triage output.
Model Performance Database
Monthly performance metrics for every production model — Gini, KS, PSI, CSI, approval rate, NPA rate. The longitudinal dataset that makes drift triage and trend analysis possible.
Bias Testing Dataset
Protected characteristic proxy variables and segment-level performance data. The input to every quarterly bias test. Must be maintained and updated as the portfolio mix evolves.
Validation Findings Archive
All Model Validation Agent AI findings — the independent assessment record that feeds the model risk register maintained by this agent.
Model Risk Knowledge
Pre-training knowledge of model risk management frameworks, Basel model risk principles, bias testing methodologies, and ML governance standards up to knowledge cutoff.
Hard guardrails
Known limitations
Important Reads
Learn more about how to deploy Model Risk Manager AI to your lending workflow.
- Use case #0001: How Model Risk AI Runs Annual Independent Validation Automatically. The RBI requires that every credit model in production at an NBFC or bank be independently validated before deployment and at least annually thereafter. For most institutions, this means hiring an external firm, waiting 6 to 8 weeks, receiving a report that is already partially outdated by the time it arrives, and paying ₹15 to 25 lakhs for the exercise. LendingIQ's Model Risk Manager AI runs the same validation — continuously, with daily performance data, and with the documentation package the RBI actually wants to see — as a built-in capability of your lending stack.
- Use case #0002: Bias Testing Automation — How Model Risk AI Checks for Disparate Impact Monthly. A credit model can be statistically accurate and legally discriminatory at the same time. A variable that appears neutral — postcode, employment sector, business registration type — may be a near-perfect proxy for a protected characteristic, producing systematically lower approval rates for women, certain communities, or geographic minorities without any intent to discriminate. LendingIQ's Model Risk Manager AI tests for this every month, automatically, and flags disparities before they become enforcement actions.
- Use case #0003: Model Card Management — Keeping Your AI Governance Docs Current. A model card is the canonical governance document for an AI model — it records what the model does, how it was built, what data it was trained on, how it performs across different population segments, its known limitations, and the fairness metrics at the time of deployment. Most model cards in Indian lending institutions are accurate at the moment they are written and progressively stale from that moment forward. LendingIQ's Model Risk Manager AI keeps every model card current automatically — because a stale model card is not a governance document. It is a governance gap.
