
AI Agent Profile · LendingIQ · Bengaluru

Model Risk Manager AI

Invoked via: model governance pipeline
Runtime: AWS Bedrock · ap-south-1
Model: Claude Sonnet 4
Context window: 200K tokens

Division: Risk division


What this agent does

The Model Risk Manager AI maintains the model risk inventory — keeping model cards current, running scheduled bias tests across all production models, triaging drift alerts from the monitoring pipeline, and updating the model risk register with findings that the Board Risk Committee and CRO AI need to make governance decisions. It is the continuous oversight layer of the model governance function. The Model Validation Agent AI runs the deep-dive checks; this agent runs the ongoing risk management between those checks.

Primary functions

Model Card Updates

Triggered on version change or quarterly cycle

Invoked when: a model version changes, a material finding is made in monitoring, or the quarterly model card review cycle is due

  • Reads the current model card alongside the new performance data, any validation findings since the last card update, and any changes to the model's inputs, training data, or deployment scope — and produces an updated model card that reflects the model's current state accurately.
  • Tracks model card completeness against the model risk policy standard — required fields include: model purpose and scope, development methodology summary, training data description, input feature list with data sources, known limitations and failure modes, bias testing results, performance metrics with last-updated date, approval status and approver, and scheduled review date.
  • Flags model cards that are outdated relative to the model's current production performance — a card that shows Gini 72% when the model is currently running at Gini 65% is misleading to any stakeholder who reads it and constitutes a model risk governance gap.
Output: Updated model card with all required fields, staleness flags for any field not updated within its required review cycle, and a model card completeness score against the policy standard for the governance committee's review.
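The completeness score and staleness flags described above can be sketched as a simple check against the policy's required-field list. This is a minimal illustration, not the agent's actual implementation; the field names and the review-cycle length are assumptions standing in for the model risk policy standard.

```python
from datetime import date

# Required model card fields per the policy standard (names are illustrative).
REQUIRED_FIELDS = [
    "purpose_and_scope", "methodology_summary", "training_data_description",
    "input_features", "limitations_and_failure_modes", "bias_test_results",
    "performance_metrics", "approval_status", "scheduled_review_date",
]

def completeness_score(card: dict) -> float:
    """Fraction of required fields that are present and non-empty."""
    filled = sum(1 for f in REQUIRED_FIELDS if card.get(f))
    return filled / len(REQUIRED_FIELDS)

def stale_fields(last_updated: dict, cycle_days: int, today: date) -> list:
    """Flag fields never updated, or not updated within the review cycle."""
    return [
        f for f in REQUIRED_FIELDS
        if f not in last_updated or (today - last_updated[f]).days > cycle_days
    ]
```

A card missing its bias testing results would score 8/9 and carry a staleness flag for that field, which is what the governance committee sees in the output.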

Bias Testing

Quarterly for all production models

Invoked when: quarterly bias testing cycle runs, or a Fair Lending AI finding triggers an ad-hoc bias review for a specific model

  • Runs the bias testing suite across all production models using the configured bias dataset — approval rate disparity by geography (state/district), income band, and any proxy variables identified by the Fair Lending AI. Computes adverse impact ratios and disparate impact metrics for each protected-characteristic proxy.
  • Distinguishes between bias in the model (a feature that proxies for a protected characteristic and drives differential outcomes) and bias in the data (historical outcome data that reflects prior discriminatory practices). The former requires model remediation; the latter requires careful feature engineering decisions. The bias report labels which type is detected.
  • For models where bias is detected above the policy threshold: produces a bias finding report with the specific feature, the proxy relationship, the magnitude of disparate impact, and the options for remediation — feature removal, fairness constraint, re-weighting, or model replacement. Does not remediate the model; documents the finding for the data science team to address.
Output: Quarterly bias test report — adverse impact ratios by segment and model, features identified as protected characteristic proxies, bias type classification (model vs data), findings above policy threshold flagged as requiring remediation, and a bias risk rating per model (Clean / Monitor / Remediate).
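The adverse impact ratio and the Clean / Monitor / Remediate rating above can be sketched as follows. The thresholds are illustrative assumptions (the 0.8 cutoff echoes the common four-fifths screen); the actual cutoffs come from the model risk policy, and, as noted under limitations, a low ratio is a statistical signal, not a legal determination.

```python
def adverse_impact_ratio(approval_rates: dict) -> dict:
    """Each segment's approval rate divided by the best-performing
    segment's rate. Values near 1.0 mean parity; lower values mean
    the segment is approved less often relative to the best segment."""
    best = max(approval_rates.values())
    return {seg: rate / best for seg, rate in approval_rates.items()}

def bias_rating(air: dict, monitor_at: float = 0.9, remediate_at: float = 0.8) -> str:
    """Map the worst segment's ratio to the report's per-model rating.
    Thresholds are illustrative stand-ins for the policy values."""
    worst = min(air.values())
    if worst < remediate_at:
        return "Remediate"
    if worst < monitor_at:
        return "Monitor"
    return "Clean"
```

For example, segments approved at 50% and 42.5% give a worst-case ratio of 0.85, which under these assumed thresholds lands in "Monitor" rather than "Remediate".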

Drift Alert Triage

Monthly monitoring and on intra-month trigger

Invoked when: monthly performance data triggers a drift alert, or an intra-month approval rate or NPA rate anomaly warrants a mid-cycle check

  • Reads all active drift alerts from the monitoring pipeline — PSI, CSI, Gini degradation, NPA rate divergence — and triages them by severity, urgency, and likely cause. A PSI of 0.12 that has been stable for 3 months is a "monitor" triage; a PSI of 0.12 that jumped from 0.04 in a single month is an "investigate immediately" triage.
  • Aggregates drift signals across models to identify systemic patterns: if three models are simultaneously showing population drift in the same direction, the common cause is likely a portfolio composition change or a macro shift, not individual model failure — and the response is different from addressing three separate model-specific issues.
  • Updates the model risk register with each triage outcome, the evidence, and the recommended next action — so the governance committee always has a current picture of the model risk landscape, not a point-in-time snapshot from the last quarterly review.
Output: Drift alert triage report — all active alerts ranked by severity and urgency, triage classification per alert (Monitor / Investigate / Escalate), systemic pattern identification where multiple models show correlated drift, and model risk register updates reflecting the current triage outcome for each alert.
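The triage logic above — a stable PSI of 0.12 is "Monitor", the same level reached in a single-month jump is "Investigate" — can be sketched as below. The numeric thresholds are assumptions standing in for the policy's drift trigger levels, not the agent's actual configuration.

```python
def triage_psi(history: list) -> str:
    """Triage a PSI drift alert from a monthly PSI history (oldest to newest).

    Illustrative thresholds: PSI >= 0.25 escalates outright; 0.10-0.25 is
    investigated only if it jumped sharply month-on-month, else monitored.
    """
    current = history[-1]
    if current >= 0.25:
        return "Escalate"
    if current >= 0.10:
        jumped = len(history) >= 2 and current - history[-2] >= 0.05
        return "Investigate" if jumped else "Monitor"
    return "Monitor"

def systemic_pattern(psi_histories: dict, min_models: int = 3) -> bool:
    """True when several models drift upward at once, suggesting a
    portfolio composition or macro shift rather than model-specific failure."""
    rising = sum(
        1 for h in psi_histories.values() if len(h) >= 2 and h[-1] > h[-2]
    )
    return rising >= min_models
```

So `triage_psi([0.11, 0.12, 0.12])` yields "Monitor" while `triage_psi([0.04, 0.12])` yields "Investigate", matching the worked example in the text.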

Knowledge base

Model Registry (full inventory)

All production, development, and retired models — version history, approval status, owner, deployment scope, and scheduled review dates. The master record of the model estate.

Model Risk Policy (RAG)

Risk tiers, documentation standards, bias thresholds, drift trigger levels, end-of-life criteria. The policy framework applied in every model card, bias test, and drift triage output.

Model Performance Database

Monthly performance metrics for every production model — Gini, KS, PSI, CSI, approval rate, NPA rate. The longitudinal dataset that makes drift triage and trend analysis possible.
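Of the metrics listed, PSI is the one most directly driving drift triage; a minimal sketch of how it is computed from two binned score distributions is below. The stability bands in the comment are the common rule of thumb, assumed here rather than taken from LendingIQ's policy.

```python
import math

def psi(expected: list, actual: list) -> float:
    """Population Stability Index between two binned distributions
    (bin fractions, each summing to 1). Rule of thumb: < 0.10 stable,
    0.10-0.25 moderate shift, > 0.25 significant shift. Bins where
    either fraction is zero are skipped to avoid log(0)."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )
```

Identical distributions give a PSI of exactly 0; a modest shift such as `[0.5, 0.5]` versus `[0.6, 0.4]` gives roughly 0.04, well inside the stable band.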

Bias Testing Dataset

Protected characteristic proxy variables and segment-level performance data. The input to every quarterly bias test. Must be maintained and updated as the portfolio mix evolves.

Validation Findings Archive

All Model Validation Agent AI findings — the independent assessment record that feeds the model risk register maintained by this agent.

Model Risk Knowledge

Pre-training knowledge of model risk management frameworks, Basel model risk principles, bias testing methodologies, and ML governance standards up to knowledge cutoff.

Hard guardrails

Will not: Retrain, modify, or remediate any model. Model changes are the data science team's responsibility. This agent identifies what needs to change and documents it; the data science team implements the change.
Will not: Approve a model for production. Model approval is a governance committee decision based on validation findings, bias test results, and risk register status — not an automated agent action.
Will not: Close a bias finding without human confirmation that remediation has been completed and tested. Bias findings remain open in the risk register until the data science team has implemented and evidenced the remediation.

Known limitations

Bias testing detects statistical disparities — it does not determine whether a disparity constitutes illegal discrimination under Indian law. The legal determination requires qualified legal counsel and the Fair Lending AI's disparate impact analysis, not this agent's bias metric alone. Every bias finding above the policy threshold should be reviewed against this agent's model risk report and the Fair Lending AI's disparate impact analysis, together with legal counsel, before a remediation approach is determined.
Model card completeness depends on what the data science team documents. A model card cannot be more complete than the information provided to it. Undocumented training decisions, informal feature engineering choices, and unrecorded post-deployment adjustments create invisible gaps that this agent flags as missing but cannot fill. Build documentation requirements into the model development workflow — model card fields must be completed at each stage of the development process, not retrospectively assembled at validation time.
Drift triage is based on aggregate statistics, not root cause analysis. The agent identifies that drift has occurred and triages its severity; determining why drift occurred requires data science team investigation into the underlying portfolio and feature distribution changes. Every drift alert above "Monitor" severity should trigger a structured root cause investigation request to the data science team, with a defined turnaround time before the next governance committee meeting.
Agent Profile · Model Risk Manager AI · LendingIQ · Bengaluru
Last updated April 2026 · For internal use
