
AI Agent Profile · LendingIQ · Bengaluru

Fair Lending Agent AI

Invoked via: model governance & compliance pipeline
Runtime: AWS Bedrock · ap-south-1
Model: Claude Sonnet 4
Context window: 200K tokens

Division: Compliance


What this agent does

The Fair Lending Agent AI tests every production credit model for disparate impact on protected groups, identifies input variables that function as proxies for protected characteristics, and produces the bias reporting that the compliance and model risk functions need to demonstrate that LendingIQ's AI-driven credit decisions are fair. It detects statistical disparities and frames the remediation options — it does not determine whether a disparity constitutes illegal discrimination, and it cannot remediate a model. Legal determination requires qualified counsel; model remediation requires the data science team.

Primary functions

Disparate Impact Testing

Quarterly for all production models

Invoked when: quarterly fair lending review, new model submission, or a complaint referencing discriminatory outcomes triggers an ad-hoc analysis

  • Computes approval rate, pricing rate (interest rate offered), and denial rate for each identifiable demographic segment in the applicant population — using direct demographic data where available and geographic proxy data (PIN code to demographic mapping) where it is not. Applies the 80% rule (adverse impact ratio): a group whose approval rate is less than 80% of the highest-approved group's rate has a prima facie disparate impact.
  • Tests for disparity across multiple dimensions simultaneously: gender (where collected), geographic proxy (state, district, urban/rural), income band, and language of application — because a model may show no gender disparity but material geographic disparity that correlates with other protected characteristics.
  • Applies statistical significance testing to each disparity finding — a disparity in a small sample may be within random variation rather than a systematic model effect. Only statistically significant disparities at the configured confidence level are elevated to findings requiring review; borderline results are noted as "monitor" with the sample size limitation stated.
Output: Disparate impact report — approval and denial rates by demographic segment, adverse impact ratios, statistical significance assessment per finding, segment-level disparity map, and a priority ranking of findings by severity and statistical confidence.
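The 80% rule and the significance screen above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical segment names and counts; a stdlib two-proportion z-test stands in for whichever significance test the configured confidence level actually specifies:

```python
from math import sqrt, erf

def adverse_impact_ratio(rates: dict) -> dict:
    """Each group's approval rate divided by the highest group's rate.
    Values below 0.80 indicate prima facie disparate impact (80% rule)."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

def two_proportion_z(approved_a: int, total_a: int,
                     approved_b: int, total_b: int):
    """Two-sided z-test that two approval rates differ; returns (z, p)."""
    p_a, p_b = approved_a / total_a, approved_b / total_b
    pooled = (approved_a + approved_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical segments: B's AIR is 0.54 / 0.72 = 0.75, below the 0.80 line.
rates = {"segment_A": 0.72, "segment_B": 0.54}
air = adverse_impact_ratio(rates)
z, p = two_proportion_z(540, 1000, 720, 1000)
```

With 1,000 applicants per segment the gap is highly significant, so it would be elevated to a finding; the same rates on a sample of a few dozen would fail the significance screen and land in the "monitor" bucket instead.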

Proxy Variable Detection

Every model submission and quarterly scan

Invoked when: new model is submitted for validation, or quarterly proxy scan of all production models is due

  • Analyses the correlation between each model input feature and the protected characteristic proxies in the demographic dataset — identifying features that are highly correlated with religion, caste, gender, or geography in ways that could cause the model to effectively use these characteristics in its decisions even though they are not explicit inputs.
  • Common proxy patterns flagged: PIN code features correlated with religious community concentration, employer name or industry correlated with caste-associated occupations, language of application correlated with regional demographics, and bank branch location correlated with rural/urban socioeconomic proxies.
  • For each identified proxy: reports the correlation strength, the protected characteristic it proxies for, the percentage of model decisions where the feature is a material driver, and the options for addressing the proxy — feature removal, re-binning to reduce proxy correlation, or application of a fairness constraint that limits the feature's influence where it functions as a proxy.
Output: Proxy variable report — feature-by-feature proxy correlation analysis, high-risk features ranked by correlation strength and decision influence, protected characteristic each feature proxies for, and remediation options for each high-risk feature for the data science team to evaluate.
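One plausible screening statistic for the feature-to-proxy correlation step is Cramér's V between a categorical model feature and a demographic proxy label. The sketch below is an assumption about how such a scan could work, not a description of the production scanner, and the PIN-code and group values are invented:

```python
from collections import Counter
from itertools import product
from math import sqrt

def cramers_v(feature: list, proxy_group: list) -> float:
    """Cramér's V between two categorical variables:
    0.0 = no association, 1.0 = the feature perfectly predicts the proxy."""
    n = len(feature)
    joint = Counter(zip(feature, proxy_group))
    f_tot, g_tot = Counter(feature), Counter(proxy_group)
    chi2 = 0.0
    for f, g in product(f_tot, g_tot):
        expected = f_tot[f] * g_tot[g] / n
        chi2 += (joint.get((f, g), 0) - expected) ** 2 / expected
    k = min(len(f_tot), len(g_tot)) - 1
    return sqrt(chi2 / (n * k)) if k else 0.0

# Toy data: the PIN-code prefix perfectly separates the two proxy groups,
# so this feature would be flagged as a high-risk proxy (V = 1.0).
pins   = ["560", "560", "560", "110", "110", "110"]
groups = ["g1",  "g1",  "g1",  "g2",  "g2",  "g2"]
v = cramers_v(pins, groups)
```

A real scan would pair a statistic like this with the feature's decision influence (e.g. its share of material score drivers), since a strongly correlated feature that barely moves the score is a lower remediation priority.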

Bias Reporting

Quarterly board report and on-demand

Invoked when: quarterly board compliance report due, or a specific bias finding requires a structured report for the compliance committee or legal counsel

  • Synthesises all active disparate impact findings, proxy variable flags, and remediation status into a board-level fair lending compliance report — the overall fair lending risk rating for the credit model portfolio, the most material findings and their current status, the remediation actions underway, and the trend in fair lending metrics over the last four quarters.
  • Produces model-specific bias briefs for findings that require legal counsel review — structured to give counsel the statistical finding, the legal standard it may engage, the business necessity argument that might be available, and the less discriminatory alternatives the data science team has identified. Does not provide the legal conclusion; provides the factual brief that counsel needs to reach one.
  • Tracks remediation outcomes — when a model is retrained or a proxy variable is removed in response to a fair lending finding, re-runs the disparate impact test on the updated model and reports whether the disparity has been reduced to within the policy threshold.
Output: Quarterly fair lending board report — portfolio fair lending risk rating, active findings with status and remediation progress, trend analysis over four quarters, model-specific bias briefs for findings requiring legal review, and post-remediation disparity test results for updated models.
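The four-quarter trend element of the board report can be reduced to a toy classifier like the following. The minimum-AIR metric and the 0.02 movement threshold are illustrative assumptions, not stated LendingIQ policy:

```python
def air_trend(quarterly_min_air: list) -> str:
    """Label the four-quarter trend in the portfolio's worst (minimum)
    adverse impact ratio. Thresholds here are purely illustrative."""
    earliest, latest = quarterly_min_air[0], quarterly_min_air[-1]
    if latest >= earliest + 0.02:
        return "improving"
    if latest <= earliest - 0.02:
        return "deteriorating"
    return "stable"

# Worst AIR has climbed from 0.74 to 0.81 over four quarters.
trend = air_trend([0.74, 0.76, 0.79, 0.81])
```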

Hard guardrails

Will not: Determine that a detected disparity constitutes illegal discrimination. Disparate impact is a statistical finding; illegal discrimination is a legal conclusion. All material disparity findings require legal counsel review before any regulatory disclosure, litigation response, or public statement about the finding.
Will not: Remediate a model. Fair lending findings are directed to the data science team, which remediates with its own technical expertise and the model risk governance committee's approval. The agent identifies the issue and frames the options; it does not implement the fix.
Will not: Use actual protected characteristic data (religion, caste) in its analysis — only geographic and behavioural proxies that are ethically permissible inputs to disparate impact analysis. Where actual protected characteristic data is not collected (which is the appropriate practice), the analysis uses demographic proxies with stated limitations.

Known limitations

Proxy-based demographic analysis has inherent measurement error. A PIN code correlated with a religious community is not a perfect demographic classifier — individuals within that PIN code are not uniformly members of that community. The disparity measurement carries the imprecision of the proxy, which must be stated clearly in any report used for legal or regulatory purposes.
Mitigation: Commission a direct demographic data collection study on a representative sample of LendingIQ applicants — with appropriate consent and data minimisation safeguards — to calibrate the accuracy of the geographic proxy approach and to provide direct-demographic validation for the proxy-based findings.

The 80% rule is a regulatory standard of reference, not a hard legal threshold in Indian law. Indian courts and regulators have not established the same bright-line disparate impact standard as US fair lending law. Applying the 80% rule creates a conservative monitoring framework, but findings below the 80% threshold are not guaranteed to be legally problematic, and findings above it are not guaranteed to be legally safe.
Mitigation: Brief the human CCO and legal counsel on the limitations of the 80% rule as applied in the Indian regulatory context at the start of each annual fair lending cycle, so that all stakeholders understand it as a monitoring heuristic, not a legal safe harbour.

Remediation effectiveness depends on the data science team's response. A bias finding that is logged and not remediated within a reasonable timeframe accumulates as a governance risk — the model is operating with a known disparity problem. The fair lending agent tracks remediation status but cannot compel the data science team to act.
Mitigation: Build remediation timelines into the fair lending governance framework — any finding rated "material" must have a remediation plan approved by the governance committee within 30 days and implemented within 90 days. Overdue remediations are escalated to the Board Risk Committee as open governance items.
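The 30/90-day remediation windows described above lend themselves to a simple status check. The function and status labels below are a hypothetical sketch of how overdue findings could be flagged for escalation:

```python
from datetime import date, timedelta
from typing import Optional

# Policy windows from the governance framework: plan approved within 30 days
# of the finding, remediation implemented within 90 days.
PLAN_DAYS, IMPLEMENT_DAYS = 30, 90

def remediation_status(raised: date,
                       plan_approved: Optional[date],
                       implemented: Optional[date],
                       today: date) -> str:
    """Classify a material finding against the 30/90-day windows.
    'escalate' statuses would go to the Board Risk Committee."""
    if implemented is not None:
        return "closed"
    if plan_approved is None and today > raised + timedelta(days=PLAN_DAYS):
        return "escalate: plan overdue"
    if today > raised + timedelta(days=IMPLEMENT_DAYS):
        return "escalate: implementation overdue"
    return "on track"

# Finding raised 5 Jan with no approved plan by 1 Mar: past the 30-day window.
status = remediation_status(date(2026, 1, 5), None, None, date(2026, 3, 1))
```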
Agent Profile · Fair Lending Agent AI · LendingIQ · Bengaluru
Last updated April 2026 · For internal use
