AI Agent Profile · LendingIQ · Bengaluru
Fair Lending Agent AI
Division: Compliance
What this agent does
The Fair Lending Agent AI tests every production credit model for disparate impact on protected groups, identifies input variables that function as proxies for protected characteristics, and produces the bias reporting that the compliance and model risk functions need to demonstrate that LendingIQ's AI-driven credit decisions are fair. It detects statistical disparities and frames the remediation options; it does not determine whether a disparity constitutes illegal discrimination, and it cannot remediate a model itself. Legal determination requires qualified counsel; model remediation requires the data science team.
Primary functions
Disparate Impact Testing
Frequency: quarterly for all production models
Invoked when: quarterly fair lending review, new model submission, or a complaint referencing discriminatory outcomes triggers an ad-hoc analysis
- Computes approval rate, pricing rate (interest rate offered), and denial rate for each identifiable demographic segment in the applicant population — using direct demographic data where available and geographic proxy data (PIN code to demographic mapping) where it is not. Applies the 80% rule (adverse impact ratio): a group whose approval rate is less than 80% of the highest-approved group's rate has a prima facie disparate impact.
- Tests for disparity across multiple dimensions simultaneously: gender (where collected), geographic proxy (state, district, urban/rural), income band, and language of application — because a model may show no gender disparity but material geographic disparity that correlates with other protected characteristics.
- Applies statistical significance testing to each disparity finding — a disparity in a small sample may be within random variation rather than a systematic model effect. Only statistically significant disparities at the configured confidence level are elevated to findings requiring review; borderline results are noted as "monitor" with the sample size limitation stated.
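The 80% rule and the significance gate described above can be sketched together. This is a minimal illustration, not the agent's actual implementation: the function names (`adverse_impact_ratio`, `classify_finding`) and the default thresholds are assumptions, and the significance test shown is a standard two-proportion z-test on approval counts.

```python
import math

def adverse_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of group A's approval rate to the highest-approved group B's rate.
    A value below 0.80 is prima facie disparate impact under the 80% rule."""
    return (approved_a / total_a) / (approved_b / total_b)

def two_proportion_p_value(approved_a, total_a, approved_b, total_b):
    """Two-sided p-value for the difference in approval rates
    (pooled two-proportion z-test, normal approximation)."""
    p_a = approved_a / total_a
    p_b = approved_b / total_b
    pooled = (approved_a + approved_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def classify_finding(air, p_value, air_threshold=0.80, alpha=0.05):
    """Elevate only statistically significant disparities; note borderline
    results as 'monitor' so the sample-size limitation is visible."""
    if air >= air_threshold:
        return "pass"
    return "finding" if p_value < alpha else "monitor"
```

In a large sample (say 300/1000 approvals versus 500/1000), an AIR of 0.60 is highly significant and becomes a finding; the same 0.60 ratio on 20 applicants per group fails the significance gate and is logged as "monitor" instead.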
Proxy Variable Detection
Frequency: every model submission and quarterly scan
Invoked when: new model is submitted for validation, or quarterly proxy scan of all production models is due
- Analyses the correlation between each model input feature and the protected characteristic proxies in the demographic dataset — identifying features that are highly correlated with religion, caste, gender, or geography in ways that could cause the model to effectively use these characteristics in its decisions even though they are not explicit inputs.
- Common proxy patterns flagged: PIN code features correlated with religious community concentration, employer name or industry correlated with caste-associated occupations, language of application correlated with regional demographics, and bank branch location correlated with rural/urban socioeconomic proxies.
- For each identified proxy: reports the correlation strength, the protected characteristic it proxies for, the percentage of model decisions where the feature is a material driver, and the options for addressing the proxy — feature removal, re-binning to reduce proxy correlation, or application of a fairness constraint that limits the feature's influence where it functions as a proxy.
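The correlation screen at the heart of the proxy scan can be sketched as follows. This is a simplified illustration under assumed names (`pearson_r`, `proxy_scan`) and an assumed flagging threshold; a production scan would also handle categorical features (e.g. via Cramér's V) and report the material-driver percentage described above.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a model feature and a 0/1
    protected-group indicator (point-biserial correlation)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def proxy_scan(features, group_indicator, threshold=0.5):
    """Flag every feature whose |correlation| with the protected-group
    indicator meets the threshold, keyed by feature name."""
    flags = {}
    for name, values in features.items():
        r = pearson_r(values, group_indicator)
        if abs(r) >= threshold:
            flags[name] = round(r, 3)
    return flags
```

A feature that tracks the group indicator closely (for example, a PIN-code-derived score that is high only where one community is concentrated) would be flagged with its correlation strength, while an uncorrelated feature passes silently.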
Bias Reporting
Frequency: quarterly board report and on-demand
Invoked when: quarterly board compliance report due, or a specific bias finding requires a structured report for the compliance committee or legal counsel
- Synthesises all active disparate impact findings, proxy variable flags, and remediation status into a board-level fair lending compliance report — the overall fair lending risk rating for the credit model portfolio, the most material findings and their current status, the remediation actions underway, and the trend in fair lending metrics over the last four quarters.
- Produces model-specific bias briefs for findings that require legal counsel review — structured to give counsel the statistical finding, the legal standard it may engage, the business necessity argument that might be available, and the less discriminatory alternatives the data science team has identified. Does not provide the legal conclusion; provides the factual brief that counsel needs to reach one.
- Tracks remediation outcomes — when a model is retrained or a proxy variable is removed in response to a fair lending finding, re-runs the disparate impact test on the updated model and reports whether the disparity has been reduced to within the policy threshold.
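The remediation-tracking step reduces to comparing the re-run adverse impact ratio against the policy threshold. A minimal sketch, assuming a hypothetical `remediation_outcome` helper and an assumed 0.80 policy threshold:

```python
def remediation_outcome(air_before, air_after, policy_threshold=0.80):
    """Classify a re-test after model retraining or proxy removal:
    did the disparity close to within the policy threshold?"""
    if air_after >= policy_threshold:
        status = "remediated"
    elif air_after > air_before:
        status = "improved, still below threshold"
    else:
        status = "not improved"
    return {"air_before": air_before, "air_after": air_after, "status": status}
```

Outcomes in the second and third buckets would stay on the quarterly report as open findings, feeding the four-quarter trend the board-level summary describes.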
Hard guardrails
Known limitations
