Use case #0003

Feedback loops: how QC AI improves origination and ops agent accuracy

A quality control process that catches errors and corrects them before disbursement is valuable. A quality control process that catches errors, corrects them, and then uses the error data to reduce the probability of the same errors appearing in future files is compoundingly valuable — because the cost of catching and correcting an error is always higher than the cost of not making the error in the first place. The Quality Control Agent AI closes the loop between QC output and origination/ops input: every error is tagged, every tag contributes to an individual accuracy score for the RM or ops agent who made the error, every accuracy score is shared weekly with the individual and their manager, and every consistent error pattern triggers a targeted intervention that addresses the root cause — not the symptom.

The feedback loop structure: from QC tag to individual accuracy to targeted intervention

The feedback loop has four stages. First, the QC AI tags every error with the responsible party — the RM who submitted the file with the bank statement gap, the ops agent who set up the wrong NACH account. This attribution is not punitive — it is the mechanism by which the loop closes. Without attribution, aggregate error rates can fall without anyone knowing whose behaviour needs to change. With attribution, the institution can identify that 40% of A03 (bank statement gap) errors are generated by 3 of its 28 RMs — and target those 3 with a specific intervention rather than re-training all 28.
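The tagging-and-attribution step can be sketched as a simple aggregation over tagged errors. This is an illustrative sketch only — the `ErrorTag` fields, RM names, and error data below are hypothetical, not the QC AI's actual schema:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical error-tag record; field names are illustrative.
@dataclass(frozen=True)
class ErrorTag:
    file_id: str
    error_code: str         # e.g. "A03" = bank statement gap
    responsible_party: str  # RM or ops agent who introduced the error

def attribute(tags, error_code):
    """Count errors of one type per responsible party, most frequent first."""
    counts = Counter(t.responsible_party for t in tags
                     if t.error_code == error_code)
    return counts.most_common()

# Toy data: A03 errors concentrated in two RMs, one C02 error.
tags = [
    ErrorTag("F1", "A03", "rm_kiran"),
    ErrorTag("F2", "A03", "rm_kiran"),
    ErrorTag("F3", "A03", "rm_priya"),
    ErrorTag("F4", "A03", "rm_priya"),
    ErrorTag("F5", "A03", "rm_arjun"),
    ErrorTag("F6", "C02", "rm_kiran"),
]
print(attribute(tags, "A03"))
# → [('rm_kiran', 2), ('rm_priya', 2), ('rm_arjun', 1)]
```

A ranking like this is what lets the institution see that a handful of RMs account for a disproportionate share of one error type — and target only them.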

Second, the weekly accuracy report gives every RM and ops agent their personal error rate for the trailing 30 days — broken down by error category, with the specific errors listed. An RM who has a 28% file error rate in October but a 14% error rate in November — because they received coaching on bank statement date range specification after the QC system flagged their A03 pattern — can see the improvement in their own data. The feedback is personal and specific, not generic.
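The trailing-30-day personal report reduces to a windowed aggregation per RM. A minimal sketch, assuming each QC'd file carries a submission date and a list of error codes (a hypothetical schema, not the product's actual data model):

```python
from datetime import date, timedelta

def trailing_report(files, as_of, window_days=30):
    """Per-category error counts and overall file error rate for one RM
    over the trailing window."""
    cutoff = as_of - timedelta(days=window_days)
    recent = [f for f in files if cutoff < f["date"] <= as_of]
    by_code = {}
    for f in recent:
        for code in f["errors"]:
            by_code[code] = by_code.get(code, 0) + 1
    files_with_errors = sum(1 for f in recent if f["errors"])
    rate = files_with_errors / len(recent) if recent else 0.0
    return {"files": len(recent), "error_rate": rate, "by_code": by_code}

# Toy data for one RM; the October file falls outside the window.
files = [
    {"date": date(2025, 11, 3), "errors": ["A03"]},
    {"date": date(2025, 11, 10), "errors": []},
    {"date": date(2025, 11, 20), "errors": ["B01", "B03"]},
    {"date": date(2025, 10, 1), "errors": ["A03"]},
]
print(trailing_report(files, as_of=date(2025, 11, 30)))
```

Because the report lists the specific errors behind each category count, the RM sees exactly which files drove their rate — the "personal and specific" property described above.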

Third, consistent error patterns trigger process interventions — changes to the onboarding flow, the RM checklist, the CBS setup template, or the training programme that address the root cause rather than the individual error. When the QC AI finds that A03 (bank statement gap) errors are declining in digital onboarding applications but not in branch-originated applications, the root cause is the branch's document request process — not the RM's individual knowledge.
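The digital-vs-branch pattern in that example is a channel-split trend check: flag any origination channel where a given error's rate is not falling. A sketch under assumed schema (each file tagged with a `channel` and its error codes; the 5pp threshold is illustrative):

```python
def stalled_channels(prev_files, curr_files, code, min_drop=0.05):
    """Channels whose rate for `code` did not fall by at least min_drop
    between two periods — candidates for a process-level intervention."""
    def rates(files):
        out = {}
        for f in files:
            ch = out.setdefault(f["channel"], [0, 0])
            ch[0] += 1                    # files seen on this channel
            ch[1] += code in f["errors"]  # files with this error
        return {c: e / n for c, (n, e) in out.items()}
    prev, curr = rates(prev_files), rates(curr_files)
    return [c for c in curr
            if c in prev and prev[c] - curr[c] < min_drop]

# Toy data: digital A03 rate falls 50% → 0%; branch stays at 50%.
prev = ([{"channel": "digital", "errors": ["A03"]}] * 2
        + [{"channel": "digital", "errors": []}] * 2
        + [{"channel": "branch", "errors": ["A03"]}] * 2
        + [{"channel": "branch", "errors": []}] * 2)
curr = ([{"channel": "digital", "errors": []}] * 4
        + [{"channel": "branch", "errors": ["A03"]}] * 2
        + [{"channel": "branch", "errors": []}] * 2)
print(stalled_channels(prev, curr, "A03"))  # → ['branch']
```

A stalled channel points the intervention at the channel's process — here, the branch's document request flow — rather than at individual RMs.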

Fourth, the institution-wide error rate is reported monthly to the Board Operations Committee as a quality governance metric — alongside the specific interventions taken and their measured impact. Quality is not a QC team metric; it is a Board-level operational risk metric.

"An error rate that appears in a weekly individual report is an improvement signal. An error rate that appears only in a monthly QC summary report is a retrospective observation. The former changes behaviour. The latter documents it."

The feedback dashboard: October-to-November improvement across 3 feedback loops

The individual RM accuracy report: Kiran M. · November 2025

| Error type | October count | November count | Trend | Root cause and intervention |
|---|---|---|---|---|
| A03 · Bank statement gap | 4 of 9 files (44%) | 1 of 12 files (8%) | −36pp | Digital onboarding date range fix (Loop 1) removed most instances. Residual: 1 branch file. Kiran coached on branch document request specification. |
| C02 · NACH mismatch | 2 of 9 files (22%) | 0 of 12 files (0%) | −22pp | Auto-population from AA (Loop 2) eliminated manual entry errors entirely for Kiran. System fix, not coaching. |
| B01 · EC expired | 1 of 9 files (11%) | 2 of 12 files (17%) | +6pp | 2 LAP files with 5-month+ TAT · Loop 3 not yet implemented · Kiran specifically alerted: any LAP file open >4 months must have EC refresh initiated now. |
| B03 · Valuer not on panel | 0 | 1 of 12 files (8%) | New error | Valuer Ravi Associates removed from panel Oct 1 · Kiran used Oct valuation (obtained before removal) for a Nov sanction · Kiran given updated panel list · Training: panel check at sanction, not only at application. |
| Overall error rate | 7 of 9 files (77.8%) | 4 of 12 files (33.3%) | −44.5pp | Large improvement driven by systemic fixes (Loops 1 and 2). Remaining errors: B01 (process gap) and B03 (new error — panel update knowledge). Kiran is a new RM with a rapidly improving profile. |
- −3.6pp: Institution error rate improvement Oct→Nov (23.4% → 19.8%) · From 2 implemented feedback loops · Loop 3 pending Dec 15
- −44.5pp: Kiran M.'s personal error rate improvement (77.8% → 33.3%) · New RM · Rapid improvement from system fixes eliminating manual entry errors
- −8.4pp: C02 (NACH mismatch) reduction · Auto-population from AA eliminated manual transcription errors · First-EMI bounce down 34% in November cohort
- Monthly: Board Operations Committee quality report · Error rate, interventions, impact measured · Quality as a board-level operational risk metric

Kiran M.'s 77.8% error rate in October was not evidence of a poor RM — it was evidence that two of the most common errors in LAP origination were being caused by system design rather than by Kiran's skill

The A03 errors that accounted for 4 of Kiran's 7 October errors were caused by the digital onboarding date range instruction — an error just as likely to appear in any other RM's files. The C02 errors were caused by a manual NACH entry process that produced mistakes for Kiran at the same rate it did for every other RM who used it. When the feedback loop attribution analysis showed that A03 and C02 errors were distributed across all RMs rather than concentrated in Kiran, the QC AI correctly identified these as systemic errors — caused by the process, not the individual. The system fixes reduced Kiran's error rate from 77.8% to 33.3% in one month without a single coaching session on those two error types. The remaining errors (B01 EC expiry and B03 panel valuer) are genuinely Kiran-specific knowledge gaps — and those are the ones that received coaching. The feedback loop's first function is not training — it is attribution: distinguishing individual knowledge gaps from systemic process failures, and directing the right intervention at the right cause.
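The systemic-vs-individual distinction can be approximated by a concentration check: an error spread thinly across many RMs is systemic; one produced mostly by a few RMs is individual. The thresholds, schema, and toy data below are illustrative assumptions, not the QC AI's actual policy:

```python
from collections import Counter

def classify_error(tags, code, rm_count, concentration=0.5):
    """Label an error code 'systemic' when instances are spread across
    many RMs, 'individual' when a small share of RMs produce most of them.
    Thresholds here are illustrative."""
    counts = Counter(t["rm"] for t in tags if t["code"] == code)
    total = sum(counts.values())
    if total == 0:
        return "no data"
    # Share of errors produced by the top 20% of RMs (at least one RM).
    top_n = max(1, rm_count // 5)
    top_share = sum(n for _, n in counts.most_common(top_n)) / total
    return "individual" if top_share >= concentration else "systemic"

# Toy data: A03 spread across 14 of 28 RMs (one each);
# B03 concentrated in two RMs.
tags = ([{"rm": f"rm{i}", "code": "A03"} for i in range(14)]
        + [{"rm": "rm_kiran", "code": "B03"}] * 3
        + [{"rm": "rm_dev", "code": "B03"}] * 1)
print(classify_error(tags, "A03", rm_count=28))  # → systemic
print(classify_error(tags, "B03", rm_count=28))  # → individual
```

The systemic label routes the fix to the process (onboarding flow, CBS template); the individual label routes it to coaching — exactly the split the October-to-November data above illustrates.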
