
AI Agent Profile · LendingIQ · Agent #81 · QOA

Onboarding Quality Agent AI

Function: Onboarding QC Reviewer
Invoked via: file submission for credit review · daily QC batch · pre-disbursement check
Runtime: AWS Bedrock · ap-south-1
Model: Claude Sonnet 4
Context window: 200K tokens

Division: Onboarding

Resume

What this agent does

The Onboarding Quality Agent AI audits every application file at the point of submission for credit review — checking that all required documents are present, that document quality meets verification standards, that mandatory data fields are populated and internally consistent, and that the file is complete before it enters the credit assessment queue. It replaces the manual onboarding QC reviewer with a 100% file audit capability that catches errors at the onboarding stage rather than at disbursement, where corrections are far more costly.

Primary functions

File Completeness Audit

Per file · 100% coverage · 2-hour turnaround

Invoked when: a file is submitted for credit review — QC audit is completed within 2 hours of submission

  • Checks every submitted file against the product-specific document checklist — the list of mandatory and conditional documents required for each loan product. Mandatory documents (identity, address, income, bank statement) must be present and legible for every application in that product category. Conditional documents (co-applicant KYC if a co-applicant is named, property documents if the product is secured, business documents if the applicant is self-employed) must be present if the application condition that triggers them is met. Missing mandatory documents are a hard QC fail; missing conditional documents where the triggering condition is present are also a hard QC fail.
  • Assesses document quality against the minimum legibility and currency standards — a document that is present but illegible (blurred scan, poor lighting, cut-off edges) fails the QC in the same way as a missing document, because the KYC Verification Agent AI cannot verify an illegible document. Currency checks confirm that the document is within its validity period — an Aadhaar card that is expired or an income document that is more than 3 months old fails the currency check for the applicable document type.
  • Cross-checks the data fields within the file for internal consistency — the name on the PAN must match the name on the Aadhaar (allowing for standard variations in transliteration); the address on the utility bill must be in the same city as the application-stated address; the income declared on the application form must be consistent with the income shown on the salary slip within a reasonable tolerance. Internal inconsistencies that are within the tolerance range are flagged as advisory; those beyond the tolerance range are a hard QC fail.
Output: QC determination per file — pass or fail, with the specific error list if fail. Error category tags applied per error — used for pattern analysis. QC pass files released to credit assessment queue. QC fail files returned to operations team with correction instructions and a resubmission deadline.
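The completeness, quality, and currency checks above can be sketched as a single audit pass. This is a minimal illustration, not the production implementation: the checklist entries, document-type names, and the 90-day income currency window are assumptions drawn from the examples in this profile, and the real checklist lives in the product-specific knowledge base.

```python
from datetime import date

# Hypothetical product checklist: mandatory documents, plus conditional
# documents keyed by the application condition that triggers them.
CHECKLIST = {
    "personal_loan": {
        "mandatory": ["identity", "address", "income", "bank_statement"],
        "conditional": {"has_co_applicant": ["co_applicant_kyc"]},
    }
}

# Maximum document age in days per document type (illustrative values;
# the profile cites a 3-month window for income documents).
MAX_AGE_DAYS = {"income": 90}

def audit_file(product, documents, conditions, today=None):
    """Return (passed, errors) for one application file.

    `documents` maps doc type -> {"legible": bool, "issued": date};
    `conditions` maps condition name -> bool.
    """
    today = today or date.today()
    errors = []
    checklist = CHECKLIST[product]
    required = list(checklist["mandatory"])
    # Conditional documents become required when their trigger is present.
    for condition, docs in checklist["conditional"].items():
        if conditions.get(condition):
            required += docs
    for doc_type in required:
        doc = documents.get(doc_type)
        if doc is None:
            errors.append((doc_type, "missing mandatory document"))
            continue
        if not doc["legible"]:
            errors.append((doc_type, "illegible document"))
        max_age = MAX_AGE_DAYS.get(doc_type)
        if max_age and (today - doc["issued"]).days > max_age:
            errors.append((doc_type, "document outside currency window"))
    return (not errors), errors
```

A file with any error is a hard QC fail; the error list feeds the error-tagging step described next.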

Error Tagging

Per QC fail · categorised error list

Invoked when: a file fails the QC audit — error tags are assigned to each failure before the correction instructions are dispatched

  • Tags every QC failure with a specific error category from the standardised 15-category error taxonomy, which covers the majority of QC failures — including: missing mandatory document, expired document, illegible document, name mismatch (minor variation), name mismatch (significant variation), address mismatch, income document outside currency window, incomplete bank statement (missing months), missing co-applicant KYC, missing guarantor declaration, application form incomplete (specific fields), signature missing, photograph quality insufficient, and document fraud signal (referred to Fraud Detection Agent AI). Each error is tagged to exactly one category; where a single document fails on multiple grounds (expired and illegible), each failure is tagged separately.
  • Assigns a correction instruction to each error tag — the specific action the operations team must take to resolve the error. The correction instruction is borrower-facing where the borrower must provide a new or corrected document, and internal where the error is a data entry issue that can be corrected in the LOS without borrower contact. Borrower-facing corrections include a script for the operations team to use when contacting the borrower — ensuring that the borrower receives a clear, consistent explanation of what is needed and why.
  • Sets a resubmission deadline for each QC fail — 48 hours for errors that require the borrower to provide a corrected document, and 24 hours for internal data correction errors. Files that are not resubmitted within the deadline are escalated to the operations head and included in the Onboarding SLA Agent AI's TAT monitoring as a QC-hold delay.
Output: Error tag list per failed file — error category, specific field or document, correction instruction, and resubmission deadline. Borrower-facing correction requests dispatched by the operations team using the provided script. Error tags stored in the application record for pattern analysis.
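The tagging and deadline rules above reduce to a small lookup: each error category maps to a correction channel (borrower-facing or internal), and the channel determines the resubmission deadline (48 hours vs 24 hours). The sketch below is illustrative only — the taxonomy subset and field names are assumptions, not the agent's actual schema.

```python
from datetime import datetime, timedelta

# Illustrative subset of the error taxonomy: category -> correction channel.
# "borrower" means the borrower must supply a new or corrected document;
# "internal" means a data-entry fix in the LOS without borrower contact.
TAXONOMY = {
    "missing mandatory document": "borrower",
    "expired document": "borrower",
    "illegible document": "borrower",
    "application form incomplete": "internal",
    "name mismatch (minor variation)": "internal",
}

def tag_errors(errors, failed_at):
    """Attach a correction channel and resubmission deadline to each error.

    Borrower-facing corrections get 48 hours; internal fixes get 24 hours,
    per the deadlines stated in this profile.
    """
    tagged = []
    for item, category in errors:
        channel = TAXONOMY[category]
        hours = 48 if channel == "borrower" else 24
        tagged.append({
            "item": item,
            "category": category,
            "channel": channel,
            "deadline": failed_at + timedelta(hours=hours),
        })
    return tagged
```

Files not resubmitted by the computed deadline would then be escalated to the operations head and surfaced to the Onboarding SLA Agent AI as a QC-hold delay.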

Rejection Reason Logging

Per rejected application · borrower journey feedback

Invoked when: an application is declined at QC or credit stage — rejection reason is logged and the borrower feedback loop is triggered

  • Logs the specific rejection reason for every application that is declined — whether at the QC stage (file rejected for failure to correct errors within the resubmission deadline) or at the credit stage (file passed QC but declined on credit grounds). The rejection reason log is the dataset that enables the borrower journey improvement cycle — identifying which rejection reasons are most frequent, whether they are concentrated in specific channels or geographies, and whether the rejection rate is improving or deteriorating over time.
  • Feeds the rejection reason data back into the onboarding process — where a specific error category is responsible for more than 20% of QC failures in a week, the agent flags the pattern to the operations head and the onboarding UX team. A systematic error pattern indicates a problem in the borrower-facing onboarding flow (borrowers are consistently submitting the wrong document type) or in the operations team's onboarding guidance (borrowers are being incorrectly briefed on the document requirements). The fix is a process change, not more error correction.
  • Ensures that every declined applicant receives a clear rejection reason communication — the specific ground for decline and, where relevant, what the applicant would need to do differently to reapply. The rejection communication is dispatched by the operations team using the agent's logged rejection reason as the basis; borrowers who do not receive a rejection reason have no path to remediation, which increases complaints and regulatory exposure.
Output: Rejection reason log — specific reason, stage of rejection (QC or credit), channel, geography, and date. Weekly rejection reason pattern analysis — top 5 rejection reasons by volume and trend. Pattern flags for operations head where a single category exceeds 20% of weekly rejections. Rejection communication basis provided to operations team for borrower notification.
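The 20% pattern-flag rule described above is a simple share-of-volume threshold over a week's logged rejection reasons. A minimal sketch, assuming the week's log is available as a flat list of reason categories (the function name and signature are hypothetical):

```python
from collections import Counter

def weekly_pattern_flags(rejection_reasons, threshold=0.20):
    """Return categories exceeding the threshold share of weekly rejections.

    `rejection_reasons` is the week's list of logged reason categories;
    the profile flags any category responsible for more than 20% of
    QC failures in a week to the operations head and onboarding UX team.
    """
    counts = Counter(rejection_reasons)
    total = len(rejection_reasons)
    return sorted(cat for cat, n in counts.items() if n / total > threshold)
```

A flagged category signals a systematic problem in the onboarding flow or the borrower briefing, where the fix is a process change rather than more error correction.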

Knowledge base

Product-Specific Document Checklist

The mandatory and conditional document requirements for each loan product — the primary reference for the completeness audit. Maintained by the operations head and updated when product requirements change.

Document Quality and Currency Standards

The minimum legibility standards and maximum document age for each document type — the basis for the quality and currency checks in the completeness audit.

Error Taxonomy — 15 Standard Error Categories

The standardised error categories used for error tagging and pattern analysis. Updated when new error types emerge from the QC data that are not captured by the existing taxonomy.

LOS — Application File and Document Repository

The source of the application data and documents — the file that is audited. Includes the KYC Verification Agent AI's verification outcomes, which are cross-referenced in the QC check.

Onboarding SLA Agent AI — TAT Integration

QC hold times fed into the overall application TAT monitoring — enabling the SLA Agent to include QC delays in its bottleneck detection and escalation workflow.

Pre-Training — Onboarding Quality Control Knowledge

File completeness audit methodology, document quality standards for Indian financial services, and QC process design best practices up to knowledge cutoff.

Hard guardrails

Will not: Make any credit decision. A QC pass means the file is complete — it is not a credit approval or a recommendation to approve. Credit decisions are made by the Credit Decision Agent AI based on the credit policy, not by the QC process.
Will not: Override a KYC Verification Agent AI failure outcome. A document that has failed KYC verification fails QC regardless of the QC agent's own assessment of the document. KYC and QC are complementary checks; KYC verification outcomes are incorporated into QC results, not overridden by them.
Will not: Tag a fraud signal as a standard QC error. Document fraud signals are escalated directly to the Fraud Detection Agent AI — they are not processed through the standard correction instruction workflow, which would allow the borrower to substitute a corrected document. A fraud signal puts the application on hold pending fraud review.
Will not: Release a QC-failed file to the credit assessment queue before the errors are corrected and the file resubmitted. The QC gate is a hard gate — files do not proceed to credit review in a partially-complete or error-flagged state.

Known limitations

The QC audit checks whether required documents are present, legible, and internally consistent — it cannot verify whether the documents are genuine. A professionally produced fraudulent document that meets all the quality and consistency standards will pass the QC audit. Document fraud detection is the responsibility of the Fraud Detection Agent AI, which operates on different signals (cross-application patterns, metadata analysis, third-party verification) than the QC completeness check. Ensure that the Fraud Detection Agent AI runs in parallel with the QC process — not sequentially — so that fraud signals are available at the same time as QC results. A file that passes QC and fraud checks simultaneously can proceed to credit review without delay; a sequential process would require two separate review cycles.
The document currency standards (maximum age of documents) are product-specific and may not be uniformly applied where the operations team has historically accepted documents outside the standard currency window for specific borrower segments. The QC agent applies the formal standard; it cannot account for informal practices that the operations team may consider valid exceptions. Conduct a one-time review of the formal document currency standards against the operations team's actual practice before the QC agent is activated — resolving any discrepancies between the formal standard and the informal practice before the agent begins flagging long-accepted documents as QC failures.
Agent Profile · Onboarding Quality Agent AI · LendingIQ · Agent #81 · Last updated April 2026 · For internal use
