
AI Agent Profile · LendingIQ · Bengaluru

Chief Internal Auditor AI

Invoked via: internal orchestration API
Runtime: AWS Bedrock · ap-south-1
Model: Claude Sonnet 4
Context window: 200K tokens
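As a rough illustration of what the orchestration-side call might look like, the sketch below assembles a Bedrock Converse API request for the runtime listed above. This is a minimal sketch, not LendingIQ's actual internal API: the model ID string and the inference parameters are assumptions, and only the region is taken from the profile.

```python
def build_request(prompt: str) -> dict:
    """Assemble a Bedrock Converse API request body.

    An orchestrator would typically send this via
    boto3.client("bedrock-runtime", region_name="ap-south-1").converse(**request).
    """
    return {
        "modelId": "anthropic.claude-sonnet-4-20250514-v1:0",  # assumed model ID
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 2048, "temperature": 0.0},  # illustrative
    }
```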

Division: Compliance


What this agent does

The Chief Internal Auditor AI builds the annual audit plan, scopes individual audit engagements using a risk-based methodology, synthesises evidence packages into structured findings, prepares the organisation for RBI inspections, and manages the escalation of significant findings to the Audit Committee and peer agents. It does not conduct fieldwork independently, sample transactions from live systems, or substitute for the professional scepticism that audit standards require of human auditors.

Primary functions

Audit Planning

Triggered annually and on material risk change

Invoked when: annual audit plan cycle opens, or a material risk event warrants an unplanned engagement

  • Reads the full audit universe — every auditable entity: credit operations, collections, IT systems, treasury, HR, vendor management, regulatory compliance — and the risk register scores attached to each, then produces a proposed annual audit plan that allocates coverage in proportion to residual risk rating and time since last audit.
  • Incorporates carry-forward considerations: engagements deferred from the prior year, areas with unresolved prior findings, and any entity where the last audit was more than 24 months ago regardless of risk rating — a low-risk entity that has never been audited is itself a risk.
  • Maps every proposed engagement to the regulatory expectations it satisfies — RBI's guidelines for NBFCs specify minimum audit coverage of credit, IT, and compliance; the plan must demonstrate that coverage to the Audit Committee and to inspectors.
  • Produces resource estimates per engagement — working days and skill requirements — so the human CIA can validate whether the plan is achievable with the available audit team before presenting it to the Audit Committee for approval.
Output: Draft annual audit plan — audit universe with risk ratings, proposed engagement calendar, regulatory coverage map, resource estimate per engagement, and a rationale note for any high-risk entity not covered in the plan year.

Risk-Based Scope Design

Triggered at engagement opening

Invoked when: an individual audit engagement is opened and the scope needs to be defined before fieldwork begins

  • Reads the risk register entry and last audit report for the entity under review, the relevant policy and process documents that define the control framework, and any prior RBI observations touching the same area — and produces a scope document that identifies the specific risks to test, the controls expected to mitigate them, and the audit objectives that define what "pass" looks like.
  • Designs the testing approach for each control: what evidence the auditor should request, what a compliant sample looks like versus a non-compliant one, and what sample size is appropriate given the transaction volume and the risk rating of the control.
  • Cannot design the testing approach for controls it has no documentation of. If a process is undocumented or the SOP has never been written, the agent flags this as itself a finding — absence of documented process is an audit observation — rather than trying to infer the control from first principles.
  • Calibrates scope depth to available audit days: for a 5-day engagement it will recommend testing fewer controls at greater depth rather than a shallow pass across all controls, because shallow testing produces findings of lower evidential quality.
Output: Engagement scope document — risks and controls matrix, testing approach per control, evidence request list for the auditee, sample size guidance, and scope exclusions with rationale.
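The sample-size guidance could follow a heuristic like the one below, where the base sample grows with the control's risk rating and is capped at the population. The bands and floors are assumptions for demonstration; real guidance would come from LendingIQ's audit methodology.

```python
# Illustrative base sample sizes per control risk rating (assumed values).
BASE_SAMPLE = {"Low": 15, "Medium": 25, "High": 40}

def sample_size(population: int, control_risk: str) -> int:
    """Sample size for an attribute test, capped at the full population."""
    base = BASE_SAMPLE[control_risk]
    if population <= base:
        return population          # small population: test everything
    if population > 10_000:
        base = int(base * 1.5)     # scale up modestly for very large volumes
    return base
```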

Regulator Preparation

Triggered ahead of RBI inspection or submission

Invoked when: RBI Annual Financial Inspection scheduled, advance information request received, or supervisory meeting announced

  • Reads the last RBI inspection report in full, the action-taken report submitted in response, and the current status of every observation — which are genuinely closed with evidence, which are partially remediated, and which remain open — and produces a readiness brief that tells the human CIA exactly where LendingIQ is exposed going into the inspection.
  • Maps the open and partially-remediated observations to the internal audit engagements conducted since the last inspection — demonstrating to the regulator that the internal audit function has independently tested the remediated areas, not just accepted management's closure assertions.
  • Drafts the advance information package responses — the structured questionnaires RBI sends ahead of inspections — by pulling the relevant data and documentation from the evidence store and populating each response field. Flags fields where data is unavailable, inconsistent, or likely to draw scrutiny, so the human CIA can address these before submission.
  • Does not predict what the RBI inspection team will focus on or guarantee that areas not identified as exposed are clean. It works from available documentation — the regulator's access to live systems and the professional judgement of experienced inspectors goes beyond what any document-based analysis can replicate.
Output: RBI inspection readiness brief — open observation tracker with evidence status, internal audit coverage map against prior observations, draft advance information package responses, and a flagged list of exposures requiring management attention before the inspection commences.
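The coverage-mapping step can be sketched as a join between open observations and subsequent engagements. The field names (`id`, `area`, `status`, `name`) and status labels are assumptions about the data shape, not the actual evidence-store schema.

```python
def coverage_map(observations: list[dict], engagements: list[dict]) -> dict:
    """For each open or partially-remediated RBI observation, list the
    internal audit engagements since the last inspection that touched
    the same area, and flag whether it was independently tested."""
    report = {}
    for obs in observations:
        if obs["status"] in ("Open", "Partially remediated"):
            covering = [e["name"] for e in engagements if e["area"] == obs["area"]]
            report[obs["id"]] = {
                "status": obs["status"],
                "independently_tested": bool(covering),
                "engagements": covering,
            }
    return report
```

An observation with an empty `engagements` list is exactly the exposure the readiness brief should surface: a remediated area whose closure rests on management assertion alone.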

Finding Documentation & Escalation

Triggered as fieldwork evidence is received

Invoked when: auditor submits evidence package and testing results for a completed engagement or individual control test

  • Reads the evidence package — testing workpapers, sample results, auditee-provided documents, and the auditor's narrative notes — and structures each exception into a formally documented finding: condition observed, criteria violated (policy clause or regulatory requirement cited), cause identified, effect on risk or operations, and recommended management action.
  • Rates each finding on a consistent severity scale — Critical (immediate board escalation), High (Audit Committee reportable), Medium (management letter), Low (management letter, optional) — applied against the defined severity criteria in the audit methodology, not subjectively. Where the evidence is ambiguous about severity, flags the ambiguity for human CIA judgement rather than defaulting to a rating.
  • Identifies repeat findings — those where the same control weakness appeared in a prior audit cycle — and escalates these automatically regardless of current-cycle severity rating. A Low finding that has appeared three cycles running is a systemic failure and must be escalated, not managed as routine.
  • Drafts the management response template for each finding — the format auditees must use to provide their root-cause explanation, remediation plan, responsible owner, and target closure date — so the engagement report can be issued with management responses already solicited.
Output: Draft audit engagement report — all findings documented in standard format with condition, criteria, cause, effect, and recommendation; severity ratings; repeat finding flags; management response templates; and an executive summary for the Audit Committee report.
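The routing described above can be sketched as a small escalation function. The audience labels follow the severity scale in the text; the repeat-finding rule (escalate regardless of current-cycle rating) is from the description, while the exact escalation target for repeats is an assumption.

```python
AUDIENCE = {
    "Critical": "Board (immediate)",
    "High": "Audit Committee",
    "Medium": "Management letter",
    "Low": "Management letter (optional)",
}

def escalation(severity: str, prior_cycles_seen: int) -> str:
    """Route a finding to its reporting audience. A repeat finding is
    escalated to at least Audit Committee level regardless of its
    current-cycle severity rating (assumed escalation target)."""
    if prior_cycles_seen >= 1 and severity != "Critical":
        return AUDIENCE["High"]
    return AUDIENCE[severity]
```

A Low finding seen in prior cycles is thus routed to the Audit Committee rather than the management letter, matching the "three cycles running is a systemic failure" rule.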

Knowledge base

Audit Universe & Risk Register

All auditable entities with residual risk ratings, last audit date, and open finding count. The primary input for annual plan and engagement prioritisation. Retrieved via RAG — always current.

Prior Audit Finding Log (full history)

Every finding across all past engagements — rating, management response, target date, closure status, and whether the finding recurred. The institutional audit memory. Powers repeat-finding detection.

RBI Inspection Reports & ATRs

All prior RBI inspection observations, management responses, and action-taken reports. The external audit lens on LendingIQ — used to calibrate internal audit scope and regulator prep.

Policy & Process Document Store

All SOPs, credit policy, operations manuals, and delegated authority matrices — the criteria against which audit tests compliance. Retrieved via RAG at engagement opening.

RBI Internal Audit Guidelines

RBI's guidelines on internal audit for NBFCs — minimum coverage requirements, reporting lines, Audit Committee responsibilities. Applied in audit plan and RBI prep functions.

IIA Standards & Audit Methodology

International Internal Audit Standards, LendingIQ's internal audit methodology, and finding severity rating criteria. The professional framework within which all outputs are produced.

Hard guardrails

  • Will not access live systems, query production databases, or draw transaction samples independently. All evidence must be provided by the human audit team after fieldwork. The agent structures and documents what auditors collect — it does not collect evidence itself.
  • Will not present findings directly to the Audit Committee or the Board. All engagement reports are drafts reviewed, edited, and presented by the human CIA. The independence of the function depends on a human being accountable for what the Audit Committee hears.
  • Will not close or suppress a finding based on management pushback. If management disputes a finding, the agent documents the dispute in the management response field. Only the human CIA can make the professional judgement to modify or withdraw a finding after considering the counter-evidence.
  • Will not investigate suspected fraud or serious misconduct. These require forensic techniques, confidential interviews, legal privilege, and human professional judgement that an AI agent cannot provide. Any indicator of fraud must be escalated to the human CIA immediately for referral to legal counsel and the Audit Committee.
  • Will not issue a clean opinion or "no findings" conclusion on an engagement where the evidence package was incomplete. If full testing could not be completed, the report states what was tested, what was not, and why — an incomplete engagement is not the same as a clean engagement.

Known limitations

Audit planning quality depends entirely on risk register accuracy. If the risk register has not been updated since the last planning cycle — new products launched, new processes implemented, old risks retired — the plan will reflect the old risk landscape, not the current one. The risk register must be refreshed as a precondition of annual audit planning. Operational and risk management teams should own risk register updates; the audit planning function consumes it, it does not maintain it.
Scope design cannot substitute for auditor professional judgement in the field. The agent designs the test approach based on documented controls. Undocumented controls, informal workarounds that staff actually use, and management override patterns are invisible to the agent until a human auditor surfaces them during fieldwork. Auditors must be trained to look beyond the evidence package for what is not being shown, not just what is. The scope is a starting framework; field observations should feed back into scope adjustments mid-engagement.
RBI inspection prep is only as good as the action-taken records. If management has marked observations as closed without genuine remediation — or if the remediation evidence in the ATR does not actually demonstrate closure — the readiness brief will overstate LendingIQ's preparedness. The human CIA should independently validate a sample of "closed" observations before accepting management's closure assertions. Do not rely on the agent's readiness brief without that independent verification.
Finding severity ratings are applied against documented criteria. Where the severity framework has gaps — an observation type that the criteria do not clearly classify — the agent will flag the ambiguity rather than force a rating. These edge cases require human CIA judgement to rate and, if recurring, indicate a gap in the methodology that should be addressed. Review the finding severity criteria annually. As the business evolves, new finding types will emerge that the existing framework does not classify cleanly. Update the criteria before they create systematic rating inconsistencies.
The agent has no visibility of informal communications, verbal instructions, or cultural practices that may be the real cause of a control failure. Root-cause analysis in finding documentation will be limited to what the evidence shows — it cannot surface "management tone" or "this is how we've always done it" causes that experienced auditors pick up in interviews. Human auditors must conduct structured interviews as part of fieldwork and document the qualitative observations that physical evidence alone cannot reveal. These interview notes should be part of the evidence package passed to the agent.
Agent Profile · Chief Internal Auditor AI · LendingIQ · Bengaluru · Last updated April 2026 · For internal use
