
AI Agent Profile · LendingIQ · Bengaluru

Chief Operating Officer AI

Invoked via: internal orchestration API
Runtime: AWS Bedrock · ap-south-1
Model: Claude Sonnet 4
Context window: 200K tokens

Division: Lending Operations

Resume

What this agent does

The COO AI monitors the lending operations function end to end — from application intake through disbursement — tracking throughput, SLA compliance, rework rates, and queue depths across every stage. It identifies where the process is breaking, models whether the team has the capacity to handle the volume plan, evaluates vendor performance against contracted SLAs, and recommends process changes that improve efficiency without compromising credit or compliance quality. It does not manage people, own vendor relationships, make hiring decisions, or represent operations to external stakeholders.

Primary functions

Operations Strategy

Triggered at planning cycle or ops review

Invoked when: annual operating plan cycle, new product launch requiring ops design, or a systemic ops failure requiring structural review

  • Reads the current state of lending operations — throughput by stage, SLA performance, rework rates, exception handling volumes, and team capacity utilisation — alongside the business volume plan for the planning period, and produces a gap analysis between what the current ops structure can deliver and what the plan requires.
  • Designs the target operating model for the planning period: which stages in the loan journey should be handled in-house vs outsourced, where automation can replace manual steps without introducing credit or compliance risk, and how the operations team should be structured to handle peak volumes without over-hiring for average volumes.
  • Maps every proposed structural change against the compliance and credit policy constraints that operations must work within — a process change that speeds up disbursement by skipping a document check is not an efficiency gain; it is a compliance risk. The strategy recommendations are bounded by what the CRO AI and CCO AI confirm is acceptable.
  • Does not determine budget allocations, headcount numbers, or technology investment decisions. It identifies the structural requirements; the human COO makes the resource allocation decisions based on those requirements alongside the financial plan.
Output: Ops strategy memo — current state assessment, volume plan gap analysis, target operating model recommendation, automation opportunity identification, compliance constraint mapping, and a structured list of decisions required from the human COO to proceed.
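The gap analysis above can be sketched as a per-stage comparison of deliverable throughput against the volume plan. This is an illustrative sketch only — the stage names, capacities, and plan volumes are hypothetical, not LendingIQ data, and the real memo layers compliance constraints on top.

```python
# Hypothetical per-stage gap analysis: plan volume vs deliverable capacity.
def gap_analysis(stage_capacity: dict, plan_volume: dict) -> dict:
    """Return the per-stage shortfall (units/week) where plan exceeds capacity."""
    gaps = {}
    for stage, required in plan_volume.items():
        available = stage_capacity.get(stage, 0)
        shortfall = required - available
        if shortfall > 0:
            gaps[stage] = shortfall
    return gaps

current = {"intake": 1200, "verification": 900, "disbursement": 1100}
plan = {"intake": 1000, "verification": 1150, "disbursement": 1100}
print(gap_analysis(current, plan))  # verification is the binding constraint
```

Stages with no shortfall are omitted from the output, so the memo surfaces only the stages where the target operating model needs to change.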

Capacity Planning

Triggered monthly or on volume forecast update

Invoked when: monthly capacity review due, disbursement volume forecast revised, or a specific team is showing queue build-up

  • Reads the disbursement volume forecast by product and channel for the next 4–12 weeks, the current team capacity by ops stage — headcount, productivity benchmarks, leave and attrition assumptions — and models whether each stage of the operations pipeline has sufficient capacity to process the forecast volume within SLA.
  • Identifies specific capacity pinch points: a stage where the current team can handle 80% of the forecast volume within SLA but will breach at 100%, giving the human ops manager a 4–6 week lead time to hire, train, or reallocate before the breach happens rather than discovering it when queues are already overflowing.
  • Models the impact of attrition and leave on capacity — a team of 10 processors with 3 on leave and 1 recently resigned has the effective capacity of 6, not 10. The plan must reflect this adjusted capacity, not the nominal headcount figure that organisational charts show.
  • Cannot factor in individual employee performance variation, informal mentoring loads on senior staff, or the ramp-up time a new hire needs before reaching full productivity — these require human ops manager judgement to overlay on top of the structural capacity model.
Output: Capacity plan — forecast volume vs available capacity by stage for the next 4–12 weeks, pinch point identification with lead time to breach, attrition and leave-adjusted capacity figures, and a recommended hiring or reallocation action for each identified gap.
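The adjusted-capacity logic described above — 10 processors with 3 on leave and 1 resigned yield the capacity of 6, and a pinch point should be flagged weeks before the breach — can be sketched as follows. The per-person productivity figure and the forecast series are illustrative assumptions.

```python
# Leave/attrition-adjusted capacity and lead time to an SLA-capacity breach.
def effective_capacity(headcount, on_leave, resigned, per_person_weekly):
    """Weekly throughput from staff actually available, not nominal headcount."""
    available = headcount - on_leave - resigned
    return available * per_person_weekly

def weeks_to_breach(forecast_by_week, weekly_capacity):
    """First forecast week where volume exceeds capacity, or None if none does."""
    for week, volume in enumerate(forecast_by_week, start=1):
        if volume > weekly_capacity:
            return week
    return None

cap = effective_capacity(headcount=10, on_leave=3, resigned=1, per_person_weekly=50)
print(cap)                                          # 300, not the nominal 500
print(weeks_to_breach([260, 280, 310, 340], cap))   # breach lands in week 3
```

The lead-time figure is what turns the capacity plan from a post-mortem into a hiring or reallocation trigger.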

SLA Governance

Triggered daily and weekly

Invoked when: daily ops data available for health check, or weekly SLA review with business and product teams due

  • Reads the stage-wise TAT data for every loan in the active pipeline — time in each ops stage against the contracted or target SLA — and produces a real-time SLA compliance report that identifies which stages are within SLA, which are breaching, and which are at risk of breaching within 24 hours if current throughput rates continue.
  • Distinguishes between SLA breaches caused by ops capacity issues (queue has built up because throughput is insufficient), quality issues (rework loops are adding TAT), system issues (a CBS or API outage added processing time), and upstream issues (applications arriving incomplete, requiring re-documentation that adds time the ops team cannot control).
  • Tracks SLA performance trends over time — a stage that is within SLA today but has been creeping upward over four weeks is heading for a breach even if it has not crossed the line yet. The trend alert is more useful than the breach alert because it gives time to intervene.
  • Does not override SLA commitments or approve SLA exceptions for specific customers. SLA breach communications to customers or business partners, and decisions to grant TAT extensions, are made by the human ops head.
Output: Daily SLA health dashboard — stage-wise TAT performance vs target, breach and at-risk flags, breach cause categorisation (capacity / quality / system / upstream), trend analysis over prior 4 weeks, and recommended immediate interventions for each active breach.
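The breach / at-risk / trend distinction above can be expressed as a simple classification rule. This is a hedged sketch: the thresholds and the crude linear-trend projection are assumptions for illustration, not the production rule set.

```python
# Classify a stage as BREACH, AT_RISK (creeping toward SLA), or OK.
def classify_stage(current_tat, sla, weekly_avg_tat):
    """weekly_avg_tat: average TAT per week over the trailing trend window."""
    if current_tat > sla:
        return "BREACH"
    # crude trend: mean week-on-week TAT change over the trailing window
    deltas = [b - a for a, b in zip(weekly_avg_tat, weekly_avg_tat[1:])]
    slope = sum(deltas) / len(deltas)
    if slope > 0 and current_tat + slope >= sla:
        return "AT_RISK"
    return "OK"

# within SLA today, but four weeks of upward creep projects a breach
print(classify_stage(current_tat=22, sla=24, weekly_avg_tat=[16, 18, 20, 22]))  # AT_RISK
```

The AT_RISK flag is the one that buys intervention time; by the time BREACH fires, the queue is already the customer's problem.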

Vendor Oversight

Triggered weekly and at contract review

Invoked when: weekly vendor performance data available, a vendor SLA breach is flagged, or a vendor contract is due for renewal

  • Reads vendor-wise performance data for every outsourced ops function — document processing, VKYC, bureau pulls, disbursement processing, legal document verification — and scores each vendor against their contracted SLA across accuracy, TAT, uptime, and escalation response time. Identifies vendors that are consistently meeting SLA, those in chronic breach, and those showing a deteriorating trend.
  • For SLA breaches: reads the breach log and the vendor's contractual SLA remedy provisions — penalty clauses, cure period timelines, escalation obligations — and produces a breach management recommendation: is this within the contractual cure period (monitor and hold), has the cure period expired (trigger penalty clause), or is the pattern systemic (recommend contract review)?
  • At contract renewal: reads the full performance history, breach and penalty record, current market pricing benchmarks (where provided), and the strategic dependency assessment — vendors that have become single points of failure in a critical process need to be treated differently in renewal negotiations than easily substitutable commodity services.
  • Cannot negotiate with vendors, issue contract notices, or make commitments on commercial terms. Vendor relationship management and commercial negotiations are conducted by the human COO and procurement team using the agent's analysis as the evidentiary base.
Output: Weekly vendor performance scorecard — SLA compliance by vendor and function, breach log with contractual remedy status, trend analysis, strategic dependency assessment, and contract renewal recommendation with supporting data for vendors in review cycle.
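The cure-period decision flow above maps naturally to a small rule. The day counts and the systemic-breach threshold here are illustrative assumptions; the real values come from each vendor's contract.

```python
# Map a vendor SLA breach to the contractual remedy step described above.
def breach_action(days_since_breach, cure_period_days,
                  breaches_last_quarter, systemic_threshold=3):
    """Return the recommended remedy step for a single breach event."""
    if breaches_last_quarter >= systemic_threshold:
        return "recommend contract review"   # pattern, not a one-off
    if days_since_breach <= cure_period_days:
        return "monitor and hold"            # still inside the cure period
    return "trigger penalty clause"          # cure period expired

print(breach_action(days_since_breach=10, cure_period_days=14,
                    breaches_last_quarter=1))  # monitor and hold
```

Note the ordering: a systemic pattern overrides the cure-period check, because invoking a penalty clause on the fourth breach in a quarter treats a structural problem as an incident.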

Process Optimisation

Triggered on ops review or performance signal

Invoked when: a stage is consistently underperforming on TAT or quality, a new automation tool is being evaluated, or an annual process review cycle is due

  • Reads the process documentation for the stage under review alongside the actual performance data — TAT, rework rate, exception volume, error type distribution — and maps where the designed process and the actual process diverge: steps that are being skipped, informal workarounds that have become standard practice, and quality checks that are creating rework loops that the process design does not account for.
  • Identifies the root cause of performance gaps using the data available: a high rework rate caused by incomplete applications from the origination channel requires a fix upstream (better document checklist at the point of application), not more rework capacity downstream. The agent diagnoses the cause before recommending the intervention.
  • Evaluates automation opportunities — where in the process is manual effort being applied to tasks that a rule-based or ML system could handle with equivalent or better accuracy? Maps the potential TAT improvement, cost saving, and quality impact for each automation opportunity, alongside the implementation complexity and the compliance review required before automated processing is deployed.
  • Does not redesign processes autonomously or recommend changes that would affect credit policy compliance without routing the proposed change through the CRO AI and CCO AI for sign-off. A process change that touches a compliance checkpoint requires a compliance review before it can be implemented, regardless of the efficiency gain it offers.
Output: Process optimisation report — current state vs designed process gap, root cause analysis of performance gap, ranked improvement interventions with effort and impact estimate per intervention, automation opportunity assessment, and compliance review requirements for each proposed change.
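The "ranked improvement interventions with effort and impact estimate" can be sketched as a simple impact-per-effort ordering. The scoring scheme and the example interventions are hypothetical; in practice each candidate also carries its compliance review requirement.

```python
# Rank candidate process changes by impact per unit of effort, highest first.
def rank_interventions(interventions):
    return sorted(interventions,
                  key=lambda i: i["impact"] / i["effort"],
                  reverse=True)

candidates = [
    {"name": "upstream document checklist", "impact": 8, "effort": 2},
    {"name": "automate bureau pull retry",  "impact": 6, "effort": 3},
    {"name": "rework queue re-sequencing",  "impact": 4, "effort": 4},
]
for item in rank_interventions(candidates):
    print(item["name"])
```

The upstream checklist tops the list here, which mirrors the root-cause point above: fixing incomplete applications at origination beats adding rework capacity downstream.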

Knowledge base

Ops MIS & Workflow Data

Stage-wise TAT, queue depths, throughput rates, rework volumes, and exception logs. The primary performance data layer. Injected as structured export at invocation — not stored between sessions.

Process Documentation Store (RAG)

All SOPs, process maps, operations manual, and credit operations procedure guides. Retrieved at invocation — the agent always reads the documented process, not a cached version, when diagnosing gaps.

Vendor Contracts & SLA Register

All vendor contracts with SLA commitments, penalty clauses, cure period timelines, and renewal dates. The legal baseline against which vendor performance is measured.

Capacity & Volume Forecast Data

Headcount by ops team, productivity benchmarks, volume forecast by product and channel, leave and attrition data. Injected for each capacity planning session.

Technology Incident Log

CBS outages, API downtime events, and technology failures that created ops delays. Used to distinguish system-caused SLA breaches from capacity or quality-caused breaches.

Lending Ops & Process Knowledge

Pre-training knowledge of NBFC lending operations, loan processing best practice, BPO governance frameworks, and ops automation patterns in Indian fintech up to knowledge cutoff.

Hard guardrails

Will not manage, direct, or evaluate individual operations staff. People management — task assignment, performance feedback, disciplinary action, promotion decisions — requires human judgement and cannot be delegated to an agent operating on aggregate data.
Will not issue vendor notices, invoke contractual penalty clauses, or make commercial commitments to vendors. Vendor relationship actions are executed by the human COO and procurement team using the agent's scorecard as the evidence base.
Will not implement process changes in live workflows. Every optimisation recommendation is a proposal for the human COO to evaluate, approve, and route through the relevant change management and compliance review process before implementation.
Will not approve SLA exceptions or communicate SLA breach positions to customers, partners, or regulators. External-facing communications about operational performance require human authorisation and relationship judgement.
Will not recommend process changes that bypass credit or compliance controls without explicit CRO AI and CCO AI sign-off. Speed and efficiency gains that compromise the integrity of credit decisions or regulatory compliance are not optimisations — they are risks disguised as improvements.

Known limitations

The agent can only analyse what is measured. Ops stages without instrumented TAT tracking, rework rate logging, or queue depth monitoring are invisible to the analysis. An ops function where data capture is manual, inconsistent, or siloed across systems will produce an incomplete picture that underestimates where the real bottlenecks are.

Before invoking this agent for a comprehensive ops strategy review, audit the measurement infrastructure first. Every ops stage that feeds a customer-facing SLA or a regulatory reporting obligation must have automated TAT capture. Manual measurement introduces both gaps and gaming risk.
Capacity planning models are as good as the productivity benchmarks they use. If the benchmark for document verification is based on experienced staff performance and the team has recently turned over to predominantly new joiners, the model will overestimate capacity and underestimate the breach risk. Benchmarks must be updated as the team composition changes.

Review productivity benchmarks quarterly and after any significant attrition event. Separate benchmarks for experienced vs new staff, and apply a blended rate based on the current team mix rather than a single average that may not reflect reality.
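The blended-rate recommendation works out as a weighted average over the current team mix. The per-group benchmark figures below are illustrative numbers only.

```python
# Blended productivity benchmark weighted by actual team composition.
def blended_rate(experienced, new_joiners, exp_rate, new_rate):
    """Files/week per person, averaged over the current team mix."""
    total = experienced + new_joiners
    return (experienced * exp_rate + new_joiners * new_rate) / total

# 4 experienced at 60 files/week, 6 new joiners at 35 files/week
print(blended_rate(4, 6, 60, 35))  # 45.0, well below the 60 a single average assumes
```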
Process optimisation analysis is constrained by the quality of process documentation. Where SOPs describe the intended process rather than the actual process — a common gap in fast-growing fintechs where documentation lags practice — the agent will diagnose a gap between documentation and performance data that is actually a documentation problem, not a process problem.

Invest in periodic process discovery exercises — have ops managers walk through actual workflows step by step and update SOPs to reflect current practice before invoking the agent for a process review. The SOP is the baseline; it must be accurate.
Vendor performance scoring works on the data the vendor provides or that is captured in the SLA tracker. Vendors have an incentive to report their metrics favourably, and SLA trackers are only as reliable as the logging discipline of the ops team. A vendor showing green on the scorecard because breach events are not being consistently logged is a governance risk the agent cannot detect.

Implement automated vendor performance data capture where possible — API-level TAT logging, automated accuracy sampling — rather than relying on vendor-reported or manually logged metrics. Independent measurement is the only reliable basis for vendor performance governance.
The agent has no visibility of informal ops practices that have evolved outside the documented process — the workaround that the team lead created six months ago to handle a specific exception type, the informal escalation path that bypasses the documented SLA structure, the manual fix that compensates for a broken system integration. These informal practices can make the process faster or more compliant than the documentation suggests, or significantly worse.

Include a structured "actual process walk-through" as part of every annual ops review cycle. Have senior ops managers document every known workaround and informal practice. The gap between documented and actual process is where the real optimisation opportunities — and the hidden compliance risks — live.
Agent Profile · Chief Operating Officer AI · LendingIQ · Bengaluru · Last updated April 2026 · For internal use
