From Manual Diligence to AI-Accelerated ODD: Cutting Review Cycles by 40–50% and Unlocking a Year of Capacity

Case study 2026-01-21

ODD Solutions

Client Profile

A global investment management firm conducting operational due diligence (ODD) to support manager onboarding, investment approvals, and ongoing oversight. The team evaluates on average 15 new funds/managers per year and is preparing for significantly expanded coverage as its universe continues to grow.

Business Challenge

The client had already adopted CENTRL as a platform (moving away from fully manual workflows), but diligence timelines remained constrained by:

  • Manual review burden: Analysts still had to read extensive free-text responses and attachments to identify exceptions, validate controls, and extract key facts.

  • Capacity limits: The team lacked bandwidth and sometimes relied on internal providers to support the diligence process.

  • Time-sensitive investment decisions: When an investment window is short, delays in ODD approvals can slow execution.

  • Scale pressure: With 300+ funds expected to be in scope in the near future, the team needed a workflow that could scale without adding proportional headcount.

The goal shifted from “digitizing diligence” to leveraging AI to compress cycle times, reduce manual work, and improve throughput.

Solution

The team expanded the use of CENTRL’s AI modules to automate the highest-friction parts of the diligence workflow:

  1. Smart Evaluate: “A Virtual Analyst” for Review

    Instead of manually reviewing every response and document, the team configured criteria with Smart Evaluate to automatically surface risks, inconsistencies, and areas requiring scrutiny, effectively creating a virtual analyst to triage what matters.

  2. Smart Populate: Auto-Filling from Manager DDQs and Disparate Documents

    To reduce repetitive work, the client updated their DDQs toward more targeted yes/no and structured questions aligned to typical manager DDQs. They then used CENTRL’s Smart Populate module to ingest the manager’s DDQ and automatically fill what could be sourced, sending only the remaining questionnaire gaps back to the manager.

    • Example outcome: out of ~200 questions, CENTRL auto-populated ~80–110, leaving ~90–120 for the manager to complete; the remaining questions covered unique per-manager data not typically found in standardized answers

    • Net: roughly half or more of the targeted questions are completed automatically before the manager ever touches the questionnaire

  3. Smart Summary: Faster Write-Ups and Internal Prep

    Smart Summary reduced the time spent creating diligence summaries and narrative write-ups. The client estimated a drop from:

    • ~8 hours to ~4 hours for summary work

    • Three weeks to a week and a half or less to produce a report

    That amounts to approximately a 50% efficiency gain.

  4. Research Assistant: Instant Answers Across Large Documents

    For ad-hoc diligence questions (“Are there any exceptions noted?”), CENTRL’s Research Assistant enabled the team to query directly against source documents and instantly see what the system found and where it came from, replacing hours of searching through long PDFs and financial statements.

    • Previously: answering ~15 targeted questions could take 3 hours to half a day, depending on document size

    • With very large supporting document packs (one cited example: ~600 pages across prospectus, financial statements, and ad hoc documents), the Research Assistant significantly reduced “find the needle” time and helped extract relevant passages and example sentences. What used to take the team up to a week of noting salient points is now accomplished in a day or less using AI.

  5. Additionally, managers responding via CENTRL’s Response360 to the now-smaller questionnaire cut their time spent by up to 50%, a dramatic reduction for all parties.

Results & Impact

  1. Faster Manager DDQ Completion

    By auto-populating the majority of answers and routing only the remainder to the manager:

    • DDQs completed faster end-to-end:

      • Publishing and due dates reduced from 10 weeks to 6 weeks (with some managers answering under 6 weeks)
    • 3–4 weeks saved per manager in the diligence cycle

    • Managers also benefited: because answers were pulled from their own documents, they only had to address the remaining questions, reducing the “long questionnaire” pushback.

    • Managers using Response360 to complete the streamlined questionnaire cut what had been roughly 8 weeks of work down to about 4, a dramatic reduction for all parties.

  2. Cycle Time Compression for Investment Approvals

    When the team needs to invest quickly and ODD approval is gating:

    • The client expects to reduce timelines by ~50%, cutting what used to be 2–3 months down to 4–6 weeks in many cases, thereby supporting faster decision-making and earlier engagement.

  3. Compounding Capacity Gains at Scale

    The team highlighted the compounding effect:

    • If one diligence workflow saves 3–4 weeks per entity, and that applies across the ~15 entities reviewed in a year, that’s roughly 45–60 weeks recovered, effectively nearly a full year of efficiency regained across the program.

  4. Reporting Velocity Significantly Increased

    Report drafting historically could take days, and in some cases up to ~3 weeks under pressure (especially when layering in policies, documentation review, and narrative sections). With CENTRL-supported drafting:

    • The client described being able to complete reporting work in 7-10 days in scenarios where it previously required a dedicated multi-week effort

    • More importantly, they noted they could now complete multiple reports in the same time window (e.g., “3 or 4” reports vs. “1”), because the system handles the monotony and baseline content preparation.

What Still Requires Human Judgment (and Why That’s the Point)

The client emphasized that AI doesn’t replace analyst work; it removes the monotony so analysts can spend time on what only humans can do:

  • Reading between the lines

  • Challenging what’s missing from DDQs (e.g., leadership ownership, cross-team interactions)

  • Validating insights onsite and through interviews

  • Adding opinion, context, and ratings where nuance matters

In their words, the platform provides a strong baseline so the team can focus on deeper investigative work rather than “jigsaw-puzzling” information together.

What’s Next: The “Final Mile” with New AI Functionality

The client is expanding into the next phase of AI capabilities:

AI Reporting

A customizable reporting layer that enables the team to build standardized, repeatable report outputs while tailoring structure and wording by vehicle, strategy, or internal requirements, moving from assisted drafting to report creation at scale.

Research Assistant + Chat-Based Analysis

A broader chat interface that lets teams ask questions across any diligence data elements, expanding use cases beyond DDQs into comparative analysis, ongoing monitoring, and strategic partner oversight.

The team specifically referenced future workflows for side-by-side comparisons across managers over multiple years, supporting both:

  • Ongoing confidence and relationship monitoring

  • Strategic partner identification and evaluation (e.g., comparing “strategic partners” vs. the broader manager set)

Summary

By expanding from platform usage into AI-powered automation with Smart Evaluate, Smart Populate, Smart Summary, and CENTRL’s Research Assistant, the client compressed diligence timelines by up to 40–50%, cut summary work by ~50% (with plans to push further), saved 3–4 weeks per manager, and unlocked compounding capacity gains that equate to nearly a full year of efficiency at scale, while preserving analyst time for judgment-heavy work that AI shouldn’t replace.
