AI Assistant Handles 20% of Queries and Saves £300K/Month for Fintech Platform

Industry: Fintech

Client

Fast-growing fintech platform for entrepreneurs, serving business clients with banking, accounting, and business management tools.

Goal

Build an AI Assistant as a daily business companion for entrepreneurs on a fast-growing fintech platform, handling support queries, executing banking operations, and delivering personalised financial expertise. The primary objective was to scale client service capacity without proportional headcount growth, while maintaining the high NPS the platform was known for.

Challenges

  • Reducing support load in a high-trust financial context, where redirecting clients away from human agents risked damaging NPS and client relationships.
  • Validating whether an AI assistant could deliver sufficient accuracy and earn client trust in a regulated financial context before committing to full-scale development.
  • Large language models require full conversational context to answer well, but in banking that context contains protected personal and financial data that cannot be passed to external models under strict financial-services regulation.
  • The internal knowledge base was fragmented and not designed for AI use, making it impossible to deliver accurate, up-to-date responses at scale without a structural rebuild.

Solution

In one week, our Fractional Head of AI built a functioning prototype capable of answering questions across a single banking topic area with 87% accuracy. Working with two engineers, the team integrated the prototype into the live product interface to produce a proof of concept that confirmed both technical viability and client willingness to engage with AI-assisted support.

A proprietary data-masking layer was developed and applied before any data reached the LLM, covering more than 20 PII categories, including information clients enter mid-conversation. This preserved full conversational context for the model while ensuring no personal or financial data were transmitted or stored. The masking was complemented by a three-stage automated quality-validation process assessing tone, answer relevance, and context usage.
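As an illustration, a masking layer of this kind can be thought of as a reversible substitution step: PII is swapped for stable placeholder tokens before the conversation reaches the model, then restored in the reply before it is shown to the client. The sketch below assumes regex-based rules; the category names, patterns, and function names are illustrative only, not the platform's actual rules, which covered more than 20 categories.

```python
import re

# Illustrative masking rules only. The real layer covered 20+ PII categories;
# these four patterns and placeholder names are assumptions for the sketch.
MASKING_RULES = {
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"(?:\+44|\b0)\d{9,10}\b"),
}

def mask_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace PII with stable placeholders so the model keeps the full
    conversational context without ever seeing the raw values."""
    mapping: dict[str, str] = {}
    for category, pattern in MASKING_RULES.items():
        def substitute(match: re.Match, category: str = category) -> str:
            token = f"<{category}_{len(mapping) + 1}>"
            mapping[token] = match.group(0)
            return token
        text = pattern.sub(substitute, text)
    return text, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's reply before display."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text
```

Because the placeholders are stable within a conversation, the model can still reason about "the account" or "the recipient" across turns, while the raw identifiers never leave the platform's boundary.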

The assistant was positioned as a voluntary alternative to human support rather than a replacement. Adoption was driven through contextual UX entry points tied to specific client moments. Within 12 months, 20% of all support queries were handled voluntarily by AI, and the assistant’s NPS reached 8.7 compared with a company average of 7.5.

A proprietary machine-readable knowledge system was designed and built with structured fragments optimised for LLM retrieval. A governance methodology was established to ensure every product or process change is reflected in the knowledge base within 24 hours, eliminating information drift as the platform scaled.
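A minimal sketch of what a machine-readable knowledge fragment might look like, with the 24-hour governance rule expressed as a staleness check. The schema and field names are assumptions for illustration, not the platform's actual design.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class KnowledgeFragment:
    """One self-contained, LLM-ready answer unit. Field names are illustrative."""
    fragment_id: str
    topic: str            # e.g. "payments/failed-transfer"
    body: str             # the retrievable answer text
    source_process: str   # owning product or process, for governance
    last_verified: datetime

    def is_stale(self, max_age: timedelta = timedelta(hours=24)) -> bool:
        # Governance rule from the case study: every product or process change
        # must be reflected within 24 hours, so older fragments are flagged
        # for review rather than served as authoritative.
        return datetime.now() - self.last_verified > max_age
```

Structuring knowledge as small, individually owned and timestamped fragments is what makes the 24-hour rule enforceable: staleness can be checked per fragment rather than per document.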

Impact

  • Achieved 60K MAU and 25% day-28 retention within 12 months of launch, driven entirely by voluntary adoption with no forced routing to the AI channel.
  • 20% of all support queries were handled voluntarily by AI in a sector where clients strongly default to human support; the assistant’s NPS reached 8.7 versus a company average of 7.5.
  • Saved £300K per month in avoided hiring costs as the client base scaled, with no proportional increase in support headcount.
  • Winner, International AI Championship, Abu Dhabi, 2025.

Context

A fast-growing fintech platform for entrepreneurs provided banking, accounting, and business management tools for business clients. The organisation needed to scale client service capacity without proportional headcount growth while preserving the high Net Promoter Score (NPS) it was known for. The objective was to build an AI assistant as a daily business companion for entrepreneurs: one that could handle routine support queries, execute authorised banking operations, and deliver personalised financial expertise. Given the regulated financial context and the high-trust relationships with clients, the team treated the assistant as a voluntary alternative to human agents rather than a mandated replacement, and set out to validate technical viability, accuracy, and client willingness to adopt before committing to full-scale development.

Challenges

Key challenges were regulatory and behavioural. LLMs deliver the best answers when they have full conversational context, but that context in banking contains protected personal and financial data that cannot be passed to external models under strict financial-services regulation. The platform’s internal knowledge base was fragmented and had never been designed for AI retrieval, making accurate, up-to-date responses impossible without a structural rebuild. Operationally, clients in this sector default strongly to human support and could not be forced away from agents without risking NPS and long-term customer relationships. Finally, the team needed to validate whether an AI assistant could deliver sufficient accuracy and earn client trust before investing in full development and scaling.

Implementation

The Fractional Head of AI led a hands-on validation and delivery programme. A working prototype was built in one week that answered questions across a single banking topic area with 87% accuracy. Two engineers integrated the prototype into the live product interface to produce a proof of concept that confirmed both technical viability and client willingness to engage with AI-assisted support.

To address data protection, the team designed and built a proprietary data-masking layer applied before any data reached the LLM. It covered 20+ PII categories, including data clients enter themselves mid-conversation, preserving model context while ensuring no personal or financial data was transmitted or stored.

Recognising the fragmented knowledge base, the team created a proprietary machine-readable knowledge system composed of structured fragments optimised for LLM retrieval, and instituted a governance methodology that ensured every product or process change was reflected in the knowledge base within 24 hours, eliminating information drift.

Quality and compliance were enforced via a three-stage automated validation pipeline assessing tone, answer relevance, and correct context usage. The assistant was positioned as an opt-in channel and surfaced through contextual UX entry points tied to specific client moments (for example, transaction issues or account setup steps), encouraging voluntary adoption without forced routing.
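The shape of a three-stage validation pipeline like the one described can be sketched as below. The individual checks are deliberately simplistic placeholders (in practice each stage would more plausibly call a trained classifier or an LLM-as-judge); the function names, banned-term list, and pass criteria are assumptions, not the platform's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ValidationResult:
    stage: str
    passed: bool
    detail: str

def check_tone(answer: str) -> ValidationResult:
    # Placeholder tone policy: reject answers containing banned promissory terms.
    banned = {"guarantee", "risk-free"}  # illustrative policy terms
    hit = next((w for w in banned if w in answer.lower()), None)
    return ValidationResult("tone", hit is None, hit or "ok")

def check_relevance(answer: str, question: str) -> ValidationResult:
    # Placeholder relevance check: crude lexical overlap with the question.
    overlap = set(question.lower().split()) & set(answer.lower().split())
    return ValidationResult("relevance", len(overlap) > 0, f"{len(overlap)} shared terms")

def check_context_usage(answer: str, context: list[str]) -> ValidationResult:
    # Placeholder grounding check: the answer must draw on a retrieved fragment.
    used = any(fragment.lower() in answer.lower() for fragment in context)
    return ValidationResult("context", used, "context used" if used else "no context used")

def validate(answer: str, question: str, context: list[str]) -> list[ValidationResult]:
    """Run all three stages; a reply is released only if every stage passes."""
    return [
        check_tone(answer),
        check_relevance(answer, question),
        check_context_usage(answer, context),
    ]
```

Running every reply through explicit, independently auditable stages is what lets an opt-in assistant be held to the same tone and accuracy bar as human agents.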

Results

Within 12 months of launch the assistant achieved 60K monthly active users and 25% Day-28 retention, driven entirely by voluntary adoption. Twenty percent of all support queries were handled voluntarily by the AI in a sector where clients strongly default to human support. The assistant achieved an NPS of 8.7, outperforming the company average of 7.5 and demonstrating that clients trusted the AI as an alternative channel. Operationally, the platform avoided £300K per month in hiring costs as the client base scaled, with no proportional increase in support headcount required. The solution also received external recognition, winning the International AI Championship in Abu Dhabi in 2025.

Taken together, the combination of rapid prototyping, strict data masking, a machine-readable knowledge layer with tight governance, and voluntary, context-driven adoption validated that an AI assistant can deliver sufficient accuracy and trust in a regulated financial context while materially reducing support load and protecting client relationships.

*Case studies reflect work undertaken by our Heads of AI either during their tenure with Head of AI or in prior roles before they were part of the Head of AI network; they are provided for illustrative purposes only and are based on conversations with our Heads of AI.