Use case · Chatbot automation

A chatbot that knows the answer. Or the human who does.

A custom assistant trained on your knowledge base, product catalog, and policies. It answers what it can, hands over what it can't, and learns from what it gets wrong. Powered by GPT-4 or Claude — you own the prompts, the data, and the deployment.

2–6 wks · Build, fixed price
41% · tickets deflected
7 h → 4 min · first response, where bot answered
90 days · Stabilisation included
What's in the build

Six pieces, one workflow.

An assistant that knows your prices, your policies, your products — and routes to a human when it should.

01 · Knowledge ingestion

Trained on your business, not the internet.

Ingest your KB, FAQs, policies, product catalog, past support tickets. The bot answers in your voice, with your facts. Updates as the source updates.

  • KB + FAQ ingestion
  • Catalog & price feed
  • Past-ticket training
  • Versioned source-of-truth
02 · Channel deployment

On the site, in WhatsApp, in the inbox.

Deploy to website chat, WhatsApp Business, email auto-reply, Instagram DM. One brain, every channel. Handover handled differently per channel.

  • Website widget
  • WhatsApp Business API
  • Email auto-reply
  • Instagram DM
03 · Human handover

When the bot doesn't know, the human does.

Confidence threshold per topic; below it, hand to a human with the conversation context attached. Customer doesn't repeat themselves; the agent picks up clean.

  • Confidence-threshold handover
  • Context attached on handover
  • Per-topic thresholds
  • VIP auto-handover
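A minimal sketch of how per-topic confidence routing could work. The topic names, threshold values, and the `route` helper are all hypothetical, and a real deployment would tune thresholds per topic from handover data:

```python
from dataclasses import dataclass

# Hypothetical per-topic thresholds; real values are tuned per deployment.
THRESHOLDS = {"refunds": 0.90, "order_status": 0.75, "default": 0.80}

@dataclass
class Reply:
    text: str
    topic: str
    confidence: float  # classifier or model confidence, 0..1

def route(reply: Reply, is_vip: bool = False) -> str:
    """Return 'bot' to send the draft reply, or 'human' to hand over
    with the conversation context attached."""
    if is_vip:
        return "human"  # VIP auto-handover, regardless of confidence
    threshold = THRESHOLDS.get(reply.topic, THRESHOLDS["default"])
    return "bot" if reply.confidence >= threshold else "human"

print(route(Reply("Your order ships Friday.", "order_status", 0.82)))  # bot
print(route(Reply("Refund approved.", "refunds", 0.82)))               # human
```

Note the same 0.82 confidence routes differently by topic: order-status answers are low-risk, refunds are not.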
04 · Action-taking

Beyond Q&A — the bot does the thing.

Order status, refund initiation, booking change, login reset — the bot calls your APIs, not the customer. Every action audit-logged. Permissions per topic.

  • Order/booking lookup
  • Refund initiation (with limits)
  • Login reset
  • Per-action audit log
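The permission-plus-audit pattern can be sketched like this. The `PERMISSIONS` table, the `take_action` helper, and the refund limit are illustrative placeholders, not the actual implementation:

```python
import time

# Hypothetical permission table: which actions the bot may take, with limits.
PERMISSIONS = {
    "order_lookup": {"allowed": True},
    "refund": {"allowed": True, "max_amount": 50.00},
    "login_reset": {"allowed": True},
}

AUDIT_LOG = []

def take_action(action: str, params: dict) -> bool:
    """Check permissions, log the attempt either way, then (if allowed)
    call the backing API."""
    rule = PERMISSIONS.get(action, {"allowed": False})
    allowed = rule.get("allowed", False)
    if action == "refund" and allowed:
        allowed = params.get("amount", 0) <= rule["max_amount"]
    AUDIT_LOG.append({"ts": time.time(), "action": action,
                      "params": params, "allowed": allowed})
    # If allowed, this is where the bot would call your API instead of
    # asking the customer to do it themselves.
    return allowed

take_action("refund", {"order": "1234", "amount": 30.0})   # True
take_action("refund", {"order": "1234", "amount": 500.0})  # False -> handover
```

Every attempt lands in the audit log, including the denied ones — a denied action is exactly the signal that should trigger a human handover.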
05 · Continuous improvement

Wrong answers become training data.

Every handover, every thumbs-down, every correction logged. Weekly review queue surfaces patterns. The bot doesn't fail twice the same way.

  • Handover analytics
  • Thumbs-down review queue
  • Correction-as-training
  • Weekly retraining
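The weekly review queue described above can be sketched as a simple frequency pass over the feedback log. The topic names and the cutoff of two failures are assumptions for illustration:

```python
from collections import Counter

# Hypothetical feedback entries: (topic, signal) from handovers,
# thumbs-downs, and agent corrections.
feedback = [
    ("returns_policy", "thumbs_down"),
    ("returns_policy", "handover"),
    ("shipping_intl", "thumbs_down"),
    ("returns_policy", "thumbs_down"),
]

# Weekly review: surface topics that fail repeatedly, so a correction
# gets written once and fed back as training data.
counts = Counter(topic for topic, _ in feedback)
review_queue = [topic for topic, n in counts.most_common() if n >= 2]
print(review_queue)  # ['returns_policy']
```

The point of the cutoff is that one-off failures are noise; repeated failures on the same topic are the bot's next training example.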
06 · Operator console

What it answered, what it deflected, what it cost.

Live: deflection rate (tickets the bot resolved without a human), CSAT, cost per resolution, top topics. ROI visible from day one.

  • Deflection-rate dashboard
  • CSAT after bot interaction
  • Cost-per-resolution
  • Top-topic trends
Sample engagement

The DTC brand that automated 41% of support without anyone noticing.

200 tickets a day. Three CS agents. A 7-hour first-response time.

A homewares DTC brand was drowning in 'where's my order?' tickets. We trained a bot on their Shopify, their KB, and their last 12 months of tickets. Deployed to website, email, and Instagram. Two months in: 41% deflection rate, first-response time from 7 hrs to 4 minutes (where the bot answered), CSAT up two points net.

How we measure: deflection rate is tickets resolved without human escalation (n = 6,000 across 8 weeks post-launch); first-response time is measured from inbound message to first response, bot or human; CSAT comes from a post-interaction 5-point survey.
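Under that definition, the deflection rate is just resolved-without-escalation over total tickets. The bot-resolved count below is a hypothetical figure consistent with the quoted 41% and n = 6,000:

```python
# Deflection rate: tickets resolved without human escalation / total tickets.
tickets_total = 6000      # n over the 8-week post-launch window
resolved_by_bot = 2460    # hypothetical count consistent with the quoted 41%

deflection_rate = resolved_by_bot / tickets_total
print(f"{deflection_rate:.0%}")  # 41%
```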

41% · tickets deflected
7 h → 4 min · first response, where bot answered
+2 pts · CSAT, net
Industries this is built for

Where this build earns its rent.

Most-relevant verticals — but the same shape works for adjacent ones.

A bot that earns its rent.

We ingest your KB, train on your data, deploy where it earns.