Vadim Katcherovski

In last week’s post, I talked about our sales process automation. Today, we’re switching to the customer success department.

Last year, we lost an account we thought was happy. Usage looked fine in our analytics. The quarterly check-in had gone well. The CSM had no reason to worry. Then the customer didn’t renew. When we dug into what happened, the signals had been there for months. Usage was slowly declining. Concerning support tickets were piling up. The champion who loved our product had quietly moved to a different role. All of that information existed in our systems. None of it was connected.

That account was the wake-up call. We realized our CS team didn’t have a single source of truth for all customer data. Product usage lived in one system. Support tickets in another. Contract and billing data in a third. Live chat in yet another.

So we built a system that scores every account, every night, using data from all available sources. It runs at 2 AM. By the time the CS team opens their dashboard in the morning, every account has a score from 0 to 100, a risk tier, and a comparison to yesterday. In its first week, it flagged three accounts our team thought were fine. They weren’t.

The Problem with “How’s Everything Going?”

Often, customer success is reactive. A renewal comes up. The CSM schedules a call. They ask “how’s everything going?” The customer says “fine.” The renewal closes.

Then three months later, usage drops. Support tickets spike. The champion leaves. And nobody noticed because all the signals were in different systems.

Your CRM knows about the contract. Your product analytics know about usage. Your support tool knows about tickets. Your billing system knows about revenue. But no single person, and no single screen, has the full picture. Everyone has a slice. Nobody has the whole story.

This is the context problem we keep coming back to in this series. And customer success is where it hits hardest, because by the time a CSM notices the warning signs in their slice of data, the customer has already mentally checked out.

Why Not Just Buy a CS Platform?

There are good customer success platforms out there. Gainsight is the enterprise standard. Totango and ChurnZero serve mid-market well. Vitally is popular with startups. Any of them could give us health scores, playbooks, and dashboards out of the box.

We evaluated them. Here’s why we built our own.

Integration depth. Our data lives in specific places: an in-house tool with billing, contracts, and account metadata. Amplitude for product usage. Zendesk and Intercom for support. HubSpot as a CRM. Every CS platform promises “integrations,” but in practice you’re limited to what their connectors support. We needed to query Amplitude’s segmentation API with account-level aliases, pull Zendesk tickets in bulk and aggregate by email domain, and do a two-pass fetch on Intercom conversations. No off-the-shelf connector does that without custom work anyway.

The scoring model is the product. Generic health scores based on login frequency and NPS don’t tell us much. We needed a model tuned to our specific churn patterns: licence utilisation thresholds calibrated against our actual churned accounts, feature adoption across our specific 12 feature categories, support penalties weighted by our ticket severity patterns. That model is a few hundred lines of code. It’s the easiest part to build and the hardest part to buy, because no vendor knows what predicts churn in your business better than you do.

The build took a few weeks. It costs us a few dollars a month to run. And we own every assumption in the scoring model, which means we can change it tomorrow if we learn something new.

What We Built

We call it CS Copilot. It’s an automated pipeline that runs every night at 2 AM. It pulls data from multiple sources, computes a health score for every account, and writes the results to a dashboard our CS team checks every morning.
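To make the shape concrete, here’s a minimal sketch of what a nightly pipeline like this can look like. Every name here is hypothetical; the real system pulls from billing, Amplitude, Zendesk, and Intercom rather than a stub, and the scoring placeholder stands in for the model described below.

```python
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    account_id: str
    utilisation: float   # active users / paid licences
    trend_pct: float     # current 30-day window vs. prior 30-day window
    features_used: int   # of the 12 tracked feature categories

def fetch_accounts():
    # Stand-in for the pulls from billing, Amplitude, Zendesk, Intercom.
    return [AccountSnapshot("acme", utilisation=0.72,
                            trend_pct=-12.0, features_used=6)]

def score(snapshot: AccountSnapshot) -> int:
    # Placeholder: the real model combines the three signals below.
    return round(100 * snapshot.utilisation)

def run_nightly():
    # One score per account; in production this is written to the
    # dashboard store before the CS team wakes up.
    return {s.account_id: score(s) for s in fetch_accounts()}

print(run_nightly())
```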

Here’s what goes into the score.

Three Signals, One Score

Signal 1: Licence utilisation

This is the simplest and most important metric. How many people are actually using the product compared to how many licences the customer is paying for?

An account using 80% or more of its licences gets full points. Below 20%, it gets zero.

The thresholds aren’t arbitrary. We looked at our churned accounts from the past two years. Every single one had utilisation below 40% in the three months before cancellation.
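As a sketch, the utilisation signal might be scored like this. The 80% and 20% thresholds come from the text above; the linear ramp between them and the 50-point maximum are illustrative assumptions, not the production weights.

```python
def utilisation_points(active_users: int, paid_licences: int,
                       max_points: int = 50) -> float:
    """Score licence utilisation: >=80% earns full points, <=20% earns
    zero. The ramp in between and max_points are assumptions."""
    if paid_licences <= 0:
        return 0.0  # missing data: don't pretend we know
    u = active_users / paid_licences
    if u >= 0.8:
        return float(max_points)
    if u <= 0.2:
        return 0.0
    return max_points * (u - 0.2) / 0.6

print(utilisation_points(40, 100))  # the 40% danger zone from churned accounts
```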

Signal 2: Activity trend

Utilisation tells you where an account is. Trend tells you where it’s going. We compare the current 30-day window to the prior 30-day window and calculate the percentage change.

There are three scoring tiers for the trend: Growing (a 10%+ increase), Stable (within 10% either way), and Declining (a 10-30% drop). A drop of more than 30% gets zero points. That last category is a five-alarm fire.

This signal has caught problems that raw utilisation missed. An account can have 70% utilisation and still be in trouble if that number was 90% two months ago. The trend tells the story the snapshot can’t.
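The tiers above can be sketched as a simple classifier. The cut-offs are from the text; the tier labels and the missing-baseline case are illustrative.

```python
def trend_tier(current_30d: float, prior_30d: float) -> str:
    """Classify the activity trend: Growing (>=10% up), Stable (within
    10%), Declining (10-30% drop), and anything past a 30% drop is the
    five-alarm-fire tier that scores zero."""
    if prior_30d <= 0:
        return "no-baseline"  # hypothetical label for missing data
    change = (current_30d - prior_30d) / prior_30d * 100
    if change >= 10:
        return "growing"
    if change > -10:
        return "stable"
    if change >= -30:
        return "declining"
    return "critical-decline"

# The 70%-now-vs-90%-two-months-ago account from the post:
print(trend_tier(70, 90))
```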

Signal 3: Feature adoption (up to 15 points)

We track usage of 12 feature categories in our product. The score reflects what percentage of available features an account is actively using.

Using 9 or more categories gets the full 15 points. Using fewer than 3 gets zero.

Why this matters: customers who use only one or two features are fragile. Their entire relationship with the product hangs on a single workflow. If a competitor does that one thing better, there’s nothing else keeping them. Customers who use 8-10 features are sticky. Switching costs are real. They’ve woven the product into how they operate.
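The adoption thresholds can be sketched as follows. The 15-point cap, the 9-category ceiling, and the 3-category floor are from the text; the proportional middle band is an assumption.

```python
def feature_adoption_points(categories_used: int,
                            total_categories: int = 12) -> int:
    """Up to 15 points: 9+ of the 12 categories earns the full 15,
    fewer than 3 earns zero. The middle band is assumed proportional."""
    if categories_used >= 9:
        return 15
    if categories_used < 3:
        return 0
    return round(15 * categories_used / total_categories)

print(feature_adoption_points(4))
```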

The Penalty Layer

A score based only on usage would miss a critical signal: support friction. An account can have 80% utilisation and still be furious if they’ve filed 11 tickets this month and three of them are urgent.

So we apply penalties from two support systems.

We pull data from both our email-based support (Zendesk) and chat support (Intercom) and apply penalties based on three factors: how many tickets were filed recently, how many are still unresolved, and whether any of them are high severity. These stack, but the combined penalty caps at 20 points so support friction can’t wipe out an otherwise healthy account.

The chat channel also gives a bonus. If conversations are getting resolved quickly, if our AI is handling routine questions successfully, and if the customer is actively engaged, the account can earn up to 10 bonus points.
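Here is one way the penalty layer and chat bonus could be combined. The post only fixes the caps (-20 for penalties, +10 for the bonus); every per-factor weight below is an illustrative assumption.

```python
def support_adjustment(tickets_30d: int, unresolved: int, urgent: int,
                       fast_resolution: bool, ai_deflection: bool,
                       engaged: bool) -> int:
    """Net support adjustment: stacked penalties capped at 20 points,
    chat bonus capped at 10 points. Weights are assumptions."""
    penalty = 0
    penalty += min(tickets_30d, 10)   # volume (assumed 1 pt per ticket)
    penalty += 2 * unresolved         # assumed 2 pts per open ticket
    penalty += 5 * urgent             # assumed 5 pts per urgent ticket
    penalty = min(penalty, 20)        # cap from the post

    bonus = 0
    if fast_resolution:
        bonus += 4                    # assumed split of the +10 cap
    if ai_deflection:
        bonus += 3
    if engaged:
        bonus += 3
    return bonus - penalty

# The "11 tickets, 3 of them urgent" account from the post hits the cap:
print(support_adjustment(11, unresolved=2, urgent=3,
                         fast_resolution=False, ai_deflection=False,
                         engaged=False))
```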

The Math Under the Hood

The base score adds up to 100, penalties and bonuses shift it from there, and we assign a tier:

  • Healthy (80+): Green. No action needed.
  • Watch (60-79): Yellow. Keep an eye on it.
  • At Risk (40-59): Orange. Reach out.
  • Critical (below 40): Red. This account needs intervention now.
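The tier mapping is simple enough to show directly, using the cut-offs listed above:

```python
def health_tier(score: int) -> str:
    """Map a 0-100 health score to the four tiers."""
    if score >= 80:
        return "Healthy"
    if score >= 60:
        return "Watch"
    if score >= 40:
        return "At Risk"
    return "Critical"

print([health_tier(s) for s in (85, 65, 45, 20)])
```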

What the Dashboard Shows

The CS team doesn’t see the math. They see a portfolio view.

Four cards at the top: total accounts, portfolio ARR, average health score, and number of at-risk accounts. Below that, a “Needs Review” section showing the 10 highest-ARR accounts that are at risk or critical. Because not all at-risk accounts are equal. A critical account worth 200K/year needs attention before a critical account worth 5K/year.

The main table shows every account with a colour-coded health bar, the score, the daily delta (did this account go up or down since yesterday?), the owner, ARR, and renewal date. Renewal dates are colour-coded too: red if expired, orange if within 30 days, yellow if within 90 days.
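The renewal colour coding is a small lookup. The red/orange/yellow bands are from the text; the "none" label for dates beyond 90 days is an assumption.

```python
from datetime import date, timedelta

def renewal_colour(renewal: date, today: date) -> str:
    """Red if expired, orange within 30 days, yellow within 90 days."""
    days_left = (renewal - today).days
    if days_left < 0:
        return "red"
    if days_left <= 30:
        return "orange"
    if days_left <= 90:
        return "yellow"
    return "none"

today = date(2025, 1, 15)
print(renewal_colour(today + timedelta(days=45), today))
```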

Click on any account and a detail panel slides in showing the full score breakdown: how many points from utilisation, trend, and feature adoption. Which of the 12 feature categories are active. What the support penalties are. A 7-day score history sparkline. Everything the CSM needs to understand why an account is where it is, without asking anyone.

CSMs see only their own accounts. Supervisors see everything.

What Changed for the Team

The daily standup changed. Instead of “how are your accounts?” it became “who moved into Watch or Critical since yesterday?” The delta column turned vague check-ins into specific conversations.

Renewal prep changed. Before a renewal call, the CSM pulls up the detail panel and sees the full picture: usage trend, feature adoption, support friction, all in one place. They walk into the call knowing whether the customer is actually happy or just politely disengaged.

And the CS leader’s job changed. Instead of asking each CSM for subjective account updates, they open the portfolio view, sort by at-risk, sort by ARR, and know exactly where to focus. The dashboard didn’t replace judgment. It replaced the data gathering that used to eat half the meeting.

Where This Is Going

The current system is good at answering “is this account healthy right now?” It’s not yet good at answering “why is this customer frustrated?” or “what should the CSM do about it?”

That’s the next layer. Here’s what we’re building toward.

Support sentiment. The current system counts tickets and checks severity. It doesn’t read them. An account with 3 tickets that all say “this is broken and I’m losing patience” is in a different place than one with 3 tickets that say “quick question about a new feature.” We want the scoring model to know the difference.

NPS and CSAT integration. Survey scores are lagging indicators, but they’re still useful. A customer who gives you a 6 on NPS while their usage is stable is telling you something their behaviour hasn’t shown yet. We want to fold survey responses into the score as another signal.

Proactive alerts. Right now, the CSM opens the dashboard and looks. The next step is push notifications: when an account drops two tiers in a week, when a renewal is 30 days out and the score is declining, when sentiment turns negative across multiple channels at once. The system should find the CSM, not the other way around.

None of this requires a fundamentally different architecture. The data pipeline is built. The scoring model is modular. Each new signal is another input that adjusts the score. That’s the advantage of building your own: the hardest part (the plumbing) is done. Adding intelligence on top is the easier, more interesting work.

Try This at Your Company

You don’t need our exact stack. Here’s the framework:

  1. Pick your three signals. Usage, trend, and support friction cover most SaaS businesses. The specific metrics will depend on your product, but the categories are universal.
  2. Weight by what predicts churn. Look at your churned accounts from the past year. What did they have in common 90 days before cancellation? That’s your scoring model.
  3. Include support data. Usage alone isn’t enough. A customer who uses your product daily and files urgent tickets every week is not healthy. They’re trapped.
  4. Handle missing data gracefully. Your scoring model will encounter nulls, empty fields, and mismatched identifiers. Design for it. A score that says “incomplete data” is more useful than a score that says “critical” because someone forgot to enter a licence count.
  5. Show the “why,” not just the number. A score of 45 means nothing without context. Show the breakdown. Show what’s dragging it down. Make the next action obvious.
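Point 4 is worth a minimal sketch. The field names and return format here are hypothetical; the idea is just that a missing input produces an explicit "incomplete data" result instead of a spuriously low score.

```python
from typing import Optional

def account_health(active_users: Optional[int],
                   paid_licences: Optional[int]) -> str:
    """A score that says "incomplete data" beats one that says
    "critical" because a licence count was never entered."""
    if active_users is None or paid_licences is None or paid_licences == 0:
        return "incomplete data"
    utilisation = min(active_users / paid_licences, 1.0)
    return f"score: {round(100 * utilisation)}"

print(account_health(40, None))   # licence count never entered
print(account_health(40, 100))
```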

What’s Next

Next week, we’re stepping back from function-specific stories to talk about the context problem that runs through everything we’ve built. Why 80% of the work in AI automation is getting the data right. The scattered systems, the identifier mismatches, the API rate limits, the data quality issues that nobody warns you about. It’s the least glamorous topic in this series and the most important one.

Subscribe to get weekly posts delivered to your inbox. No spam, no hype. Just the real story from inside the build.


This is Part 4 of “Becoming AI-Native,” a weekly series from the Birdview PSA team on our AI transformation journey. Follow along here on Birdview’s blog or on Vadim’s LinkedIn.

