The prospect was halfway through a discovery call last quarter, asking the kind of question that used to slow our deals down. “How long would it take to integrate this with our ERP instance?”
Four months ago, the honest answer was “we’ll get back to you in a day or two.” Our Business Analyst (BA) would read the call notes, dig through the Swagger spec for the endpoints involved, pull our library of past integration patterns, write a draft estimate, hand it to a developer for a sanity check, and send it back to the Account Executive (AE). Half a day of focused work per estimate, longer when a question needed clarifying. Multiply by the volume of estimates flowing in from sales calls and you got a queue. Some estimates sat for a day. A few sat for three or four. The deals at the top of the queue moved fast. The ones in the middle lost some of their momentum.
Last quarter, that prospect on the call got an answer in under five minutes. The AE typed /estimate in Slack with the integration ask. The agent did the work the BA used to do. A draft estimate landed back in the channel before the discovery call was over. The BA reviewed it after the call, adjusted two line items, and signed off. The customer had a written, defensible quote in their inbox the same afternoon.
The Old Workflow
The integration estimate had been a manual, multi-step process for years. It was nobody’s full-time job, but it absorbed serious time from people whose actual jobs were elsewhere.
The BA was the central piece. Given a request like “estimate the effort to sync deals from HubSpot into Birdview projects,” she’d open three or four tabs. Our internal Swagger spec for the Birdview API. The third-party API docs for the source system. The library where we track standard patterns for common integration types and the hours each tends to take. The CRM notes from the discovery call. She’d produce a draft estimate covering the endpoints involved, the field mappings, the auth setup, and the typical surprise areas like rate limits, pagination, and webhook reliability. She’d hand the draft to a developer for review. The developer would catch the edge cases the BA hadn’t seen. The AE would get the final estimate back, usually a day or two after the discovery call.
The whole loop was four to eight hours of total people-time per estimate, depending on how exotic the integration was. None of it was difficult work. Most of it was lookup and pattern matching. The actual creative judgment was maybe 15 minutes of the BA’s day, surrounded by hours of opening tabs.
What We Built
A Slack slash command, /estimate, plus an Azure Function behind it, plus a Claude agent with four tools.
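The wiring is thin. Slack delivers a slash command as a form-encoded POST and expects an acknowledgment within three seconds, so the function acks immediately and lets the agent reply later via the payload’s response_url. A minimal sketch of that entry point, with hypothetical helper names (the real function differs):

```python
from urllib.parse import parse_qs

def parse_slash_command(body: str) -> dict:
    """Extract the fields we need from Slack's form-encoded slash-command payload."""
    fields = parse_qs(body)
    return {
        "text": fields.get("text", [""])[0],                  # the integration ask
        "channel_id": fields.get("channel_id", [""])[0],
        "response_url": fields.get("response_url", [""])[0],  # where to post the result later
    }

def handle_estimate(body: str) -> dict:
    """Acknowledge within Slack's 3-second window; the agent run happens
    out of band and posts its draft back via response_url."""
    req = parse_slash_command(body)
    # queue_agent_run(req)  -- hypothetical: enqueue the Claude agent job
    return {
        "response_type": "ephemeral",
        "text": f"Working on an estimate for: {req['text']}",
    }
```

The immediate-ack-plus-deferred-reply split is what lets a multi-minute agent run live behind a chat command that must respond instantly.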
The four tools were chosen by tracing what the BA actually did in front of her screen.
Web search. When the request involves a third-party system the agent hasn’t seen before, it can read the public API docs and figure out the relevant capabilities.
Birdview Swagger reader. The agent queries our own API spec to identify the endpoints required for a given integration, the auth flow, and the data model on our side.
Google Sheet reader. The “Estimation calculations” sheet has years of past integrations with their hour breakdowns. The agent reads the sheet directly and grounds its estimate in patterns we’ve actually seen, not in generic guesses about how long an integration “should” take.
Ask-user. The agent has explicit permission to come back to the AE in Slack and ask a clarifying question when the request is ambiguous. “Is this a one-way or two-way sync?” “Do you need historical data backfilled or just new records?” The first version of the agent didn’t have this tool, and the estimates were noisier because of it.
The output is two documents. A short Slack summary in the channel where the request came from, with the headline number and a one-paragraph rationale. And a Google Doc in our Shared Drive with the full breakdown: endpoints, fields, hours by phase, assumptions, and the open questions the BA still needs to answer. The Google Doc is the artifact the BA reviews and the AE forwards. The Slack summary is the artifact that lets a deal keep moving in real time.
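The four tools above can be sketched as tool declarations in the Anthropic Messages API format. The names and schemas here are hypothetical simplifications, not our production definitions, but the shape is the same:

```python
# Hypothetical tool declarations in the Anthropic Messages API "tools" format.
# Real schemas are richer; this shows the shape of the four tools only.
TOOLS = [
    {
        "name": "web_search",
        "description": "Read public API docs for a third-party system.",
        "input_schema": {"type": "object",
                         "properties": {"query": {"type": "string"}},
                         "required": ["query"]},
    },
    {
        "name": "read_birdview_swagger",
        "description": "Look up endpoints, auth flow, and data model in the Birdview API spec.",
        "input_schema": {"type": "object",
                         "properties": {"path_filter": {"type": "string"}},
                         "required": []},
    },
    {
        "name": "read_estimation_sheet",
        "description": "Read past integration hour breakdowns from the Estimation calculations sheet.",
        "input_schema": {"type": "object",
                         "properties": {"integration_type": {"type": "string"}},
                         "required": []},
    },
    {
        "name": "ask_user",
        "description": "Ask the AE a clarifying question in Slack when the request is ambiguous.",
        "input_schema": {"type": "object",
                         "properties": {"question": {"type": "string"}},
                         "required": ["question"]},
    },
]
```

Notice that ask_user is just another tool from the model’s point of view; giving it a schema is what makes “admit you don’t know” an action the agent can actually take.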
What Surprised Us
The first thing that surprised us was how often the BA’s review was just confirmation. Not “this estimate is wrong, let me redo it.” More like “the agent flagged the same three risk areas I would have, the hours look right, I’d add four hours for the staging environment work but otherwise sign it off.” That kind of review takes ten minutes. The hours she used to spend on lookup are gone.
The second surprise was the failure mode we expected versus the one we got. We assumed the agent would hallucinate, inventing endpoints that don’t exist or assuming fields that aren’t in the schema. The actual failure mode, before we added the ask-user tool, was overconfidence. The agent would produce a clean, plausible-looking estimate based on incomplete information rather than admit it didn’t know. Once the agent could ask, the estimates got smaller, not because it knew less, but because it was honestly bracketing the unknowns it had pretended weren’t there.
The third surprise was on the sales side. Our AEs started running estimates during discovery calls. The thing that used to be a follow-up artifact became a real-time conversation tool. A prospect could ask “what would it take to wire this into Salesforce” and get a number while still on the call. A few prospects told us, unprompted, that getting the figure that fast made us seem more buttoned-up than the alternative they were evaluating. We didn’t expect that. We were optimizing for our BA’s calendar, not our positioning.
What It Can’t Do
The agent is bad at exotic integrations. If the prospect wants to connect to a system the agent has no public docs for, or a system where the API behavior is different from what the docs say, the estimate will be a guess. The BA still has to step in and rewrite from scratch in those cases, and we flag those upfront so she’s not surprised.
The agent is also bad at scoping politics. Sometimes an estimate is being requested as a starting point for a negotiation, not as a precise number. A human reading the room knows the AE wants a generous estimate to leave room. The agent gives a defensible estimate every time, which is the right default, but the BA occasionally still rewrites for the conversation she knows is happening.
And the agent will not catch the kind of risk that only shows up when you’ve worked with a particular customer for years. “This is the customer whose IT team rejects every webhook integration.” That kind of context lives in the BA’s head, and it should. The agent provides a draft. The human provides the wisdom.
The Takeaway
If you have a workflow where a knowledgeable person spends most of their time looking things up and a small fraction of their time making judgment calls, that workflow is a good candidate for an agent. The agent is not replacing the judgment. It is removing the lookup that surrounded the judgment, so the human has more capacity for the judgment and the calendar around it gets shorter.
The estimator works because the inputs are written down. Our Swagger spec is up to date. The Google Sheet has years of clean data. The third-party APIs have public docs. If those inputs were sitting in someone’s head instead of in machine-readable form, no agent could replicate the BA’s work, no matter how good the model. The agent above the workflow only works when the data underneath is honest and current.
What’s Next
Next week we close the series with the operational layer underneath all of these projects. The documented schemas, the canonical identifiers, the permissioning, the unglamorous foundation that makes every agent above it possible. The estimator is one example. The same shape repeats in health scoring, the coaching pipeline, and the marketing pipeline. None of them work without the work nobody talks about.
Stay updated on Birdview's AI Automation Journey
This is Part 7 of “Becoming AI-Native,” a weekly series from the Birdview PSA team on our AI transformation journey. Follow along here on Birdview’s blog, on Vadim’s LinkedIn