How I Automated Demo Request Qualification with HubSpot and OpenAI
I built a simple workflow that scores inbound demo requests in under 60 seconds, routes hot leads to sales, and filters out poor-fit submissions before they hit the calendar.
I Was Losing Good Leads in My Own Inbox
I used to treat every demo request the same.
Someone filled out my form, I got a notification, and then I manually checked their website, company size, and use case before deciding if I should respond fast, hand them to a nurture sequence, or politely decline.
That sounds fine until volume picks up.
On busy weeks, I had 20 to 30 inbound requests. By the time I reviewed all of them, my best leads were waiting hours for a reply. A few never booked because someone else got to them first.
So I rebuilt the process with one goal: qualify every inbound lead in under a minute, then route them automatically.
This post breaks down the exact workflow I use with HubSpot and OpenAI.
What This Automation Actually Does
Every new demo request is scored on three things:
- Fit: Is this the type of company I can help?
- Intent: Are they actively trying to buy or just browsing?
- Urgency: Do they need a solution now or "sometime this year"?
Then the workflow takes one of three actions:
- High score: send instant priority follow-up + alert sales
- Mid score: send standard follow-up + add to nurture
- Low score: send helpful resources + keep out of the main sales queue
The main win is speed. The second win is focus. My calendar stays open for serious buyers.
My Stack
I kept this simple on purpose:
- HubSpot Forms for capture
- HubSpot Workflows for routing and lifecycle updates
- OpenAI API for qualification scoring from form text
- Make for orchestration between HubSpot and OpenAI
- Slack for hot lead alerts
You can swap Make for Zapier or n8n. The logic stays the same.
The Qualification Prompt I Use
The quality of this setup lives or dies on the prompt.
I pass these fields into OpenAI:
- job title
- company name
- company website
- company size (if collected)
- use case text
- biggest challenge text
- timeline
- budget range
Then I ask the model to return strict JSON with:
- fit_score (0-100)
- intent_score (0-100)
- urgency_score (0-100)
- overall_score (weighted)
- reasoning_short (one sentence)
- segment (hot, warm, low)
I also include hard disqualifiers in the prompt. Example: student projects, no business email, or requests outside my service area.
Those disqualifiers alone cut bad-fit bookings almost immediately.
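As an illustration, here is a minimal Python sketch of the scoring step. The system prompt wording, the weights, and the `gpt-4o-mini` model choice are my assumptions, not the exact setup; the point is strict JSON out, validated before anything routes on it.

```python
import json

# Hypothetical system prompt with the hard disqualifiers baked in.
SYSTEM_PROMPT = """You are a lead-qualification assistant.
Score the lead and return STRICT JSON with keys:
fit_score (0-100), intent_score (0-100), urgency_score (0-100),
overall_score (weighted), reasoning_short (one sentence),
segment ("hot", "warm", or "low").
Hard disqualifiers (score 0): student projects, no business email,
requests outside the service area."""


def build_messages(form: dict) -> list:
    """Turn HubSpot form fields into chat messages for scoring."""
    lead_text = "\n".join(f"{k}: {v}" for k, v in form.items())
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": lead_text},
    ]


def parse_score(raw: str) -> dict:
    """Validate the model's JSON so a malformed reply never routes a lead."""
    data = json.loads(raw)
    required = {"fit_score", "intent_score", "urgency_score",
                "overall_score", "reasoning_short", "segment"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"model response missing keys: {missing}")
    if data["segment"] not in {"hot", "warm", "low"}:
        raise ValueError(f"unexpected segment: {data['segment']}")
    return data


# The actual call (needs OPENAI_API_KEY) would look roughly like:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=build_messages(form_fields),
#     response_format={"type": "json_object"},
# )
# score = parse_score(resp.choices[0].message.content)
```

In Make, the same thing is a single OpenAI module; the validation step is the part worth keeping wherever this runs.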
The Routing Rules That Matter
Here are the exact thresholds I started with:
- Hot lead: overall_score >= 75
- Warm lead: overall_score 45-74
- Low lead: overall_score < 45
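Those thresholds collapse into one tiny function, which is handy if you ever want to unit-test threshold changes before touching the live workflow:

```python
def segment_for(overall_score: int) -> str:
    """Map an overall score to a routing segment using the thresholds above."""
    if overall_score >= 75:
        return "hot"
    if overall_score >= 45:
        return "warm"
    return "low"
```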
And the actions:
Hot
- Set HubSpot lifecycle stage to SQL
- Create high-priority task for same-day follow-up
- Send Slack alert with score + summary
- Send fast email with direct booking link
Warm
- Set lifecycle stage to MQL
- Assign to nurture sequence (3-email educational flow)
- Send normal response within minutes
Low
- Keep in lead database
- Tag as low-fit
- Send resources email instead of pushing a sales call
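If you ever move routing out of Make and into your own script, the per-segment actions mostly reduce to two HTTP payloads: a HubSpot contact update (PATCH `/crm/v3/objects/contacts/{id}`) and a Slack incoming-webhook message. The endpoint shapes are the public APIs; the custom property names below are assumptions for illustration:

```python
# HubSpot's built-in internal values for lifecycle stage.
LIFECYCLE_BY_SEGMENT = {
    "hot": "salesqualifiedlead",       # SQL
    "warm": "marketingqualifiedlead",  # MQL
    "low": "lead",
}


def hubspot_update_payload(segment: str, overall_score: int) -> dict:
    """Request body for PATCH /crm/v3/objects/contacts/{id}."""
    props = {
        "lifecyclestage": LIFECYCLE_BY_SEGMENT[segment],
        # "ai_overall_score" is a hypothetical custom property you would
        # create in HubSpot first; custom property values are sent as strings.
        "ai_overall_score": str(overall_score),
    }
    if segment == "low":
        props["ai_segment_tag"] = "low-fit"  # hypothetical tag property
    return {"properties": props}


def slack_alert_payload(name: str, score: int, summary: str) -> dict:
    """Request body for a Slack incoming webhook: score + one-line summary."""
    return {"text": f"Hot lead: {name} (score {score})\n{summary}"}
```

The follow-up emails and tasks stay in HubSpot Workflows either way; only the scoring and property update need to touch external code.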
The important part is not being "nice" to your pipeline. If someone is a poor fit, forcing them into sales hurts both sides.
The Follow-Up Emails (Short and Useful)
I stopped using long polished templates. They performed worse.
My highest-performing hot lead email is still plain text and short:
Subject: Quick next step
Hey {{first_name}},
Thanks for reaching out. Based on what you shared, I think we can help.
If you want, grab a time here and I will come prepared with a plan for your use case: {{booking_link}}
- Wesso
No hype. No fake urgency. Just a clear next step.
Results from the First 30 Days
After one month, here is what changed:
- Median first-response time: from 4h 10m to 11m
- Hot lead contact rate (same day): from 41% to 93%
- Unqualified calls on calendar: down 38%
- Booked calls from inbound demo form: up 27%
The biggest surprise was not conversion lift. It was mental load. I no longer start my day digging through form submissions trying to guess who matters most.
Mistakes I Made (So You Can Skip Them)
1) I trusted company size too much
Some startups with 8 people were ready to buy immediately. Some 200-person teams were early research mode. Use size as a signal, not a decision.
2) My first prompt was too generous
Early version over-scored polite leads with vague answers. I fixed this by explicitly penalizing generic text like "just exploring options" with no timeline.
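Concretely, the fix was a few extra scoring rules appended to the prompt. The wording below is a paraphrase of the idea, not my exact prompt:

```python
# Rules appended to the scoring prompt to stop over-scoring vague leads.
PENALTY_RULES = """Scoring rules:
- If the use case is generic ("just exploring options", "curious") AND
  no timeline is given, cap intent_score at 30.
- Politeness and enthusiasm are NOT buying signals; score only concrete
  details: named tools, team size, deadlines, budget.
"""
```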
3) I over-automated the low-score path
At first, low-score leads got ignored. Bad move. A few came back months later and became real deals. Now every low-score lead gets useful content and stays in a light nurture track.
A Practical Build Plan (You Can Copy This Today)
If you want to launch this quickly, do it in this order:
- Add better form fields
- Ask timeline and use case directly
- Add work email requirement
- Create one scoring scenario in Make
- Trigger: new HubSpot form submission
- Action: OpenAI scoring
- Action: update HubSpot properties
- Set simple thresholds
- 75+ hot, 45-74 warm, under 45 low
- Write three email templates
- One per segment
- Review scores daily for one week
- Manually spot-check 20 submissions
- Tune prompt and thresholds
Most teams can ship version one in a single afternoon.
Should You Build This?
If your sales team responds to every form manually, yes.
If inbound volume is tiny, you can wait.
But once you have consistent weekly demos, qualification automation pays for itself quickly. You answer faster, book better calls, and stop wasting sales time on clear misfits.
The key is to keep a human in the loop for tuning. Let AI handle first-pass sorting. Let your team handle judgment, strategy, and closing.
That split works.
Conclusion
Lead qualification is one of those tasks that feels small until it quietly wrecks your pipeline.
A lightweight HubSpot + OpenAI workflow fixed that for me. Not with a giant RevOps overhaul. Just faster triage, better routing, and cleaner follow-up.
If you want better inbound performance this quarter, start here. It is one of the highest ROI automations I have implemented this year.
Wesso Hall
Writing about AI tools, automation, and building in public. We test everything we recommend.