Didja Split Test It? The Insurance Agent's Guide to A/B Testing That Actually Matters
Hosts of The Insurance Dudes Podcast — 1,000+ episodes helping insurance agents build elite agencies

Every decision in your agency that touches client acquisition or conversion is a hypothesis. You believe the email subject line works. You believe the call script closes better. You believe the landing page converts well. But do you actually know? Or are you making those calls based on gut feel, past experience, and what sounded good in a meeting? Split testing, deliberately running two versions of the same thing to see which performs better, is how you replace assumption with evidence. And most insurance agencies have never done it once.
Why Opinions Are Expensive and Data Is Cheap
There's a common pattern in insurance agency marketing decisions. Someone has an idea. The idea sounds reasonable. Maybe they've seen something similar at another agency, or a vendor recommended it, or it just feels right. The agency implements the idea without any mechanism for measuring whether it works better than what they were doing before. Three months later, the idea is either still running (because nobody pulled the data) or was abandoned (because somebody got bored with it).
This pattern produces an agency that perpetually cycles through marketing initiatives without ever accumulating the knowledge that comes from understanding why something works when it does and why it doesn't when it doesn't.
Split testing is the alternative. Instead of implementing an idea and hoping, you implement the new idea alongside the existing approach, run both simultaneously with a split of your volume, and measure the results with enough statistical weight to draw a meaningful conclusion. Then you adopt the winner, archive the loser, and run the next test.
The cost of building this habit is low. The cost of not building it is an agency making hundreds of consequential decisions every year based on opinion rather than evidence.
Where to Split Test in an Insurance Agency
Most agents hear "A/B testing" and think of email subject lines, which is valid but represents a small fraction of the places where testing can drive improvement.
Email outreach. Subject line, call-to-action language, send time, and email length are all testable. Even small improvements in open rates compound across a list of any size. A 10-percentage-point improvement in open rate on a list of 2,000 clients means 200 more people reading your message every time you send one.
Call scripts. Your producers are running a script; even if it isn't written down, they're using an approach that's become habitual. Split testing a modified opening, a different objection response, or an alternative value-proposition framing on a sample of calls produces data about what actually moves the needle. This is uncomfortable because it requires honest measurement, but it's some of the most valuable testing you can do.
Landing pages and quote forms. If you have any digital marketing (a Google ad campaign, social media lead generation, anything that sends traffic to a web page), you have an A/B testing opportunity. Which headline generates more form fills? Which layout produces more quotes? Which call-to-action language converts? These questions have specific, testable answers.
Follow-up sequence timing and content. The timing and language of your lead follow-up sequence affects contact rates and conversion rates in ways that testing can reveal. Does a same-day text followed by an email the next day outperform three phone calls in 24 hours? Does a value-first message in the follow-up email convert better than a direct ask? These are empirical questions with answers you can find.
The Rules That Make Testing Valid
There are two things that make a test meaningful and two that make it meaningless.
What makes it meaningful: a large enough sample to draw conclusions from, and only one variable changed at a time. If you test a different email subject line and a different call-to-action in the same test, you can't know which variable drove the result. Change one thing, run enough volume, measure the right outcomes.
What makes it meaningless: a sample too small to reach statistical significance, and confirmation bias in the measurement. Humans are remarkably good at finding evidence that supports what they already believed. Building in objective measurement, ideally by having someone other than the idea's originator do the analysis, removes that distortion.
The threshold for "large enough" varies by channel and the size of the expected effect. A test on 20 calls can't tell you much. A test on 200 calls, run over two weeks with consistent conditions, starts to generate signal worth acting on.
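If you want a rough check that a result is signal rather than noise, a standard two-proportion z-test does the job. This is a minimal sketch, not part of the original article, and the numbers in it are hypothetical (say, 1,000 emails per subject-line variant):

```python
# A rough significance check for an A/B split: compare two conversion
# (or open) rates with a two-proportion z-test. Hypothetical numbers.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: variant A opened 180/1000, variant B opened 230/1000.
z, p = two_proportion_z_test(conv_a=180, n_a=1000, conv_b=230, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```

A p-value under 0.05 is the conventional bar, but the practical point stands either way: decide the sample size and the threshold before the test starts, not after you've seen the numbers.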
What This Means for Your Agency
Pick one thing to test this month. Just one. Not three initiatives simultaneously: one clearly defined test with a specific hypothesis, a clean split, a defined measurement period, and a commitment to acting on the result.
Write the hypothesis before you run the test: "I believe that changing X to Y will improve Z by approximately W." That written hypothesis prevents the post-hoc rationalization that turns any result into confirmation of the prior view.
Run the test. Measure honestly. Act on what you find. Then run the next test. The agency that's been running structured tests for two years has accumulated knowledge that cannot be bought, copied, or replicated by competitors who are still making decisions by feel.
The Bottom Line
Split testing isn't a tech-company thing. It's a discipline for anyone who makes decisions under uncertainty, which is every agency owner, every day. The agencies that build the habit of testing early and running tests consistently will make better decisions in perpetuity than agencies that rely on intuition and conventional wisdom. The question is simple: didja split test it? If not, now you know what to do about it.
About the Insurance Dudes: Craig Pretzinger and Jason Stowasser are agency owners, coaches, and the hosts of The Insurance Dudes podcast, built for agents who want to grow without losing their minds.
Level up your agency:
Listen to The Insurance Dudes Podcast
Get more strategies like this on our podcast. Available on all platforms.
Related Episodes

The Only 4 Ways to Rocket Revenue in Your Insurance Agency

Internet Leads, Dialed In: Jorge Carbonell Completes the Digital Lead Playbook (Part 2)

Own Your Traffic: Justin Thomas on Why Insurance Agents Need to Bring Their Marketing In-House

Culture Eats Strategy for Breakfast: Kelly Donahue's Framework for Building a Future-Ready Insurance Agency

2022 Six Step Success #3: Marketing — Building a Lead Engine That Actually Runs
