Stop Solving Agency Problems on Gut Instinct: A Scientific Framework for Insurance Operations

By Craig Pretzinger & Jason Feltman · 6 min read

Hosts of The Insurance Dudes Podcast — 1,000+ episodes helping insurance agents build elite agencies

Every insurance agency runs on a combination of systems and gut feelings. The systems handle what's been formalized. The gut feelings handle everything else, which, in most agencies, is an enormous amount. Jason Feltman makes a compelling argument that the gut-feeling approach to problem-solving is not just inefficient but actively counterproductive, and that agencies willing to apply a more scientific methodology to their operational challenges will find solutions that actually hold.

The Gut Feeling Problem in Agency Operations

The diagnostic moment that opens this conversation is one most agency owners recognize: a problem appears (a drop in close rates, a service complaint pattern, a team dynamic that's creating friction) and the first response is to ask "who" rather than "why." Who is underperforming? Who is creating the tension? Who dropped the ball on that client situation? The who-question feels productive because it identifies a specific person to address, but it almost always leads to the wrong solution.

The who-question produces personnel responses to process problems. You coach the underperforming producer, hoping the coaching addresses the underlying issue, when the underlying issue might be a lead source quality problem, a script gap, or a scheduling constraint that affects everyone. You address the tension between team members, hoping the interpersonal intervention resolves the friction, when the friction is caused by overlapping responsibilities that no amount of team-building will rationalize.

Jason's framework starts with a simple reframe: problems in insurance agencies almost always have systemic causes rather than individual causes, and solving them requires identifying the systemic root rather than the human symptoms. This doesn't mean people are never part of the problem (sometimes they are), but it means you should arrive at that conclusion through investigation rather than assumption.

The scientific approach he advocates is essentially the scientific method applied to operational problems: observe the problem precisely, hypothesize about root causes, test specific interventions, measure results, and iterate. This is how clinical researchers solve problems. It's how engineers debug complex systems. It's how the best-run agencies in the country approach operational challenges, and it's available to every agency owner who's willing to trade gut-feel immediacy for structured clarity.

The Framework That Turns Problems Into Processable Decisions

Step one is precise observation, defining the problem with specificity rather than generality. "Our close rate is down" is not a problem statement; it's a performance indicator. "Our close rate on internet leads sourced from vendor X has dropped from 22% to 14% over the past 45 days, with the steepest decline in the first-contact conversation" is a problem statement. The difference in specificity is the difference between knowing where to look and wandering.
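
To make the observation step concrete, here's a minimal sketch of how you might pull that kind of problem statement out of a lead-log export. It assumes a CSV with date, vendor, and closed columns and the 45-day window from the example; your CRM's fields will differ, so treat every name here as illustrative.

```python
# A minimal sketch of step one, not a real CRM integration: turn "close rate
# is down" into a specific, windowed comparison. Column names ("date",
# "vendor", "closed") and the file path are assumptions for illustration.
import csv
from datetime import date, timedelta

def close_rate(rows):
    # Share of leads marked closed; assumes the window contains at least one lead.
    return sum(r["closed"] for r in rows) / len(rows)

with open("leads.csv", newline="") as f:  # assumed export: date,vendor,closed
    leads = [
        {"date": date.fromisoformat(r["date"]),
         "vendor": r["vendor"],
         "closed": int(r["closed"])}
        for r in csv.DictReader(f)
    ]

cutoff = date.today() - timedelta(days=45)
vendor_x = [r for r in leads if r["vendor"] == "vendor_x"]
recent = [r for r in vendor_x if r["date"] >= cutoff]
baseline = [r for r in vendor_x if r["date"] < cutoff]
print(f"vendor_x internet leads: baseline close rate {close_rate(baseline):.0%}, "
      f"last 45 days {close_rate(recent):.0%}")
```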

Step two is hypothesis generation, listing the most likely root causes without prematurely committing to any of them. For the close rate example: vendor lead quality change? Producer personnel change? Script or conversation quality change? Lead response time change? Competitive market condition change? Each hypothesis points to a different investigation and a different potential intervention. Generating multiple hypotheses before investigating prevents the confirmation bias that leads to solving the wrong problem with conviction.
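
One low-tech way to enforce that discipline is to write every hypothesis down next to the test that would confirm or rule it out, before touching any data. The sketch below just encodes the list above; the structure itself is my assumption, not anything prescribed in the episode.

```python
# Step two as a written artifact: each hypothesis paired with its test,
# recorded before any investigation starts. Purely illustrative structure.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    cause: str                # suspected root cause
    test: str                 # the specific check that confirms or rules it out
    status: str = "untested"  # untested / confirmed / ruled_out

hypotheses = [
    Hypothesis("vendor lead quality changed", "request lead-quality data from vendor X"),
    Hypothesis("producer personnel changed", "compare close rates by producer, before vs. after"),
    Hypothesis("script or conversation quality changed", "review call recordings from the decline period"),
    Hypothesis("lead response time changed", "compare median response time to the baseline period"),
    Hypothesis("competitive market conditions changed", "check quoted premiums against the market"),
]

for h in hypotheses:
    print(f"[{h.status}] {h.cause} -> {h.test}")
```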

Step three is systematic testing, choosing the most likely hypothesis and designing a specific test. This might mean pulling the data on lead response times for the period of decline and comparing to the baseline period. It might mean reviewing call recordings from the affected time period to look for conversation quality changes. It might mean requesting lead quality data from the vendor. The test is designed to confirm or rule out the specific hypothesis, not to gather general data.
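
As a sketch of what one such test might look like, here's the response-time comparison in a few lines. The minute values are invented for illustration; in practice they'd come from your CRM's lead-created and first-contact timestamps.

```python
# One specific test from step three: did lead response time change between
# the baseline period and the decline period? Data values are invented.
from statistics import median

baseline_minutes = [4, 6, 3, 8, 5, 7, 4, 6]  # minutes to first contact, baseline period
decline_minutes  = [5, 4, 7, 6, 5, 8, 4, 5]  # same metric, the 45-day decline period

b, d = median(baseline_minutes), median(decline_minutes)
print(f"median response time: baseline {b} min, decline period {d} min")
# If the medians are close, this hypothesis is ruled out and you move to the
# next one; the test targets one hypothesis rather than gathering general data.
```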

Step four is measurement and learning, evaluating the test results honestly and updating the hypothesis set accordingly. If the data shows response time hasn't changed but conversation quality has, you've ruled out one cause and confirmed another. If neither changes explains the decline, you have two hypotheses eliminated and a clearer picture of where to look next.
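
Honest evaluation also means asking whether an observed difference is large enough to be real rather than noise. Here's a standard two-proportion z-test (my addition, not something from the episode) applied to the 22%-to-14% drop, with lead counts invented for illustration:

```python
# Is the close-rate drop signal or noise? A textbook two-proportion z-test
# using only the standard library; the lead counts (200 per period) are assumed.
from math import sqrt, erf

def two_prop_z(successes1, n1, successes2, n2):
    p1, p2 = successes1 / n1, successes2 / n2
    p = (successes1 + successes2) / (n1 + n2)                # pooled close rate
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided, normal approx.
    return z, p_value

z, p = two_prop_z(44, 200, 28, 200)  # baseline: 44/200 = 22%; decline: 28/200 = 14%
print(f"z = {z:.2f}, p = {p:.3f}")   # a small p suggests the drop is unlikely to be noise
```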

Step five is intervention and iteration, implementing a specific change designed to address the confirmed root cause, then measuring whether the change produces the expected result. If it does, you have a resolved problem and a documented solution. If it doesn't, you have a new data point and a more refined hypothesis.

This cycle (observe, hypothesize, test, measure, intervene, iterate) is not faster than gut-feel in the short run. It's dramatically faster in total, because it produces solutions that actually work rather than interventions that feel productive but leave the underlying problem intact.
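
Purely as an illustration of the cycle's shape, the whole loop fits in a few lines: each pass either resolves the problem or eliminates a hypothesis, so even a "failed" iteration shrinks the search. Every name here is hypothetical.

```python
# The cycle as schematic Python; run_test, intervene, and measure are
# hypothetical callables standing in for the manual work described above.
def solve(hypotheses, run_test, intervene, measure):
    for h in hypotheses:        # ordered most-likely first
        if not run_test(h):     # step three: a test aimed at this one cause
            continue            # ruled out: one fewer place to look
        intervene(h)            # step five: change aimed at the confirmed cause
        if measure():           # did the metric actually recover?
            return h            # resolved, with a documented root cause
    return None                 # all hypotheses eliminated: observe again
```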

What This Means for Your Agency

Identify one persistent operational problem in your agency, something that has come up multiple times, been addressed multiple times, and keeps recurring. Apply the precise observation step: what exactly is the problem, when does it occur, how severe is it, and what are the specific conditions under which it appears? Write this down with as much specificity as you can. That act alone will often reveal that the problem is different from what you've been solving.

Then generate at least three hypotheses about root causes before taking any action. Force yourself to consider systemic explanations (process gaps, workflow misalignments, communication failures) alongside individual explanations. Even if you ultimately conclude the problem is person-specific, the process of considering alternatives will sharpen your diagnosis.

Finally, commit to measuring the result of any intervention rather than assuming it worked. This is where most agency owners stop being scientific: they implement a solution and move on, assuming the problem is resolved. One follow-up measurement at 30 days and another at 90 days will tell you whether your intervention actually held, and give you the information to iterate if it didn't.
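
If it helps to make that habit mechanical, here's a small sketch that lays out the baseline and follow-up windows. The measurement itself is whatever report your CRM produces for a date range; the ship date and the 45-day baseline are assumptions for illustration.

```python
# Scheduling the 30- and 90-day follow-ups against a pre-intervention baseline.
# The intervention date is assumed; pull the same metric for each window.
from datetime import date, timedelta

def followup_windows(intervention_date: date):
    """Yield (label, start, end): a 45-day baseline before the fix, then
    30-day and 90-day follow-up windows after it."""
    yield "baseline", intervention_date - timedelta(days=45), intervention_date
    for days in (30, 90):
        yield f"{days}-day follow-up", intervention_date, intervention_date + timedelta(days=days)

for label, start, end in followup_windows(date(2024, 6, 1)):  # assumed ship date
    print(f"{label}: measure close rate from {start} to {end}")
    # If the 30- or 90-day number hasn't recovered toward baseline,
    # the intervention didn't hold and the hypothesis set needs another pass.
```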

The Bottom Line

The insurance agency problems that persist for years, get addressed repeatedly without resolution, and frustrate even talented owners almost always have a systemic root cause that gut-feel interventions keep missing. The scientific framework Jason Feltman advocates (precise observation, systematic hypothesis generation, structured testing, and honest measurement) is not a complex methodology. It's disciplined curiosity applied to the problems you already know you have. Apply it consistently and the problems that have been costing you for years will start revealing their actual causes. And causes, once identified, can be fixed.


Level up your agency:

Listen to The Insurance Dudes Podcast

Get more strategies like this on our podcast. Available on all platforms.
