Most ‘Models’ in Sports Betting Are Fake... Here’s How to Tell | Presented by Kalshi

2025-12-03

 

The Truth About Why Everyone Says "My Model Makes This"

 

Do you ever scroll through sports betting content and feel completely lost because everyone talks about their "model" but no one explains what it actually is? It's frustrating when a claim like "my model makes this" is presented as exactly what you need to win, yet you see zero math or process backing it up. The constant repetition of this phrase has turned it into the modern sports betting trust signal, replacing actual analysis. Today, we're stripping away the mystery surrounding "my model makes this" to understand what real models are, what they aren't, and why you shouldn't automatically trust anyone who throws the term around.

 

It’s easy to feel like you’re missing out when every analyst on YouTube, TikTok, and Twitter claims special insight derived from their proprietary system. But I want to show you that a model is just like any other tool. When used correctly, it's powerful. When used as window dressing, it tells you nothing. We're going to look at how to use these systems as genuine decision aids rather than just conversational filler.

 

Here's What We'll Cover

 

  • Defining exactly what a betting model is at its core
  • Why trend mining isn't actual modeling
  • The three essential ways serious bettors use their numbers
  • How to handle massive disagreements between your model and the market
  • Taking the guesswork out of situational spots like revenge games

 

What a Model Actually Is: Recipe, Not Crystal Ball

 

At its simplest, a sports betting model is just a structured way to convert data inputs into a single prediction. That's really all that happens. Numbers go in, and a number comes out. This output could be a projected point spread, an expected total score, or an estimated win probability. The critical element is consistency. The process must use the same defined rules and inputs every single time you run it. Think of it like a cooking recipe. Your ingredients are the statistics you value: yards gained per play, pressure rate against the quarterback, injury reports, and travel time. You decide the weight of each ingredient, mix it up using specific math, and what comes out is your prediction.

 

But here is the essential truth: A model is never a crystal ball. It’s an approximation of reality. Even the top-tier professional systems in the world are only slightly less wrong than the next best system on average. The goal in these efficient markets isn't perfection. The goal is simply to be a little less wrong than the general consensus or the market makers over the long run. When someone says "my model makes this," you are hearing version one of their recipe, not an infallible truth delivered by an oracle. You must recognize it as a structured opinion.

 

Trend Mining Is Not Modeling

 

Before we look at real modeling tiers, we need to clarify what a model is absolutely not. I see so many people confusing trend mining with building a true predictive model. Trend mining involves querying large historical databases until you find a pattern that looks predictive based on past against-the-spread results. For example, perhaps you find that home underdogs on short rest after a win have covered the spread over 60% of the time since 2010. This is data slicing and dicing. Because historical against-the-spread records are incredibly noisy, you can find a spurious trend for literally anything if you look hard enough. You can find trends based on jersey color or the coach's middle initial if you filter the data enough times.

 

Real modeling is forward-looking. It attempts to define the underlying mechanism that causes a team to score or prevent points. It focuses on predictive process factors like efficiency, explosiveness, player matchups, and true injury impact. If your process doesn't consistently produce a predicted score, spread, or total before the game happens, it isn't truly a betting model. It’s just a sophisticated search engine for historical randomness.

 

The Hierarchy of Serious Predictive Models

 

When people discuss building systems, they usually fall into a few distinct tiers based on complexity and the market they are trying to beat. Understanding this hierarchy helps you gauge the seriousness of the content creator you are listening to, especially when they claim huge edges derived from my model makes this.

 

Tier 1: Entry Level Power Ratings

 

Many people start here. In this setup, every team is assigned one value, or power rating. You subtract Team B’s rating from Team A’s rating, add a home-field adjustment, and you have your projected spread. This is a fantastic learning tool. It helps you understand how team strengths translate to point spreads over time, and it can uncover small value in very early lines or less liquid markets. However, in highly efficient and liquid markets like the NFL or NBA, relying solely on 32 basic numbers plus home-field advantage is almost never enough to consistently beat professional bettors running deep infrastructure, proprietary models built over years, and massive data sets. You can't expect a simple rating system to overcome syndicates betting six figures per week.
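The Tier 1 arithmetic fits in a few lines. This is only a sketch: the team names, ratings, and home-field value below are invented for illustration, and in practice you would fit all of them from results data.

```python
# Minimal Tier 1 power-rating spread projection, a learning-tool sketch.
# Ratings and the home-field value are invented, not fitted.

HOME_FIELD = 2.0  # assumed home-field advantage, in points

ratings = {
    "Team A": 4.5,   # points better than a league-average team
    "Team B": -1.5,  # points worse than a league-average team
}

def projected_spread(home: str, away: str) -> float:
    """Projected home-team margin: rating gap plus home field."""
    return ratings[home] - ratings[away] + HOME_FIELD

# Team A hosting Team B: 4.5 - (-1.5) + 2.0 = 8.0, i.e. Team A -8
print(projected_spread("Team A", "Team B"))
```

Comparing that output to the market line is the whole game at this tier; the hard part is earning the ratings, not doing the subtraction.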

 

Tier 2: Statistical Efficiency Models

 

This is where most serious recreational and professional bettors start focusing. Here, you move beyond simple aggregate ratings and incorporate efficiency metrics: Expected Points Added (EPA) per play, success rate, drive efficiency, and perhaps red zone conversion rate. Crucially, serious Tier 2 models adjust these metrics for the strength of the schedule faced. You are attempting to translate past performance into a forward-looking projection of scoring margin. This is much closer to what competitive bettors use, but it’s often still just one part of the puzzle.
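The schedule adjustment can be sketched as a single pass over opponents faced; real systems iterate until ratings converge and work from play-by-play data. Every number, team label, and the helper `adjusted_off_epa` here is an invented illustration.

```python
# One-pass opponent adjustment for an efficiency metric such as EPA/play.
# Toy sketch only: all values and team labels are fabricated.

raw_off_epa = {"A": 0.10, "B": 0.05, "C": -0.02}      # raw offensive EPA/play
def_epa_allowed = {"A": 0.02, "B": -0.04, "C": 0.06}  # EPA/play each defense allows
schedule = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

league_avg = sum(def_epa_allowed.values()) / len(def_epa_allowed)

def adjusted_off_epa(team: str) -> float:
    """Shift a raw offensive number by the quality of the defenses faced."""
    opp_avg = sum(def_epa_allowed[o] for o in schedule[team]) / len(schedule[team])
    # A softer-than-average slate (opp_avg > league_avg) deflates the raw
    # figure; a tough slate inflates it.
    return raw_off_epa[team] - (opp_avg - league_avg)

print(round(adjusted_off_epa("A"), 4))
```

Team A faced stingier-than-average defenses in this toy data, so its adjusted number comes out slightly above its raw 0.10.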

 

Tier 3: Player Level and Injury Adjustments

 

This is the truly professional level used to gain minor edges in markets with huge limits. It’s not enough to say Team X has a plus-4 rating overall. These systems drill down: What is this offense worth with Quarterback A versus Quarterback B? What happens to drive efficiency when the starting left tackle is out? What is the model’s output if the presumed starting cornerback is inactive three days before the game? At this level, the model must be extremely detailed. It combines baseline team efficiency with granular player-level adjustments based on usage, matchups, and injury status. Furthermore, continuous back testing and updating are required because the league itself changes scheme and personnel constantly. These pros are hunting for a fraction of a point of value, not pointing out 14-point discrepancies.

 

When Your Model Disagrees With the Entire Market

 

One of the most alarming but common sights is when a new analyst proclaims their weekend project model says the market is completely wrong. For example, the market prices a game at minus 7, but their new calculation spits out minus 21. If this were true, that person would be sitting on the largest, most liquid edge in sports betting history. They could bet millions and every sharp bookmaker would happily take their action. That almost never happens.

 

When your brand new system screams a 14-point discrepancy in a mature market, the most likely explanation isn't that you suddenly cracked the secret of the universe. The most likely explanation is that *your model is broken*. You probably forgot to adjust for strength of schedule, you double counted an input, or you are using a noisy statistic that doesn't actually predict future outcomes. Humility is key here. Treat massive disagreements as a bug report, not as proof of your genius. Dig into what is causing that massive shift.

 

The Danger of Overriding Your Number

 

I often get annoyed hearing this exchange: "My model makes this minus 3, but the market is minus 1, so I am betting the underdog anyway." If this is your routine, why build the model in the first place? The entire purpose of committing to a process is to remove the noise of gut feelings when game time arrives. You need consistency. If you dismiss the output every time it conflicts with your vibe, the model is merely a conversational prop, not a decision-making tool. That is the biggest trap in "my model makes this" culture.

 

It is perfectly acceptable to believe the model is incomplete. But the next step shouldn't be an immediate override. The next step must be testing. If you suspect a look-ahead spot matters, you must first define it mathematically. Is it a sandwich game? Is it when a huge favorite plays a divisional rival next week? Define it, then test whether teams in that exact situation actually regress in meaningful metrics like EPA or scoring margin, not just in noisy against-the-spread records. If you prove that element matters, you bake it into the model. Then, when the model spits out minus 4, that number *already includes* the look-ahead factor. You are no longer guessing in the moment.
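The test-before-override workflow might look like this in miniature. The `games` data is fabricated, and a real study would span several seasons of EPA-based results with an out-of-sample check before anything earns a place in the model.

```python
# Define a situational angle precisely, then measure it in a meaningful
# metric instead of overriding the model on feel. Toy data only.

games = [
    # (in_lookahead_spot, epa_margin_vs_expectation)
    (True, -0.08), (True, -0.03), (True, 0.01), (True, -0.05),
    (False, 0.02), (False, -0.01), (False, 0.03), (False, 0.00),
]

def avg(values):
    return sum(values) / len(values)

lookahead = [m for spot, m in games if spot]
baseline = [m for spot, m in games if not spot]

# Negative effect = teams in the defined spot underperform expectation.
effect = avg(lookahead) - avg(baseline)

# Only if the effect is real and stable out-of-sample does it get baked in,
# e.g. as a fractional-point downgrade in those specific spots.
print(round(effect, 4))
```

Once the measured effect is built into the rating, the model's output already carries the look-ahead adjustment, which is the whole point.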

 

How to Actually Use Your Model Effectively

 

So, if we stop using the model as a shield or a soundbite, how should you be treating it? In my experience, a good predictive system serves three primary functions for the person building it.

 

  1. It acts as an Anchor. It is your mandatory starting point for evaluating a game. Your number is minus 3. The market is minus 1. Now the conversation starts: Why are we different? This forces a structured review instead of random opinion shifting.
  2. It forces Consistency. Without a quantifiable baseline, your opinions will naturally swing wildly from week to week based on recent wins or losses. A model ensures you treat situations that are statistically similar in the same financial manner. If two teams have identical underlying efficiency metrics, they should not be priced 10 points apart just because one team lost a teaser that burned you last week.
  3. It is your Experiment Lab. Have a theory about rest or weather? Great. Plug that data element in. Run the simulation. Does it move your prediction needle? If yes, that theory might be worth incorporating. If it doesn't move the needle meaningfully, trash the narrative and focus your energy elsewhere. This is constant refinement, not constant narrative stacking.

 

Common Questions About Betting Models

 

What Does Real Model Disagreement Imply?

 

If your number differs from the market by a tiny amount, maybe half a point or one point in a liquid market, that's where the investigative work begins. This small divergence suggests your model might be capturing something the market slightly missed or that you are over or under-weighting a factor just a little bit. This tiny edge is precisely where professionals operate. Dig deep to understand that small gap and see if you can refine your input weights. That is where profitability lives.

 

The Easiest Way to Start Today

 

If you are new and don't have a complex, regression-based system, start simple with linear weights. Assign a weight of 100 to your most trusted statistic, like weighted team efficiency, which combines yards per play and success rate. Assign 50 to your second-tier stat, like turnover margin. Then add a simple adjustment for home-field advantage, maybe 2.5 points. By weighting known, predictable factors that drive scoring, you immediately have a basic calculation that produces a spread. This simple structure is far superior to just picking favorites that you "like."
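The linear-weights starter above could be sketched as follows. The weights mirror the text (100, 50, plus 2.5 home field), but the per-team stat values are invented and the scaling from stats to points is an assumption you would calibrate against historical scores yourself.

```python
# Linear-weights starter model. Weights follow the article's example;
# the team stat values are fabricated and the points scaling is assumed.

WEIGHTS = {"efficiency": 100.0, "turnover_margin": 50.0}
HOME_FIELD = 2.5  # points

# Inputs normalized so 0.0 is league average.
teams = {
    "Home": {"efficiency": 0.03, "turnover_margin": 0.02},
    "Away": {"efficiency": -0.01, "turnover_margin": 0.00},
}

def rating(team: str) -> float:
    """Weighted sum of a team's normalized stats."""
    return sum(WEIGHTS[k] * teams[team][k] for k in WEIGHTS)

def projected_spread(home: str, away: str) -> float:
    """Projected home margin: rating gap plus home field."""
    return rating(home) - rating(away) + HOME_FIELD

print(projected_spread("Home", "Away"))
```

Even a crude structure like this forces you to state what you believe drives scoring and by how much, which is exactly the discipline picking favorites by feel never provides.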

 

Can I Stop Saying "My Model Makes This"?

 

Yes, absolutely. If your system truly is a complex, player-level analysis running proprietary machine learning algorithms, you can explain that once. But if you are just running a weekend query on a public database, stop pretending it is an original, world-beating algorithm. It is better to say, "Based on historical situational trends, I like this angle," than to use the term model as a placeholder for a vague process you don't want to explain.

 

What If The Market Is Right Most Of The Time?

 

That’s a sign that you are in an efficient market, and you should be thrilled! If your model makes a number for a game and the market aligns with it, that is the best signal possible. It means your structured thinking is confirming the collective wisdom of thousands of other sharp data points. In efficient markets, alignment is often a better signal than dramatic, unsupported disagreement.

 

How Long Does It Take To Build A Good Model?

 

There is no quick path to building an edge in major sports betting. The fantasy of building a supermodel in a single weekend by loading up a few stats is just that: a fantasy. Think of it like building a business. The systems that compete against major betting syndicates have been iterated on for years, cost thousands in data services, and employ full-time quantitative analysts. Progress is incremental, involving constant testing, breaking, and rebuilding over multiple seasons.

 

Your Next Steps

 

We've established that model culture often relies on buzzwords rather than substance. Real modeling is hard, fragile work centered on creating better approximations of reality, not finding shortcuts. Your model should function primarily as an anchor for your research and a laboratory for testing concrete theories, which promotes consistency in your decision making.

 

If you are a recreational bettor, please stop feeling pressured to use a model. Bet within your means, have fun, and focus on smart decision making. If you are aspiring to build one, take it slow. Be brutally honest about what your inputs do. If your number screams a 10-point edge, assume your math is wrong first, not the entire world of professional betting. Go investigate that massive gap. I want to know what you think: are you actively building systems, or do you think mentioning your model has become the easiest way to avoid explaining your actual thesis? Let me know your honest opinion in the comments section below. If this breakdown helped clarify the noise, please hit the like button and subscribe for more deep dives.



 

 

 

 

 

 

 

 

About Circle Back

 

To support Circle Back: Sign up for new sportsbook accounts using our custom links and offers. Click HERE.

 

Stay Updated: Subscribe for more Circle Back content on your favourite platforms:

 

Follow Us on Social Media:

 

🔨 Sign up to Kirk's Hammer

 

Scale Your Winnings With Betstamp PRO

Betstamp Pro saves you time and resources by identifying edges across 100+ sportsbooks in real-time. Leverage the most efficient true line in the industry and discover why Betstamp Pro is essential for top-down bettors.

 

Limited number of spots available! Apply for your free 1-on-1 product demo by clicking the banner below.

Episode Transcript

 

[00:00] My model makes this game minus four. My

[00:02] model loves this side. My model has this

[00:06] total three points higher. You hear that

[00:08] stuff everywhere now. It's in every NFL

[02:11] video, every TikTok, every Twitter

[00:14] thread. Everyone suddenly has a model.

[00:16] The guy betting $10 parlays has a model.

[00:19] This dude recording picks in his car, he

[00:22] has a model. Content creator model.

[00:24] Discord tout definitely a model. And

[00:27] look, I'm not anti-model. I use models.

[00:30] I like models. But at some point, you

[00:32] start asking yourself, do these models

[00:35] actually mean anything? What is a model

[00:38] really? And when does my model just turn

[00:41] into this magic word people throw out

[00:44] there so you don't question anything

[00:45] that they're saying? That's what I want

[00:47] to talk about today. Not in a super

[00:50] technical way, not here's how to code a

[00:53] model in Python, just what models

[00:56] actually are, what they aren't, and why

[00:59] my model makes it this has kind of

[01:02] become the new trust me bro in sports

[01:04] betting.

[01:11] [music]

[01:16] [music]

[01:17] So quick background. So this doesn't

[01:19] feel like some random rant. I'm Rob

[01:21] Pizzola. I run the Hammer betting

[01:22] network, a sports betting media company,

[01:25] which includes this channel,

[01:27] Circles Off, and many others. I spend

[01:30] way too much of my life consuming

[01:32] betting content. Not just the stuff that

[01:34] we make here at the hammer, but

[01:36] everybody else's, too. YouTube,

[01:38] podcasts, Twitter, Instagram, TikTok.

[01:41] If someone is talking about sports

[01:43] betting, there's a decent chance and it

[01:45] ended up in my feed at some point. And

[01:48] over the last couple years, my model has

[01:51] kind of become the backbone of a lot of

[01:54] that content. My model makes this, my

[01:56] model loves that. It it's just the

[01:58] default justification. Now, you don't

[02:00] have to explain your process. You just

[02:02] say my model and you move on. Now, I

[02:07] want to be really clear. Again, I'm not

[02:10] anti-model at all. I love this stuff. I

[02:13] use models for my own personal betting.

[02:15] I'm what's often referred to as an

[02:17] originator. I use multiple models. I

[02:19] spend a stupid amount of time trying to

[02:22] make them better. Models are a big part

[02:24] of how I bet. So, this isn't a models

[02:27] are bad video. It's not a coding

[02:29] tutorial. It's not me flexing like, "Oh,

[02:31] my numbers are better than everyone

[02:33] else's." Think of this more as a state

[02:35] of the union on model culture. what

[02:39] models actually do, how they're supposed

[02:42] to be used, and how a lot of people are

[02:44] using them in ways that frankly don't

[02:47] really make sense.

[02:49] So, let's strip away the mystery for a

[02:51] second and just talk about what a model

[02:54] actually is. At the most basic level, a

[02:58] model is just a structured way of

[03:00] turning information into a prediction.

[03:03] Numbers go in, number comes out, that's

[03:06] it. That number might be I think this

[03:08] team should be minus three and a half or

[03:10] I think this total should be 46 and a

[03:13] half or this team scores X points on

[03:16] average. But underneath it that number

[03:18] is always calculated the same way. Same

[03:21] inputs, same rules, same process every

[03:24] single time. And that can be really

[03:26] simple. The classic example is a power

[03:28] ratings model. Every team has a number.

[03:30] You take team A minus team B. You add in

[03:33] some home field advantage and boom, you

[03:36] have a spread. Or it can be way more

[03:39] complex than that. You can use

[03:40] play-by-play data. EPA per play, success

[03:42] rate, drive efficiency, player grades,

[03:45] injuries, weather, travel, all these

[03:47] things rolled together with some math

[03:50] and you end up with a prediction. The

[03:52] way I think about it is like a recipe.

[03:54] Your ingredients are things like yards

[03:57] per play, success rate, pressure rate,

[03:59] injuries, rest, travel, whatever. You

[04:02] decide how much each ingredient matters.

[04:04] You mix them together in a consistent

[04:06] way. And what comes out of the oven is

[04:08] your number on the game. The spread, the

[04:11] total, or whatever you're modeling. And

[04:13] here's the key part. A model is not a

[04:16] crystal ball. It's just an approximation

[04:19] of reality. Even the best models in the

[04:22] world are just slightly better

[04:24] approximations than everyone else's. The

[04:27] whole goal is to be a bit less wrong

[04:29] than the market on average, not to

[04:31] magically land 10 points off on every

[04:33] game, and definitely not to be perfect.

[04:36] When you hear my model makes this,

[04:39] you're not listening to some infallible

[04:41] truth. You're listening to someone's

[04:43] recipe for the game. Now, before we talk

[04:46] about different types of real models, I

[04:50] want to hit something that confuses a

[04:51] lot of people. What a model is not.

[04:55] There are a bunch of products out there

[04:57] that will let you query a database and

[04:59] do what I'd call trend mining. This team

[05:02] is an underdog in a prime time game

[05:04] coming off of a win. Uh home dogs

[05:06] between three and seven points after

[05:08] scoring 24 plus last week are 18 and

[05:11] seven against the spread. People run a

[05:13] query like that. They say, "Hey, this

[05:15] angle is 72% against the spread over the

[05:18] last 20 years." Suddenly they think

[05:20] they have a betting model. That's not

[05:23] modeling. That's just slicing and dicing

[05:25] past betting results. You're not

[05:27] actually predicting how good the team is

[05:29] going forwards. You're not telling us

[05:31] what that information means moving

[05:33] forwards. You're just filtering history

[05:36] until you find a pattern that looks

[05:38] cool. And the problem is against the

[05:41] spread records are incredibly noisy. If

[05:43] you cut the data enough ways, you can

[05:46] find a trend for literally anything.

[05:48] Teams wearing blue jerseys on short

[05:50] rest. uh coaches named Mike coming off a

[05:53] loss. Uh how a team fares against the

[05:56] spread when they flip from a favorite to

[05:58] an underdog. If your model is basically

[06:01] step one, type in situational angle.

[06:04] Step two, look at past against the

[06:07] spread record. Step three, bet on it

[06:09] because it's 61% historically. You don't

[06:12] have a model. You have a search engine

[06:15] for randomness. A real model is

[06:18] forward-looking. It tries to capture

[06:20] things that cause teams to score and

[06:22] allow points. Play efficiency,

[06:26] explosiveness, turnovers, injuries,

[06:28] matchups, coaching, whatever you can

[06:31] quantify, and then it turns into a

[06:33] prediction before the game is played. If

[06:36] you're not producing a predicted score,

[06:38] spread, or total with a consistent

[06:41] process, it's not really a model, no

[06:43] matter how fancy it looks.

[06:46] So, with that out of the way, what kind

[06:48] of models are people actually using when

[06:50] they're trying to do this? Seriously? A

[06:53] lot of people start with what I'd call

[06:54] an entry-level power rating model. Every

[06:58] team has a number. You subtract one from

[07:00] the other. You add in home field, and

[07:02] that's your spread on the game. Now,

[07:03] there's nothing wrong with this as a

[07:05] learning tool. It's a great way to

[07:06] understand how markets move to track

[07:09] teams over time. maybe find a bit of

[07:11] value in smaller markets or on very

[07:14] early openers if you're sharp and you're

[07:16] disciplined, but in a market like the

[07:18] NFL with big limits, a ton of sharp

[07:21] money involved, that by itself is not

[07:23] going to cut it long term. If your plan

[07:26] is, I have 32 numbers and home field for

[07:28] each team, I'm going to beat a room full

[07:30] of people that are running massive

[07:33] infrastructure and years of data. That's

[07:36] just not realistic. Then you move up the

[07:39] ladder into a more statistics-based

[07:41] model. Here you use things like expected

[07:44] points added per play, success rate, red

[07:47] zone efficiency, maybe drive level

[07:49] metrics, and ideally you're adjusting

[07:51] for strength of schedule. You're trying

[07:54] to translate how teams have actually

[07:56] played into how many points they should

[07:59] score going forwards. That's closer to

[08:01] what serious bettors are doing. But even

[08:03] that is just one layer. On top of that,

[08:07] the pro stuff usually includes player

[08:09] level and injury concepts.

[08:12] You're not just saying team X has a

[08:14] plus4 rating. You're saying what is this

[08:16] offense worth with this quarterback,

[08:19] with this offensive line, these

[08:21] receivers? What happens when the left

[08:23] tackle is out? What happens when the

[08:24] quarterback room is down to backups?

[08:27] What if the quarterback is limited and

[08:28] he can't run? At a truly professional

[08:31] level, the model is extremely detailed.

[08:34] baseline team ratings, play-by-play or

[08:37] drive level efficiency models, player

[08:40] level adjustments for injuries and

[08:42] usage, situational tweaks for things

[08:45] like rest, travel, weather, maybe even

[08:49] how teams change play calling when

[08:50] they're leading or trailing. Constant

[08:53] back testing and updating so it doesn't

[08:55] fall apart when the league changes.

[08:58] That's the reality of trying to bet into

[09:00] big limits in the NFL or the NBA or any

[09:02] major sport. It's extremely competitive

[09:05] and the people you're up against are

[09:06] trying to account for as many predictive

[09:09] factors as possible. They're not just

[09:11] saying team A is a plus4, team B is a

[09:13] plus one. Even with all that, the edges

[09:16] you're fighting for are still small.

[09:19] You're not walking around with my model

[09:21] makes this game minus 21 when the market

[09:24] is minus 7 in the most liquid markets in

[09:26] the world. You're hunting for maybe a

[09:28] point of value here or there, maybe a

[09:30] little bit more in the right spots. and

[09:32] you're trying to do that over and over

[09:34] and over without the thing breaking.

[09:37] There's also this fantasy that I see now

[09:39] of the overnight supermodel. You open up

[09:42] your Excel on Friday, you grab a couple

[09:45] stats from a website, you plug them into

[09:47] a formula, by Sunday, you've built the

[09:49] thing that's going to crush NFL sides

[09:51] for the rest of the time. That's just

[09:53] not how this works in markets like the

[09:55] NFL. Look at who you're competing

[09:58] against. You've got people betting six,

[10:00] seven figures a weekend. You've got

[10:03] syndicates with full-time quants,

[10:05] developers, data feeds that cost more

[10:08] than a car. Models they've been

[10:10] iterating on for years. You've got

[10:12] longtime pros who have seen every trend,

[10:16] every fake edge, every new angle come

[10:18] and go. And even those people with all

[10:21] that infrastructure are still constantly

[10:24] testing, breaking, and rebuilding their

[10:27] models. They run into edges that

[10:30] disappear. They hit stretches where

[10:32] their stuff stops working and they have

[10:34] to adapt. They have losing weeks, losing

[10:36] months, maybe even a losing season. So,

[10:39] if you spin up a model in a weekend and

[10:41] suddenly it says, "Well, every favorite

[10:43] on the board should be 10 points higher

[10:44] than the market." The most likely

[10:47] explanation is not that you've just

[10:49] unlocked a secret that nobody else on

[10:51] Earth has found. The most likely

[10:53] explanation is that something in your

[10:55] process is off. Maybe you double counted

[10:58] something. Maybe you didn't adjust for

[11:01] strength of schedule. Maybe you're using

[11:03] noisy stats that don't actually predict

[11:05] the future. There are a million ways to

[11:08] accidentally create fake edges. The

[11:11] healthier mindset is if your brand new

[11:14] model is screaming that the entire board

[11:17] is wrong, assume your model is wrong

[11:20] first, not the world. Use that as a cue

[11:22] to dig in. Figure out what's broken.

[11:24] Treat it as a learning step, not a

[11:27] signal to fire into every single game.

[11:30] As a general rule of thumb, the more

[11:32] efficient the market, the more likely

[11:34] you are to be wrong. One of my biggest

[11:36] pet peeves with model culture is the

[11:38] this line that you hear all the time, my

[11:41] model makes this minus 3, the market's

[11:44] minus one, but I'm actually going to bet

[11:46] the dog. Okay, then why do you have a

[11:48] model? Like genuinely, what what are we

[11:51] doing here? The entire point of building

[11:53] a model is to take everything you

[11:55] believe is predictive and lock it into a

[11:57] process so that when you get to a game,

[12:00] you're not just going off of vibes.

[12:02] You're going off of something

[12:03] consistent. If every time your number

[12:05] disagrees with the market, you just

[12:07] said, "Yeah, but I'm not feeling it."

[12:09] And you bet the other side, you don't

[12:11] actually believe in your model. You're

[12:13] just using it as a prop in the

[12:15] conversation.

[12:16] Now, it's totally fine to think your

[12:18] model is missing something. Maybe you

[12:20] think there's a motivational angle, a

[12:22] look ahead, revenge game, or whatever

[12:25] narrative you like. That's great. But

[12:27] the next step shouldn't be, I'm going to

[12:30] ignore my number and bet against it. The

[12:33] next step should be, let me test whether

[12:36] this thing I believe in actually

[12:38] matters. Can I define what a look ahead

[12:42] is in a way that I can go back and I can

[12:44] study it? Can I define what a revenge

[12:47] game is in a way that I can go back and

[12:50] study it? Can I measure how many teams

[12:53] in those spots have actually performed

[12:56] in meaningful stats like EPA, success

[12:59] rate, scoring margin, not just against

[13:02] the spread noise. If you find something

[13:04] real, you bake it into the model. You

[13:07] upgrade the process so that next time

[13:10] your number already reflects that angle.

[13:12] You're not guessing in the moment. Rufus

[13:15] Peabody, one of the best professional

[13:17] bettors that I know, talked about this

[13:19] years ago on Bet the Process, his

[13:22] podcast, and it really stuck with me.

[13:23] His basic point is, if you think

[13:25] something is important, figure out how

[13:28] to capture it in your model. Don't just

[13:30] talk about it. Don't just override the

[13:32] number on gut feeling. Either prove it

[13:35] and include it or admit that it's

[13:37] probably just a story you like telling

[13:39] yourself.

Another version of this, which lives in the same neighborhood, is the classic "I make this game minus 4, but it's a look-ahead spot, so I'm passing on it." I see this all the time. And again, if you're actually a modeler, this is an opportunity to dig into the data. In a sport where you can test your assumptions, every spot like this is a chance to improve your model. If you really believe look-ahead matters, your brain should immediately go to: okay, what is a look-ahead spot, exactly? Is it when a team is a big favorite this week and plays a divisional rival next week? Is it a so-called "sandwich game" between two tougher opponents? Is it certain ranges of point spread, or certain parts of the season? You define it first, then you go back and you test it. Do teams in that spot actually underperform in anything that matters? Do they score less? Do they give up more points? Is their EPA worse? Does their success rate drop? Not just "oh, they're 46% against the spread on this angle," because that's mostly noise. If you go back and find that, yes, teams in this situation really do underperform in meaningful ways, then perfect: bake it into the model. Maybe that's worth a small downgrade to the team's rating in those specific situations. Maybe it's worth a fraction of a point off the point spread.
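That define-it-then-test-it loop can be sketched in a few lines. This is a toy illustration with made-up data and hypothetical field names, not a real study; on real data you'd want hundreds of games and a proper significance test before trusting the gap.

```python
import statistics

def lookahead_effect(games):
    """Compare a performance metric for teams in your defined look-ahead
    spot vs. everyone else.

    `games` is a list of dicts with hypothetical keys: 'margin' (points
    scored minus points allowed; swap in EPA or success rate if you have
    it) and 'lookahead' (bool, your own precise definition of the spot).
    """
    spot = [g["margin"] for g in games if g["lookahead"]]
    rest = [g["margin"] for g in games if not g["lookahead"]]
    diff = statistics.mean(spot) - statistics.mean(rest)
    # Scale the gap by the overall margin spread for a rough effect size;
    # with real data you'd also run a proper significance test.
    sd = statistics.pstdev([g["margin"] for g in games])
    return {"margin_diff": round(diff, 2), "effect_size": round(diff / sd, 3)}

# Toy data: two games in the spot, three outside it.
sample = [
    {"margin": -3.0, "lookahead": True},
    {"margin": -1.0, "lookahead": True},
    {"margin": 2.0, "lookahead": False},
    {"margin": 4.0, "lookahead": False},
    {"margin": 0.0, "lookahead": False},
]
result = lookahead_effect(sample)
# {'margin_diff': -4.0, 'effect_size': -1.655}
```

A gap that survives that kind of scrutiny, on a real sample, is what earns a factor its fraction of a point in the model.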

Whatever the number is, now your model knows about look-aheads. So when you get to Week 10 and you say, "I make this minus 4," you're not saying, "I make this minus 4, but it's a look-ahead spot, so I'm passing." You're saying, "I make this minus 4," with the look-ahead spot already included. You're not guessing on the fly anymore. The work is already done.

It's in the number.

Another one that makes me shake my head is this: the market has Team A at minus 7, and a content guy comes along and says, "My model makes this game minus 21." Okay, let's walk through what that actually implies. If that were true, the real number on this game would be minus 21 while the entire world is hanging minus 7, and you would be sitting on one of the biggest edges you will ever see, in one of the most liquid markets on the planet. You could, in theory, keep betting that edge all the way up, and every sharp book on earth would be begging you to take more action. That almost never exists in the NFL, the NBA, or any mature market. Now, you might be directionally correct. Maybe minus 7 is cheap. Maybe the true number is minus 7.5, or minus 8, even minus 9. But a 14-point disagreement between your model and a heavily bet market is not usually evidence that you've broken the code. It is almost always evidence that something in your model is broken. And this is where humility matters. Treat those giant edges as a bug you need to investigate, not proof that you're a genius and everyone else is an idiot. So

with all that said, how should you actually use a model? To me, there are three big things. First, it's an anchor: your starting point for what a game should be. Not a dictator, not something you blindly follow no matter what, but the baseline you work from. Okay, my number is minus 3; where is the market, and why are we different? It frames the entire conversation.
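One way to see why the earlier minus-7-versus-minus-21 claim is so implausible is to translate spreads into cover probabilities. A common back-of-the-envelope assumption (an approximation, not gospel) is that NFL final margins are roughly normal around the true spread with a standard deviation in the 13-to-14-point range:

```python
import math

MARGIN_SD = 13.5  # rough, commonly cited SD of NFL margins around the spread

def cover_prob(true_spread, market_spread, sd=MARGIN_SD):
    """P(favorite covers the market spread), assuming your model's number
    is the true mean margin. Spreads are positive favorite margins.
    Uses the standard normal CDF via math.erf."""
    z = (true_spread - market_spread) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# If the true number really were 21 and the market hung 7:
p = cover_prob(21, 7)   # roughly 0.85 -- an absurd, career-making edge
# A plausible disagreement, true number 8.5 vs. market 7:
q = cover_prob(8.5, 7)  # only a little over 0.54
```

The market at minus 7 implicitly prices the favorite's cover chance near a coin flip; claiming 21 means claiming roughly an 85% cover probability in one of the most efficient markets on earth. A half-point or one-point disagreement, by contrast, moves the needle only a few percent, which is what real edges tend to look like.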

Second, it gives you consistency. Without a model, it's really easy for your opinions to swing all over the place from week to week. One week you love a team; the next week you hate them, and you can't really explain why. A model forces you to treat similar situations similarly. If two teams have played basically the same, they shouldn't be priced 10 points apart just because you're mad that one of them burned your teaser last week.

And third, the main thing I'd love for you to take away from this video: it's an experiment lab. You have a theory about rest, or travel, or penalties, or weather, whatever. Great. Plug it in, test it, and see if it moves the needle on your prediction. If it does, keep it. If it doesn't, trash it. You're constantly refining instead of just stacking narratives.

At the end of the day, your

model shouldn't be a buzzword or a shield you hide behind. It's just a tool to help you make slightly better, more consistent decisions over hundreds and thousands of bets. That's where it really matters.

So, just to wrap this up: I am very pro-model. I think models are one of the best tools you can have as a bettor. What I'm against is the model cosplay that's taken over the space: throwing around "my model" as a magic word with nothing behind it. Real models are powerful, but they're also hard. They're fragile. They're always wrong to some degree. The name of the game is trying to make them a little bit less wrong than everybody else's.

If you're a recreational bettor watching this, it is completely fine if you don't have a model. You don't need one to enjoy betting on sports, and you don't need to pretend you have one just because everyone on YouTube is saying, "My model makes it this." Bet small, have fun, try to make good decisions. That's enough.

If you're an aspiring modeler, my advice is this: take it slow. Test things. Break your own ideas. Be honest about what your numbers can and cannot do. If your model disagrees with the market by a little, great, dig in. If it disagrees by a ton, assume you messed something up and figure out why.

For all of you betting out there, I'd love to hear what you're using. Do you have your own numbers? Are you thinking about building something? Or do you think "model" has just become a buzzword people are hiding behind right now? Let me know in the comments below; I do love reading the comments on this channel every single week. And if you found this video helpful, hit the like button, subscribe to the channel, share it with someone who's always talking about their model, and we'll do more videos like this, perhaps diving a little deeper into modeling in the future. Thanks for watching.




