Is Betting Good

Posted by admin on 3/18/2022

Gambling is ultimately about the love of money, and it undeniably tempts people with the promise of quick and easy riches. With the boom in betting come corrupting consequences: deep indebtedness, despair, criminal behaviour and other tragic repercussions that affect individuals, their families and society. Sometimes you simply have to accept some losses and wait for the good times to roll. Having said that, Bet Alchemist does a good job of keeping losses to a minimum. By providing plenty of each-way bets, this tipster stops losses from building up, and he also does a good job of finding value bets at attractive odds.

The way bettors process information is important to their success. What is binary bias? What can YouTube and the Baltimore Ravens tell us about betting psychology? What is a good bet? Read on to find out.

What is binary thinking?

Binary thinking involves sorting information into mutually exclusive options, not unlike the way a computer thinks in binary code. Something is either a 1 or a 0 and those are the only two options. There is no grey area.

Many argue that humans instinctively sort information this way, defaulting naturally to a binary method of thinking.

For primitive humans this made sense. The kind of judgements that needed to be made to survive lent themselves well to such a way of thinking, especially when it came to quick decision-making. Decisions such as whether a rustle heard in the bush is a predator or non-predator were life-or-death ones.

The reward offered by spending valuable time weighing up the information available about the sound (whilst a predator could be preparing to strike) is not worth the risk of being eaten. Simply categorising the rustle in the bush as a predator and fleeing makes much more sense from a risk vs reward perspective.

Richard Dawkins calls this desire for straight yes-or-no answers that neatly categorise information “the tyranny of the discontinuous mind”. He suggests that people seek the reassurance of an either-or classification because it’s much easier for the brain to think in binary, as our distant ancestors did, rather than consider the shades of grey between two conclusions.

This kind of binary decision making is perfectly fine for basic snap decision making, but we now live in a world of nuance. Nowhere is this reflected more acutely than in the world of betting.

Binary bias: caffeine and YouTube ratings

How does binary decision making affect the way we process information?

Fisher and Keil set out to ascertain this in a series of studies on what they called “binary bias”. For these studies, participants were given evidence about a topic, before being asked to summarise the evidence and give a rating that best captured their overall impression of the strength of the argument.

Overall they found that: “Across a wide variety of contexts, we show that when summarizing evidence, people exhibit a binary bias: a tendency to impose categorical distinctions on continuous data. Evidence is compressed into discrete bins, and the difference between categories forms the summary judgment.”

In other words, the participants tended to ignore the relative strength of the evidence presented to them, instead sorting it into discrete categories and looking at the sum total of evidence within each category.

This stripped out all the continuous data. As a result, a conclusion with a 25% likelihood in one direction was simply bucketed with all the conclusions that leaned in that direction regardless of their strength. This made the data easier to process for the test subjects but meant that the value of the information diminished.
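
To see what that compression costs, here is a minimal sketch (the likelihood figures are invented for illustration and are not Fisher and Keil’s data) showing how a binned summary can even flip the conclusion reached by averaging the evidence:

```python
# Illustration only: invented likelihoods, not Fisher and Keil's data.
evidence = [0.55, 0.52, 0.60, 0.05, 0.10]  # P(claim is true) per item

# Continuous summary: average the actual strengths.
continuous = sum(evidence) / len(evidence)   # 0.36 -> leans "false"

# Binary summary: bin each item as for/against at 0.5, then count bins.
supports = sum(e > 0.5 for e in evidence)    # 3
opposes = len(evidence) - supports           # 2

# The binned view says 3-2 in favour, even though the average strength
# of the evidence points the other way.
print(f"continuous: {continuous:.2f}, binned: {supports}-{opposes} in favour")
```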

YouTube discovered this whilst trying to refine its rating system for videos. Star ratings proved ineffective, since the vast majority of votes were either one star or five stars.

This was a consequence of binary decision making. If users liked a video they categorised it as a five; if they didn’t, they categorised it as a one. All of the information between these two discrete categories was lost. As a result, YouTube switched to a simpler thumbs up/down system.

Outcome bias

As shown above, humans prefer to sort information into two distinct categories where possible. This is also the case within betting.

To an inexperienced bettor, a good bet is simply one that wins. A bad bet is one that loses. Those two buckets are easy to grasp and make intuitive sense to somebody without a good grasp of the nuances behind betting.

This, however, is completely false. A winning bet can be a terrible bet, whilst the best bet ever placed may have turned out to be a loser. Categorising bets in such a simple way strips away all of the useful information.

This desire to assign a data point to a “good” or “bad” category based on the outcome of an event was shown during the debate around the Baltimore Ravens’ failed two-point conversion attempt in the 2019 NFL season.

Mathematically, the Ravens’ decision to go for the two-point conversion was the correct one. However, because the attempt failed, some pundits categorised the call into the “bad decision” bucket.

For these pundits, the extra information provided by the analytics behind the play was stripped away by a mixture of outcome bias (a failed attempt must have been caused by a poor decision) and binary bias (the need to place the play into a distinct category). Had the play proved successful, their opinions would, in all likelihood, have been different.

What is a good bet? Thinking like a bettor

In order to get into a successful betting mindset, the bettor must learn to avoid such biases. The grey area between win and lose is what distinguishes a good bet from a bad one.

Bettors work in percentages. If the bettor’s percentages are more accurate than the bookmaker’s, he will win in the long run. But is it even possible to ascertain whether a bettor’s percentages are accurate?
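
As a rough sketch of what “working in percentages” means (the odds and probabilities below are hypothetical), a bet only carries positive expected value when the bettor’s percentage beats the bookmaker’s implied percentage:

```python
# Hypothetical numbers: a bettor's estimate vs a bookmaker's price.
decimal_odds = 2.10               # bookmaker offers 2.10
implied_prob = 1 / decimal_odds   # bookmaker's percentage: ~47.6%
bettor_prob = 0.52                # bettor's own percentage

stake = 100
# EV = P(win) * profit - P(lose) * stake
expected_value = (bettor_prob * (decimal_odds - 1) * stake
                  - (1 - bettor_prob) * stake)
print(f"implied: {implied_prob:.1%}, EV per $100: ${expected_value:+.2f}")
# Positive only while the bettor's 52% beats the implied 47.6% --
# but a handful of results can't confirm the 52% was right.
```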

Without a large sample size it is almost impossible to answer that question definitively.

Take one famous percentage figure as an example. Statistics website FiveThirtyEight gave Donald Trump a 30% chance of winning the 2016 US presidential election. Of course, Trump went on to become president.

The reaction to this prediction from some quarters was to label it “wrong”. Given the binary approach people take to such things, you can see why that is tempting. As Fisher and Keil’s work on binary bias showed, people strip out the strength of the prediction (Trump being given a 30% chance rather than a 0% chance) in order to place it in the “wrong” category they are comfortable with.

But this is obviously nonsense. According to the prediction, Trump should win three times out of ten. The fact that the scenario that played out was one in which Trump won tells us nothing new about the accuracy of the prediction.

The sample size would need to be extended to a meaningful level by running the same election repeatedly (which is, of course, impossible). Only then could we see how close FiveThirtyEight’s 30% figure was to reality.
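
A toy simulation makes the point. This is the impossible experiment in code, with the underdog’s true chance fixed at 30% purely by assumption:

```python
import random

# Re-run the "same" election many times with the underdog's true
# chance fixed at 30% (an assumption, for illustration).
random.seed(42)
runs = 100_000
wins = sum(random.random() < 0.30 for _ in range(runs))
print(f"underdog wins {wins / runs:.1%} of {runs:,} re-runs")  # ~30%
# A single real-world run that happens to land on "win" is perfectly
# consistent with the 30% figure -- and with many other figures too.
```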

Controlling the chaos

This is understandably disconcerting. It goes against our instincts to say that we actually don’t know and may never know whether an individual prediction was a good one.

There have certainly been bets I have placed where I intuitively felt the percentages were in my favour, but outside of a model run across a large sample of similar events, there is no way to definitively say that I was correct.

As bettors we operate in that grey area between the “good” and “bad” bet buckets. To be successful you have to step away from easy classifications and embrace the percentages on an individual bet for what they are: attempts to create a “good” bet, made in the knowledge that we may never truly be able to classify it as such.

Some bettors defer the selection process to tipsters. Unfortunately, it is easy to mistake a good run of luck for a statistically significant predictive ability. This article explains how to judge a tipster’s record in light of survivorship bias.

For example, imagine a tipster who, over five years, had a 100% track record of predicting tennis matches with an even probability of success, making $50,000 profit. Impressive, right? Not if you discovered that the star tipster was, in fact, a monkey.

Let’s say we run a simulation in which 10,000 tennis tipsters (or monkeys, it really doesn’t matter) each have a 50% chance of either making $10,000 a year or losing $10,000 a year. If any tipster has a losing year, they are eliminated.

The tipsters/monkeys make their predictions by simply pushing one of two buttons. After one year, 5,000 of our tipsters would be $10,000 in profit and the same number would be $10,000 in the red and binned. In year two we would have 2,500 monkeys with perfect records, and if we keep going, by year five we would have around 313 monkeys from the original cohort who, through pure luck, had made five successive years of accurate predictions and $50,000 in profit.
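
Here is a short sketch of that thought experiment (the function name and parameters are mine, invented for illustration, not taken from any betting library):

```python
import random

# Sketch of the thought experiment above: each "tipster" has a p_win
# chance of a winning year; a losing year means elimination.
def surviving_tipsters(n_start: int, years: int, p_win: float,
                       seed: int = 0) -> int:
    rng = random.Random(seed)
    survivors = n_start
    for _ in range(years):
        # Each current survivor independently has a winning year or is binned.
        survivors = sum(rng.random() < p_win for _ in range(survivors))
    return survivors

# Roughly 10,000 * 0.5**5 ~= 313 monkeys survive on luck alone.
print(surviving_tipsters(10_000, years=5, p_win=0.5))
```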

Confusing survivors with savants

This phenomenon is called survivorship bias, and it has huge significance in the real world of tipsters, because the successful tipster currently topping the league table on hottips.com may just be a lucky monkey pushing a button.

What are the important factors that influence this process? The size of the original sample is critical. If you focus only on the winners in this process, ignoring all the other monkeys producing gibberish, you’re being fooled by randomness. The simple fact is that, starting from a large enough sample, some participants will end up looking like savants by pure luck.

The other critical factor is the probability of the event. Our example used a fair coin toss (a 50/50 chance of heads or tails), but in the real world a bookmaker will hold an edge. Re-running our test with higher margins produces fewer lucky winners; conversely, the lower the margin, the easier it is to achieve long-term success.
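
Reusing the hypothetical surviving_tipsters() function from the sketch above, lowering the yearly win probability as a crude stand-in for the bookmaker’s margin thins out the lucky survivors quickly:

```python
# Yearly win probabilities below 50% stand in (crudely) for the margin.
# The specific values are illustrative only.
for p_win in (0.50, 0.45, 0.40):
    n = surviving_tipsters(10_000, years=5, p_win=p_win)
    print(f"p={p_win}: {n} lucky survivors")   # ~313, ~185, ~102
# Closed form: expected survivors = 10,000 * p_win**5.
```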

On a very basic level, a good judge of a tipster would be whether they use Pinnacle. Our odds are proven to be the best, so if they don’t use us, they clearly don’t know their stuff.

A clever illustration of survivorship bias

There are plenty of great examples of survivorship bias, but one particularly clever stunt by the famous English illusionist Derren Brown, in a 2008 programme called ‘The System’, illustrates just how deceiving it can be.

The show was based around the idea that a system could be developed to ‘guarantee a winner’ of horse races, a claim regular bettors will be accustomed to. It followed Khadisha, to whom Brown anonymously sent five correct horse race predictions in a row. There was no trickery at work either: the predictions were fair and accurate, and the programme built towards a climax focusing on a sixth and final prediction where, her confidence boosted by Brown’s 100% tipping record, Khadisha invested $4,000 of her own money... and lost.

Of course there was no system; Khadisha was simply the product of survivorship bias.

Brown had actually started by contacting 7,776 people (a sufficient sample size) and split them into six groups, giving each group a different horse from a six-horse race. Note that the number of possible outcomes per event is just as important as the number of predictions in how quickly the original sample shrinks.

After each race, five sixths of the participants had lost and were dropped from the system (like our failing monkeys), and the survivors were split into six groups again, each randomly sent a different selection. Khadisha happened to be the ultimate survivor, winning five times in a row.
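
The arithmetic is worth spelling out; 7,776 is exactly 6^5, so the sample shrinks to a single perfect record in five races:

```python
# 7,776 = 6**5 people, split six ways per race: exactly one person
# is guaranteed to see five straight winners.
people = 7_776
for race in range(1, 6):
    people //= 6   # only the group given the winning horse survives
    print(f"after race {race}: {people} perfect records")
# after race 5: 1 perfect record -- Khadisha
```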

The fundamental lesson for sports betting is that anyone can hit a lucky run, and the more improbable something is, the bigger the role luck has played. If your typewriting monkey produces the Complete Works of Shakespeare from a sample of several billion, don’t get too excited. If he repeats the feat, however, take a closer look.

A simple formula to evaluate a tipster’s abilities

A simple way to evaluate a tipster’s true abilities is to take the square root of the total number of selections and add that number to one half of the total plays made:

√(No. Selections) + ½ (Total Plays Made)

For example, if a tipster has 400 tips, the square root would be 20, which, added to one half of 400, gives a total of 220 theoretical wins.

If the tipster is 20 selections above 200, he is two standard deviations above average: for a 50% handicapper, the standard deviation of wins over N selections is √N/2, so √N above the mean is two standard deviations. There’s about a 1 in 40 chance of a 50% handicapper doing that. So a player with 400 selections would need to go 220-180, or 60-40 with 100 selections, to be this rare.
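
As a quick sketch (the function name is mine, for illustration), the threshold is easy to compute for any number of selections:

```python
from math import sqrt

# The rule of thumb above: a 50% handicapper's wins over n selections
# have standard deviation sqrt(n)/2, so n/2 + sqrt(n) sits two
# standard deviations above average.
def two_sd_win_threshold(n_selections: int) -> float:
    return n_selections / 2 + sqrt(n_selections)

print(two_sd_win_threshold(400))   # 220.0 -> needs a 220-180 record
print(two_sd_win_threshold(100))   # 60.0  -> needs a 60-40 record
```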

Without being a master statistician, you can quickly see that the more selections you can view, the easier it is to evaluate a tipster. In many cases it is safer to follow someone with a lower winning percentage if they have a lot more plays.