World Cup stats fever - have you got the balls to win?
How to model your way to a result
But all this bears little resemblance to its modelling at the last World Cup. “We've a much greater idea of what is relevant, what works over a period of time,” says product director Andrew Dagnall.
Then there's some special sauce in its algorithms, backed up by historical data going back to 1970. The company's strong suit is providing very fast analysis when team selection is announced, and in-play analysis (which is where the big growth in betting markets is). Bettorlogic runs four 2.6GHz quad-core processor systems, scouring for new data every 30 seconds and crunching away to give in-play updates every five minutes, looking at 40 divisions around the world. The setup is C#, the .NET Framework, and SQL Server 2008.
“We take a feed of a line-up when it's known, producing a performance factor based on the starting 11, anyone else in the squad, in terms of the teams they're about to play,” says Falconer. “There are some other products which are not entirely dissimilar, but they look at much more granular data such as yards covered and shots on target. We're much more about what is the relationship between a player and the performance of a team.
“It's much more relevant from a betting and a manager's perspective.”
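Bettorlogic doesn't publish its algorithm, but the kind of player-to-team relationship Falconer describes can be sketched: for each player, compare how the team fares with him in the starting 11 versus without, then aggregate over the announced line-up. This is a minimal illustration under those assumptions, with invented names and results, not the company's actual method:

```python
# Illustrative sketch of a player/team "performance factor" -- not
# Bettorlogic's algorithm. Idea: points-per-game with a player
# starting, minus points-per-game without him, aggregated over a
# line-up. All data is made up.

def player_effect(results, player):
    """Points-per-game difference with vs without the player starting.

    `results` is a list of (starting_eleven, points) tuples, where
    points is 3 for a win, 1 for a draw, 0 for a loss.
    """
    with_p = [pts for eleven, pts in results if player in eleven]
    without_p = [pts for eleven, pts in results if player not in eleven]
    if not with_p or not without_p:
        return 0.0  # no basis for comparison
    return sum(with_p) / len(with_p) - sum(without_p) / len(without_p)

def lineup_factor(results, starting_eleven):
    """Average per-player effect across an announced line-up."""
    return sum(player_effect(results, p) for p in starting_eleven) / len(starting_eleven)

# Hypothetical history: four past games with line-ups and points won.
history = [
    ({"Smith", "Jones", "Brown"}, 3),
    ({"Smith", "Brown"}, 1),
    ({"Jones", "Brown"}, 3),
    ({"Brown"}, 0),
]
print(player_effect(history, "Smith"))           # with: (3+1)/2, without: (3+0)/2
print(lineup_factor(history, {"Smith", "Jones"}))
```

A real system would condition on opposition quality and venue rather than pooling every match, but the with/without comparison is the core of the idea.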
Falconer believes the work his business is doing in identifying performance patterns is pioneering.
“In-play, we can analyse any one of 40 divisions worth of games, whatever their frequency, every five minutes. Whenever a goal is scored, we can look historically at how that event/decision has played out in the past. No-one else is anywhere near that.”
A typical scenario: if Arsenal are 1-0 up at home after 30 minutes against a bottom-five team, what has the goal count historically been, and by what margin do they typically win? “We're constantly updating that throughout the game,” says Falconer.
The modelling throws up interesting stats. Dagnall points out that, “When John Terry and Ricardo Carvalho play together in Chelsea's back four [they're defenders], the team score more goals.”
So, you scream, is all this information usable? Predictive modelling using sports and gambling data certainly has had its successes, starting with Blaise Pascal and Pierre de Fermat's contributions to the theory of probability in the 1650s. More up to date was the success of the Oakland A's baseball team in 2002, which used statistical analysis, and different criteria to those historically used to measure performance, to find players undervalued by the market. From the late 80s the 'Hong Kong Syndicate' pioneered the use of modelling to exploit betting on the insular Hong Kong horse racing scene.
And in the measure that counts, Bettorlogic's history shows between a 12 per cent and 16 per cent return on betting investment, at level stakes, if its customers follow its recommendations over time. But 'over time' is the key, and unquantifiable, point – like the stock market, this isn't going to be a smooth ride of consistent growth. And there are obvious recent examples of modelling fuck-ups in that arena.
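For the avoidance of doubt, "level stakes" means betting the same amount every time, so return on investment is simply net profit over total staked. A quick worked example of what a return in that range looks like, with invented numbers:

```python
# What a ~12 per cent return at level stakes means: same stake on
# every recommendation, ROI = net profit / total staked.
# All figures invented for illustration.

stake = 10.0
bets = 100
# Suppose 45 bets win at decimal odds of 2.5 and the other 55 lose.
returns = 45 * stake * 2.5          # money back from the winners
outlay = bets * stake               # total amount staked
roi = (returns - outlay) / outlay
print(roi)                          # 0.125, i.e. a 12.5 per cent return
```

The point about volatility stands: that 12.5 per cent is an average over the whole run, and any given month could easily be deep in the red.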
But here's an interesting stat. Many bookmakers use part of Bettorlogic's full service as a stimulus for their customers to bet. In a four-month test using Betfair customers, 5,000 got access to the product, against a control group of the same size and profile, which didn't. The difference in betting volume was around 35 per cent, but these “informed” punters didn't win more money.
“They were not systematic enough or organised enough to follow a pattern the number of times to make it work,” says Falconer.
“We're not saying we're soothsayers. We're saying here's this situation modelled historically and reliably. Here's the necessary information delivered systematically on a massive scale, in real time.” ®