Rec’ing on…The Ranking Project (2)
This outing we’re going to look at the quality of a win or loss. (For those of a geeky tendency: to speed things up I exported my relevant queries and I’m processing them "brute force" in Perl. It’s always been true in programming: first get it to run, then make it elegant.)
It’s long been my feeling that, in most cases, simply recording a win as a win is a disservice to the teams involved, as it gives an incomplete picture. Close games reflect evenly matched teams (at least on that day). When strength of schedule is computed, this sort of distinction is important (i.e., are teams scheduling opponents against whom they have a reasonable chance of winning?).
Let me give an extreme example that could factor into a tournament bracket. Let’s say you have two teams that play identical schedules of thirty games with no games against each other. Team A ends up with a record of 15-15, all of its wins by between 3 and 9 points, and all of its losses by one point in overtime. Team B ends up with a record of 19-11, all of its wins by 1 point, all of its losses by fifteen or more. Team A is consistent. Team B has more wins. Team A doesn’t lose easily, taking all those opponents to overtime. Team B either ekes out a win or loses big. Let’s say only one of these teams can have an at-large bid into the tournament…which one gets it?
You, of course, see the problem with having humans try to evaluate this dilemma. So, I propose a game formula that goes something like this for winners:
And this for losers:
(Where:
OT = an overtime game
wq = win-quality value
P = the point difference in a game [always a positive number];
S = the "Sweet Spot", a point value considered to be an expected point difference between teams that are comparably matched.)
The philosophy is this…both teams start with a point value of 1.000, and that is adjusted up or down based on a logarithm of the point ratio. A loga-what? Basically it’s a sliding scale that affects the game value less and less as the game becomes a greater blowout. The reason winners and losers get two formulas is my determination that there should be an ideal point margin for a game, one that is neither "too close" (i.e., largely a function of luck) nor "too one-sided." A game decided by exactly that margin gives the winner a value of 1.1 and the loser a value of 0.9. (For the purposes of these articles, I chose S=6.5, roughly three 2-pt field goals or two 3-pt goals.)
I’d also add that for overtime games the winner’s formula stays in effect, but the loser’s value simply becomes 1.0…an almost-win, because the teams were tied at the end of regulation. I think they merit this reward for playing a team even, and shouldn’t be penalized simply because ties aren’t allowed. (For the examples in these articles, the OT adjustment never comes into play, as overtime information wasn’t in the data I acquired.)
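To make this concrete, here is a minimal Perl sketch of one possible reading of these formulas. The exact expressions are my assumption, inferred only from the description above: start each team at 1.000, adjust by a logarithm scaled so that a margin of exactly S yields 1.1 for the winner and 0.9 for the loser, and treat an overtime loss as 1.0.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $S = 6.5;    # the "Sweet Spot" margin

# Assumed formulas (a sketch, not necessarily the exact ones used here):
#   winner: wq = 1 + 0.1 * log_S(P)
#   loser:  wq = 1 - 0.1 * log_S(P), or exactly 1.0 if the game went to overtime
# At P = S the winner gets 1.1 and the loser 0.9; a 1-point game leaves both
# near 1.0, and the adjustment grows ever more slowly as the margin balloons.
sub win_quality {
    my ($margin, $won, $overtime) = @_;    # $margin is P, always positive
    return 1.0 if !$won && $overtime;      # OT loser: the "almost-win"
    my $adj = 0.1 * log($margin) / log($S);    # 0.1 * log base S of P
    return $won ? 1.0 + $adj : 1.0 - $adj;
}

printf "6-pt win:  %.4f\n", win_quality(6, 1, 0);    # ~1.0957
printf "6-pt loss: %.4f\n", win_quality(6, 0, 0);    # ~0.9043
printf "13-pt win: %.4f\n", win_quality(13, 1, 0);   # ~1.1370
printf "OT loss:   %.4f\n", win_quality(3, 0, 1);    # 1.0000
```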
Well, that’s all great and all. Looks fancy. But what does it do?
Fair question. Below is a table comparing the straight win/loss method usually used versus the quality of win/loss method:
Rank  School            W/L %    School            W-Qual Ave  Pos. Diff
 01   North Carolina    0.9667   North Carolina    1.1475          0
 02   Bowling Green     0.9333   Duke              1.1361         +5
 03   Ohio St.          0.9333   LSU               1.1340         +1
 04   LSU               0.9000   Bowling Green     1.1245         -2
 05   Chattanooga       0.9000   Ohio St.          1.1220         -2
 06   Hartford          0.8966   Connecticut       1.1184         +3
 07   Duke              0.8966   Tennessee         1.1143         +3
 08   Oklahoma          0.8788   Chattanooga       1.1125         -3
 09   Connecticut       0.8788   Maryland          1.1123         +2
 10   Tennessee         0.8750   Hartford          1.1115         -4
 11   Maryland          0.8750   Rutgers           1.1084         +3
 12   Sacred Heart      0.8667   Sacred Heart      1.1025          0
 13   Louisiana Tech    0.8667   Louisiana Tech    1.1013          0
 14   Rutgers           0.8621   Oklahoma          1.0994         -6
 15   Indiana St.       0.8387   Baylor            1.0987         +5
 16   Brigham Young     0.8333   Indiana St.       1.0979         -1
 17   Tulsa             0.8333   DePaul            1.0963         +2
 18   Liberty           0.8333   Utah              1.0941         +6
 19   DePaul            0.8065   Stanford          1.0934        +13
 20   Baylor            0.8000   Tulsa             1.0917         -3
Clearly, if you have a high win percentage, your win-quality average will reflect that. The difference is the weight now given to the margin of victory or loss. Witness the huge jump made by Stanford (from 32 to 19), the six-place jump by Utah, and the five-place jumps by Duke and Baylor. Using a very simple formula, without even considering the strength of opponents, we’ve already made our rankings look a little more like how the NCAA tournament was selected. Not too shabby for a first step…we might just be on the right track.
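For the curious, here is a similarly hedged Perl sketch of how a season "W-Qual Ave" column like the one above could be produced. The input layout (one tab-separated line per team-game: team, margin, won/lost) is hypothetical, and win_quality() is the same assumed formula as in the earlier sketch.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $S = 6.5;

# Same assumed formula as the earlier sketch (no OT handling, since the
# data used for these articles had no overtime information).
sub win_quality {
    my ($margin, $won) = @_;
    my $adj = 0.1 * log($margin) / log($S);
    return $won ? 1.0 + $adj : 1.0 - $adj;
}

# Hypothetical input on STDIN, one line per team-game:
#   team<TAB>margin<TAB>won(1 or 0)
my (%total, %games);
while (my $line = <STDIN>) {
    chomp $line;
    my ($team, $margin, $won) = split /\t/, $line;
    $total{$team} += win_quality($margin, $won);
    $games{$team}++;
}

# Rank by season-average win quality, highest first.
my @ranked = sort { $total{$b} / $games{$b} <=> $total{$a} / $games{$a} } keys %total;
my $rank = 0;
for my $team (@ranked) {
    printf "%02d  %-16s  %.4f\n", ++$rank, $team, $total{$team} / $games{$team};
}
```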
OK. So now we have a foundation on which everything else can be calculated. Next time we’ll start looking at the quality of opponents.