


That sounds like a great book. I've often wondered why some teams make the decisions they make.

For instance, why would Atlanta give up two first round picks and a very good prospect (Diaw) for Joe Johnson, a slightly better than average talent? I'm surprised Atlanta hasn't fired that GM yet. I'm also surprised the league allowed the deal to be consummated.

I've played fantasy football for years. In all the leagues I've played in, there is a rule that states that any trades deemed stupid can be disallowed. There doesn't have to be any proof of collusion between owners; there just has to be an assessment (by commissioner or vote of owners) that a particular trade is stupid.

For instance, you can't trade Peyton Manning and Clinton Portis for Fred Taylor and Drew Bledsoe. You simply can't. And if you try, it will be vetoed. Stern should have vetoed the Atlanta/Phoenix trade last year.

There has to be a process put in place to prevent teams from doing anything so stupid it undermines the integrity of the league. If the NBA can't allow the commish to veto trades, it should at least mandate every team owner and GM read this book. Anything that prevents teams from giving up two lottery picks for an overweight, underachieving center with a heart problem is well worth it.

Oh, and ban Kevin McHale and Isiah Thomas and Atlanta's GM from ever running another NBA team. That would help, too.


This sounds very similar to the WinVal that Wayne Winston and Jeff Sagarin created. Winston and Sagarin may not have a book about this, but they've been putting it to work in the real world with the Dallas Mavericks. So who came first?

A quick google for "Wayne Winston" shows this article from the Washington Times:



I would echo the previous post: what I find strange in both the review and the comments is the sense that this is revolutionary work.

There has been something of an explosion of rational analysis in the NBA in the past few years, in the wake of the Moneyball era in baseball.

Many teams have added rational analysts to their payrolls, most notably the Rockets, with the hiring of Daryl Morey as their GM-in-waiting.

Many others have done similar work in developing a comprehensive basketball metric in recent years, most notably ESPN.com's John Hollinger, who has had his Player Efficiency Rating (PER) for several years in his Pro Basketball Forecast/Prospectus books.

After reading the Wages of Wins blog, I don't get the sense that these cats actually temper their algorithm-generated opinions based on observations of games. Basketball is too interdependent a game -- its numbers will never tell the story as definitively as baseball's, with its discrete events.

To me, the ideal player evaluator is at a happy medium between understanding gestalt and understanding algorithm.

All that said, I was thrilled to see an NBA story of any kind in the New Yorker. In my dreams, Malcolm would write a Roger Angell-style recap at the end of each season.


Does the algorithm take into account having Isiah Thomas as a general manager?


I haven't read the book, but my suspicion from the results is that the analysis does not take into account what alternatives are available when punishing players for turnovers and poor shooting percentage.

It's not good if Allen Iverson shoots 40%, but if the alternative is giving it to a guy who would shoot 35% if he got those same shots, then it's not so bad.
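The arithmetic behind that trade-off is easy to make explicit. A minimal sketch, where the 40% and 35% figures come from the comparison above and the 20-attempt volume is an assumption for illustration:

```python
def expected_points(attempts, fg_pct, points_per_make=2):
    """Expected points produced over a given number of two-point attempts."""
    return attempts * fg_pct * points_per_make

# A 40% shooter versus the 35% alternative, given the same 20 shots.
primary = expected_points(20, 0.40)      # about 16 points
alternative = expected_points(20, 0.35)  # about 14 points

# The "inefficient" 40% shooter still nets roughly 2 extra points
# per 20 attempts over the best available alternative.
print(primary - alternative)
```

The point is that a raw shooting percentage is only meaningful relative to the team's next-best option for those same attempts.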

Kevin Pelton

Kaan, beyond both being all-encompassing ratings, WinVal and Berri's methods have very little in common.

WinVal looks exclusively at plus-minus data, adjusted for the quality of a player's teammates and the opposing lineups he faced.

Berri's method strictly makes use of individual statistics, giving them different weights than Hollinger's PER or the NBA Efficiency system.

They are two very different ways of looking at the same issue.
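The adjustment Pelton describes can be posed as a regression: every stint with fixed lineups becomes one row, and solving for per-player coefficients separates a player's impact from his teammates' and opponents'. A minimal sketch with entirely invented players and margins (WinVal's actual formulation is more elaborate and not public):

```python
import numpy as np

# Each stint: (own players on the floor, opposing players, point margin).
players = ["A", "B", "C", "D", "E", "F"]
stints = [
    ({"A", "B"}, {"C", "D"}, +4.0),
    ({"A", "E"}, {"C", "F"}, -2.0),
    ({"B", "E"}, {"D", "F"}, +1.0),
    ({"A", "B"}, {"D", "F"}, +3.0),
]

# Design matrix: +1 if a player is on the rated side, -1 if opposing.
X = np.zeros((len(stints), len(players)))
y = np.array([margin for _, _, margin in stints])
for row, (own, opp, _) in enumerate(stints):
    for p in own:
        X[row, players.index(p)] = 1.0
    for p in opp:
        X[row, players.index(p)] = -1.0

# A small ridge penalty keeps the solution stable when lineups are
# collinear, a well-known headache for adjusted plus-minus models.
lam = 0.1
ratings = np.linalg.solve(X.T @ X + lam * np.eye(len(players)), X.T @ y)
print(dict(zip(players, np.round(ratings, 2))))
```

Note how this approach needs no box-score stats at all, which is exactly why it is so different from a weighted-box-score system like Berri's.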


I can see that Steve Sailer is totally giddy about managing to find a way to give you yet another backhanded compliment. He might need a new pair of pants.

ProTrade ( http://www.protrade.com/ ) is founded on finding an effective way of valuing an athlete's contributions towards his team's success. I must point out that I worked there for a short amount of time, but I was a fan before I became an employee (and still am after we parted ways).


Malcolm, I beg you to reconsider your statement regarding Bill Simmons as the best sportswriter. I would say he used to be the best sportswriter, but in my opinion rehashing the same 80's movie or pop culture lines again and again gets boring. Bill Simmons needs to come up with new material, unless his target demographic is the lowest common denominator.


One of the hardest things about assessing basketball players is disentangling the effects of the different players on the floor. If a player gets an offensive rebound and the putback, how much should we credit the guy who drove to the basket, broke down the defense, and missed the shot, but put the defense out of position for the rebound? When someone scores, how much credit should he get, how much should go to the guy who got him the ball, how much to the guy who set the screen, how much to the perimeter shooter whose defender was hesitant to come over and help, etc., etc.? When the shot clock winds down and someone who can create his own shot is forced to put up a tough shot, how do you spread blame among the teammates who were unable to create a good shot during that possession?

I haven't read the book, but from Gladwell's piece and post I don't see any evidence that the Win Score is successful at disentangling teammates' contributions. The evidence that they have for the accuracy of the Win Score measure is that, if you add up the Win Scores of all of the players on a team, then the total is close to the team's actual number of wins. But that doesn't tell you if the right player got credit for the things that helped the team win or if the right player got the blame for the team's flaws. If Steve Nash makes the Suns offense a lot better, but a big part of the impact of what he does on the floor shows up in Shawn Marion's scoring statistics instead of in his own offensive stats, then those Win Shares are still going to show up in the Phoenix players' total, even though the algorithm erroneously credited them to Marion instead of to Nash. Similarly, we really don't know if Garnett does such an enormous amount to help his team win while being surrounded by a sorry bunch of teammates, as the Win Shares algorithm suggests, or if he just has a style of play that makes his own statistics look better while his teammates' stats look bad, in accordance with the conventional wisdom that he doesn't make his teammates better. If Dennis Rodman's lack of offense is making it harder for Jordan and Pippen to get good shots, then Rodman's detrimental effect on the Bulls offense is going to show up in part in Jordan's and Pippen's Win Shares (and if the threat that Rodman poses on the offensive glass is making it harder for teams to defend the rest of the Bulls, then that positive effect on the Bulls offense is going to show up in part in the other players' Win Shares).

In other words, it's not clear how reliable the Win Shares statistic is at the kinds of interesting uses that Gladwell wants to put it to - assessing potential MVPs, comparing teammates, identifying overrated and underrated players, and so on. I think that it's necessary to keep track of plus/minus data, like WinVal does, in order to disentangle teammates.

Mike Bennett

I will have to check out this book, with reservations. I'm a big fan of John Hollinger's work and will definitely compare the two. One commenter noted that one of the authors published a paper revealing Dennis Rodman to be much more valuable than Michael Jordan on one of the championship Bulls teams. This seems to be indicative of a major flaw in the system.

Based on the list of underrated players, it appears that the algorithm may give too much weight to players who have one or two dominant skills, while the rest are average. I'm a Chicago Bulls fan, and while Tyson Chandler is a quality rebounder and shot blocker, it's quite possible that if he and Ben Wallace played a game of 1-on-1 to 11, it would take 5 hours to complete. Likewise, Chris Duhon's major skills are a good number of assists per minute played and a low turnover rate. He can't create his own shot and has an abysmal mid-range jumper, allowing his defenders to sag off him and clog up the Bulls' offense. He is an average player at best. And some of the other players on that list are three-point specialists.

The most striking name on the overrated list is Carmelo Anthony. Not that he's one of the 10 best players in the NBA, but a lot of his value is in his ability to get to the line. Indeed, that's also one of the things that makes Allen Iverson more valuable than his low shooting percentage indicates (that, and his low turnover rate).

Based on this limited information, I'm not so sure these algorithms firmly correlate with what wins and loses basketball games.

chris m

According to the Wages of Wins website, Dwyane Wade produced 18.2 wins and Shaquille O'Neal produced 8.5 wins. Even adjusting for the fact that Wade played more, their system says Wade was a better player this year.

Miami was 52-30
w/o Wade Miami was 4-1
w/o O'Neal Miami was 10-13

Anybody with any basketball knowledge knows that O'Neal is Miami's most important player.

The Wages of Wins system may be able to accurately reflect a team's total wins, but that doesn't mean it is dividing those wins among the players correctly. That's where common sense comes in.

Take Jerome Williams. Are we supposed to believe that nobody in the NBA can tell the difference between a journeyman and the best of his generation? Moneyball mainly dealt with the difficulty of evaluating unproven amateurs. It didn't show that people in baseball couldn't recognize the difference between an average player and a great player.

If someone comes up with a system that shows a player like Jerome Williams is great, maybe he should use some common sense and reevaluate his system, instead of assuming he is a genius and that people in the NBA don't have a clue.


Read the article last night and was happy to see this blog topic this morning. I'm always astounded when this kind of analysis is questioned in sports ... when it is already applied to much more complex risk and optimization scenarios such as massive JIT system management and global oil & gas portfolio management.

The risk/optimization analysis done in these two examples (JIT and o&g) includes a mind-numbing range of factors -- genetic algorithms to the rescue :-). In the o&g portfolio scenario, these consideration factors can be as diverse as: political regime instability; personal favourite areas of the world for Bubba the CEO; exploration risk factors; environmental compliance requirements; revenue promises to shareholders; track records of senior geotechnical analysts; long-term production requirements, etc. etc. etc.

In comparison, accounting for b-baller 3-point skills and turnover rates seems manageable.

That said ... Malcolm, what would a hard-core "Blink-er" say about this topic?

chris m


w/o Wade Miami 4-1
w/o O'Neal Miami 10-11
w/o both Miami 0-2


I see two major problems with Wages of Wins:

1) On every team, somebody has to take the shots. Not every player can do the "little things", and that player who takes the shots shouldn't necessarily be punished for it.

and more importantly,

2) This analysis is handcuffed by the current statistics available. What would be truly revolutionary would be to watch game film and find stats based on effective vs. ineffective behaviors.

For instance, I'd like to see stats for 1-on-1 defense to see what percentage of the time a defender forces his man to pass or lets him drive by him or creates a steal or block, etc. There are tons of possible stats out there that aren't counted that actually could discern who's under- and overrated.

chris m

Also, the formula which is the basis of their system, Win Score, is not that unique:

Points + Rebounds + Steals + ½Assists + ½Blocked Shots - Field Goal Attempts - Turnovers - ½Free Throw Attempts - ½Personal Fouls

About 20 years ago Dave Heeren came out with his Basketball Abstract. The formula which is the basis of his system, Tendex, is:

Points + Rebounds + Steals + Assists + Blocked Shots - Missed FGA - Turnovers - Missed FTA

The differences in the two formulas result in Iverson's 2000-01 season being in the top 15 with Tendex versus 91st with Win Score.
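The difference is easy to see in code. Below, both formulas as given above, applied to a single hypothetical box-score line for a high-volume scorer (the stat line is invented for illustration, not Iverson's actual numbers):

```python
def win_score(pts, reb, stl, ast, blk, fga, tov, fta, pf):
    """Win Score: charges a player for every field goal attempt."""
    return pts + reb + stl + 0.5 * ast + 0.5 * blk - fga - tov - 0.5 * fta - 0.5 * pf

def tendex(pts, reb, stl, ast, blk, missed_fga, tov, missed_fta):
    """Tendex: charges a player only for the attempts he misses."""
    return pts + reb + stl + ast + blk - missed_fga - tov - missed_fta

# Hypothetical line: 31 points on 25 FGA (10 makes) and 13 FTA (11 makes),
# 4 rebounds, 5 assists, 2 steals, 0 blocks, 3 turnovers, 2 fouls.
ws = win_score(31, 4, 2, 5, 0, 25, 3, 13, 2)      # 4.0
td = tendex(31, 4, 2, 5, 0, 25 - 10, 3, 13 - 11)  # 22
print(ws, td)
```

Tendex rewards the same line far more generously because it docks only misses, while Win Score docks every attempt, which is why a high-usage scorer can rank so differently under the two systems.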


What the stats don't take into account is the type of system each team utilizes. Would Steve Nash have been voted MVP two years in a row if he played in Detroit? He's a finesse, offensive-minded player, not a hard-nosed, defense-first player.


I want to make three observations about the statistical efforts to measure a basketball player's contributions to a team's wins.

One, all shots are treated equally. It is easy to separate 2- and 3-pointers. However, a player should convert shots taken 2 feet from the basket at a higher rate than shots taken 12 feet from the basket. Yet none of these analyses take this crucial fact into account.

Two, while the NBA has become a hot sampling domain, the Wall Street Journal recently published an article about the so-called Moneyball GMs in the NBA. In short, these GMs fare worse than their cohorts in MLB because NBA analysis cannot determine where the action happened on the floor, unlike in baseball, where the third baseman is in the same position for every game. A guard may be on the weak side for one possession and the strong side for the next.

Third, and finally, these analyses cannot measure the unobservable. These analyses are formative in nature and thus cannot account for intangibles. Why is Michael Jordan the greatest ball player? Because of the intangibles. But the so-called Moneyball analysis cannot tell you why Jordan is greater than Magic Johnson. Instead of relying on formative measurement, the only way to make a significant and meaningful contribution is to use reflective measurement. Thus, the analysis could factor in the intangibles and would solve many of the problems that other posters have raised with these analytical efforts.



If you would ask for help from the readers of your blog who know more about a topic than you do _before_ you published articles in The New Yorker, you wouldn't have to later do all this embarrassing backtracking here on your blog. And the people who pay a lot of money to subscribe to The New Yorker would be getting the straight scoop from you, not a misleading, low-quality effort like your Wages of Wins review.


I believe that this emerging field of applying statistical analysis to areas that were previously seen as off limits is beneficial both intellectually and for its targeted arena, baseball for example. However, what may be equally important is to recognize where these metrics fall short; recognition of these failures is what separates scientific method from opportunistic academics.


It may be true that WinScore fails to be completely accurate, but it sounds like an improvement over existing systems.

It reminds me of ABC (activity-based costing). This was a cost accounting system created to replace the old ways of internal accounting for profit and loss. The old systems misallocated overhead in such a way that companies, attempting to improve their efficiency or profitability, could end up bankrupting themselves.

This was because the cost of an actual good or service sold was being grossly miscalculated. The problem is that there is no way anyone can say exactly how much it costs to produce a product if a company produces more than one type. Allocating overhead costs and fixed costs is tricky. But the ABC system, while not achieving perfection in cost accounting, greatly improved it--and made it possible for a company to really understand its actual profits (or losses) on every product sold.

WinScore likewise is not perfect, but if it performs as its authors claim, it's a great improvement over the tools currently used.


I'll have to read the book so I can find out what's really going on with the calculation. But it seems that the economists are trying to use a linear model for an inherently non-linear system: you have a Win Score, which is a linear combination of a player's stats, and then the team's wins, which are a linear combination of all the players' Win Scores.

There's no way that a linear model can account for the influence of players on one another. An implicit assumption is that each player's contribution would be the same no matter who his team-mates are; that comes from adding up each player's Win Scores individually.

I don't buy that. I think a team sport is inherently nonlinear, or synergistic, as it would be said in the business world. That is, the contribution of a team of players is not necessarily the sum of the contributions of each player individually. You can compute all the correlation coefficients and regression terms you want, but a linear model can only describe a non-linear system over a limited range, if at all.

They've chosen 9 terms (points, rebounds, turnovers, etc.), and by fiddling with the coefficients that are used to combine these I'm sure they can get one season's worth of player stats to add up to the wins of their respective teams. Can those same coefficients account for all seasons?
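The coefficient-fitting worry is easy to illustrate. A sketch with entirely synthetic data, in which a hidden linear rule generates the wins, so a least-squares fit is guaranteed to look good in-sample; real box scores carry no such guarantee:

```python
import numpy as np

rng = np.random.default_rng(0)

# 30 teams x 9 aggregated box-score categories (synthetic stand-ins).
team_stats = rng.normal(size=(30, 9))

# Wins generated from a hidden linear rule plus a little noise.
hidden = rng.normal(size=9)
wins = team_stats @ hidden + rng.normal(scale=0.5, size=30)

# Fit the 9 coefficients by ordinary least squares on this one "season".
coefs, *_ = np.linalg.lstsq(team_stats, wins, rcond=None)
predicted = team_stats @ coefs

# In-sample variance explained: inevitably high here, by construction.
r2 = 1 - np.sum((wins - predicted) ** 2) / np.sum((wins - wins.mean()) ** 2)
print(round(r2, 3))
```

A high in-sample fit like this says nothing by itself about whether the coefficients carry over to another season, which is exactly the question posed above.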

chris m

Gladwell writes:

They are making a more sophisticated—and limited—claim: for those aspects of basketball performance that are quantifiable (steals, turnovers, rebounds, shots made and missed, free throws etc) are the existing statistical measures we use to rate players any good?

I don't know what the book says on this subject, but Gladwell doesn't focus on why this system is any better than any of the numerous other statistical systems in existence.

Some of which, including Win Score, are reviewed here:


Kevin Pelton

"Two, while the NBA has become a hot sampling domain, the Wall Street Journal recently published an article about the so-called Moneyball GMs in the NBA. In short, these GM fare worse than their cohorts in MLB ..."

I'm sorry, Michael, but this simply isn't accurate. There are no true "Moneyball" GMs in the NBA, though there are certainly various front offices which make more or less use of statistical analysis than their colleagues. Daryl Morey will be the first GM with the background "Moneyball" implies, as loaded a term as that is, when he takes over the Rockets.

The Journal's piece focused on teams including Morey's former Celtics franchise that employ statistical analysts, but conveniently ignored that Dallas, Phoenix and San Antonio are three of the most successful and stat-aware franchises in the NBA.

As for your explanation of the shortcomings of NBA analysis ... have you looked into plus-minus analysis?

Matt G

As thm noted, it's no great accomplishment to fiddle with coefficients until a season of stats matches up with that season's wins. Furthermore, Blar and chris m have pointed out that this method may be allocating a team's wins to the wrong players. What would be much more impressive would be to see if WinScores can predict next season's win totals, particularly for teams who have made several trades and now have new players playing together. Sure, injuries and aging and off-season player improvement will make it difficult to predict next season based on past statistics, but predictive ability is a huge part of the value of these statistical techniques. Like any theory, this should be evaluated on how well it can predict what will happen, not how much we like it.
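The out-of-sample test proposed here is straightforward to set up. A sketch with synthetic data, assuming (generously) that one hidden linear rule governs both seasons; on real data, roster turnover and aging would widen the second-season error further:

```python
import numpy as np

rng = np.random.default_rng(1)
hidden = rng.normal(size=9)  # the rule shared by both synthetic seasons

def season(coefs, n_teams=30, noise=2.0):
    """One synthetic 'season': team stat totals and resulting wins."""
    stats = rng.normal(size=(n_teams, coefs.size))
    wins = stats @ coefs + rng.normal(scale=noise, size=n_teams)
    return stats, wins

s1_stats, s1_wins = season(hidden)
s2_stats, s2_wins = season(hidden)

# Fit the coefficients on season one only...
fit, *_ = np.linalg.lstsq(s1_stats, s1_wins, rcond=None)

def rmse(actual, predicted):
    """Root-mean-square prediction error in wins."""
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# ...then score both seasons. The held-out season is the honest test;
# in-sample error typically flatters the model.
print(rmse(s1_wins, s1_stats @ fit), rmse(s2_wins, s2_stats @ fit))
```

Whether Win Score's fitted weights survive this kind of test on real seasons is exactly the open question.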


Matt and thm,

I don't think that data-fitting is much of a problem here. They ran the model for ten years and got an average error of 2.3 wins per team per season, which is not all that much more than the 1.67 that they got when they just did one season: http://dberri.wordpress.com/2006/05/28/malcolm-gladwell-speaks-again/

With one season, their model explained 95% of the variance in the thirty teams' wins with just a handful of variables, and you can't do that just by fiddling with coefficients.

