FAQ and Forum on Advanced Stats


witesoxfan


QUOTE (LittleHurt05 @ Jan 16, 2014 -> 10:49 AM)
So a catcher essentially has a 2.5 WAR head start over a 1st baseman? (Assuming 162 games, which obviously catchers don't play)

 

That's true in a sense, but it's the wrong way to look at it. The positional adjustments create a common denominator so that you have a quantitative way to judge player value across separate roles. The idea is that when a team makes an acquisition, it is replacing an incumbent player, whether that's a current major leaguer or a "replacement"-level player available in AAA. Any offensive value added needs to be compared to the alternative: since C is a much more demanding defensive position than 1B, the pool of players who can play C effectively is smaller. Since the pool is smaller, average offensive performance at the position is lower. Therefore, a given stat line is more valuable coming from a C than from a 1B, because the gap between that production and what else is available is much larger.

 

If you could have a player that hit .300/.400/.500, but had to choose if he was a C or a 1B, you would choose C because you will presumably have to choose a lesser player to fill the other position, and a replacement level 1B hits much better than a replacement level C. So your overall production is higher with the good hitting C, thus the positional adjustment in value.
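The "head start" from the quoted question can be checked directly against the standard Fangraphs per-162-game positional adjustments. A minimal sketch (the run values below are the commonly published ones, and the 10-runs-per-win conversion is the rough rule of thumb discussed later in this thread):

```python
# Standard positional adjustments, in runs per 162 defensive games.
POSITIONAL_ADJ = {
    "C": +12.5, "SS": +7.5, "2B": +2.5, "3B": +2.5, "CF": +2.5,
    "LF": -7.5, "RF": -7.5, "1B": -12.5, "DH": -17.5,
}

RUNS_PER_WIN = 10.0  # rough runs-to-wins conversion used throughout this thread

def positional_war_gap(pos_a: str, pos_b: str, games: int = 162) -> float:
    """Wins of positional value separating two positions over `games` games."""
    scale = games / 162.0
    run_gap = (POSITIONAL_ADJ[pos_a] - POSITIONAL_ADJ[pos_b]) * scale
    return run_gap / RUNS_PER_WIN

# A catcher and a first baseman with identical bats, over a full season:
print(positional_war_gap("C", "1B"))  # 2.5 -- the "head start" from the quote
```

Over a more realistic 110-game catcher workload, the same function shows the gap shrinking proportionally, which is the caveat in the original question.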

Edited by Eminor3rd


Another way to think about bWAR vs. fWAR:

 

bWAR prioritizes completeness

fWAR prioritizes accuracy

 

fWAR includes only components that "we" are most confident are both precise and accurate. If no one has been able to find correlations strong enough to suggest properly isolated variables for a given subject, fWAR just leaves it out (for example, the effect of defense on pitching performance). In this way, fWAR is saying, "there are some important things going on that we cannot include in this number, but we have most of it, and we are extremely confident that everything we are including is right on the money."

 

bWAR operates under the assumption that if WAR doesn't include everything measurable on the field, it isn't a useful statistic. So it credits pitchers for ERA rather than DIPS numbers, for example. In doing so, it arrives at a value that includes every possible thing a player adds, but it also includes a lot of noise, so it's more likely that the bWAR number assigns positive or negative credit improperly. For example, a pitcher may get extra bWAR that really reflects the benefit of pitching in front of an excellent defense or of a lot of batted-ball luck. This can be problematic because such players are more likely to have outlier seasons that don't predict future performance.

 

The difference between these metrics is most pronounced on the pitching side, as isolating and measuring defense remains a much harder problem than on the offensive side.

 

The truth is, of course, somewhere in the middle. But the midpoint is different for different types of players. Currently, fWAR and bWAR represent the best we can do while erring on either side of the priorities listed above.

Edited by Eminor3rd

I would say that the difference between fWAR and bWAR is that bWAR tries to measure on-field results while fWAR tries to measure on-field talent.

 

As usual, pitchers are the best way to look at this. fWAR rewards a pitcher for doing "pitching things" well -- striking people out, not walking people -- while bWAR rewards a pitcher for earned run prevention, which can be strongly influenced by luck, defensive talent, ballpark, etc.

 

Throughout each measure, you can see how fWAR tries to find what a player added in a vacuum; that is, what would have happened if you took that player's efforts and put them in a different context. There's good discussion to this effect in the UZR primer. Does a player's batting average really tell you what happened? Is it a useful representation of what he contributed to his team? Or is there a better way to do it, with less doubt about luck and about the value of one event versus another? IMO, bWAR is not quite as archaic as batting average, but it certainly is not worried about influences on production that are outside the player's control, like luck.

 

 


QUOTE (robinventura23 @ Jan 16, 2014 -> 04:58 PM)
How does one calculate WAR? I'm sure it's not as simple as calculating AVG or OBP. Is there a formula one uses?

 

1. Assign linear weight values to events. These are calculated by breaking each event down into the average number of runs it leads to, based on base/out states. They are measured in runs: http://www.fangraphs.com/guts.aspx?type=cn

 

2. Add up all the "runs created" and "runs prevented."

 

3. Add or subtract runs based on positional adjustment: http://www.fangraphs.com/library/misc/war/...nal-adjustment/

 

4. Assign one "win" per every ten runs, based on early Pythagorean research showing a strong correlation between team wins and run differential: roughly one extra win for every ten runs a team scores more than it allows.

 

5. Deduct value for replacement level: http://www.fangraphs.com/library/misc/war/replacement-level/
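Those five steps can be sketched end to end. A toy version, with illustrative round numbers rather than the actual published constants (real linear weights live at the Fangraphs guts page linked above and vary by season; the replacement gap here is likewise invented for the example):

```python
# Illustrative linear weights (runs relative to average per event); the real
# values are published at fangraphs.com/guts.aspx and change year to year.
LINEAR_WEIGHTS = {
    "BB": 0.30, "1B": 0.45, "2B": 0.75, "3B": 1.05, "HR": 1.40, "Out": -0.27,
}
RUNS_PER_WIN = 10.0           # step 4: ~10 marginal runs per team win
REPLACEMENT_RUNS_162 = -20.0  # step 5: illustrative full-season replacement gap

def toy_war(events: dict, positional_adj: float = 0.0, games: int = 162) -> float:
    """Steps 1-5: weight events, sum the runs, adjust for position,
    subtract replacement level, convert runs to wins."""
    batting_runs = sum(LINEAR_WEIGHTS[e] * n for e, n in events.items())  # 1 & 2
    runs = batting_runs + positional_adj                                  # 3
    replacement = REPLACEMENT_RUNS_162 * games / 162.0                    # 5
    return (runs - replacement) / RUNS_PER_WIN                            # 4

# A solid everyday regular: lots of singles, 25 HR, 400 outs made.
season = {"BB": 60, "1B": 100, "2B": 30, "3B": 3, "HR": 25, "Out": 400}
print(toy_war(season))  # a bit over 3.5 WAR with these toy numbers
```

The structure is the point, not the constants: every published WAR implementation is some version of weight-sum-adjust-convert-deduct.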


One thing that comes up on a regular basis is "this guy has to produce x to be worth $y." While this is technically true, I don't believe we should look at it like that. What you want is x as low as possible and y as high as possible. Assigning actual dollar values to WAR like that is, to me, inherently wrong. Yes, $6-7 million is the going rate for 1 WAR on the open market, but if you sign someone at that rate, you should still try to get better value, because there are teams paying guys $15 million to do absolutely nothing on bad contracts. Those numbers get so high not because every team is paying $7 million for each individual win, but because good and bad contracts are cancelling each other out and tugging that value one way or the other (usually up).

 

If you are the GM, you want to bring in guys that are good and will help you win. The Rangers are probably going to regret the Choo signing in 5 years, but in the meantime, he is going to help them out tremendously and could even help them win a World Series.

 

So, when talking about Tanaka making $20 million a year, or $120 million over the duration, yes, he will technically need to accrue about 20 WAR over that contract (3.33 WAR per year), but you don't sign him to be a #3-4 starter. You sign him to be a 5-6 WAR starter, an ace.
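The break-even arithmetic in that paragraph, using the thread's $6M-per-win market rate (the contract terms are the rumored ones from the post, not final figures):

```python
def breakeven_war(total_dollars_m: float, years: int,
                  dollars_per_war_m: float = 6.0) -> tuple:
    """WAR a contract must return to break even at the market $/WAR rate."""
    total_war = total_dollars_m / dollars_per_war_m
    return total_war, total_war / years

# Rumored Tanaka terms from the post: $120M total over 6 years.
total, per_year = breakeven_war(120, 6)
print(total)               # 20.0 WAR over the deal
print(round(per_year, 2))  # 3.33 WAR per year
```

Note how sensitive the answer is to the assumed $/WAR rate: at $7M per win, the same deal breaks even at about 17 WAR.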

 


QUOTE (witesoxfan @ Jan 17, 2014 -> 03:25 PM)

 

Whenever I think of market-rate WAR dollars, I use them as a watermark for downside. However, that only really makes sense in a vacuum. Depending on a team's place on the win curve, it could make sense to purchase "additional wins" at market rate, or even substantially above it, to push the team over the edge.


QUOTE (Eminor3rd @ Jan 17, 2014 -> 03:48 PM)

 

Exactly. This is why guys like Granderson and Ethier make no sense for the White Sox. Even if they are 3-win players and you are only paying them $8 million a year (they obviously cost more than that), that is only an improvement from a context-neutral 75 wins to 78 wins. What the hell good does that do? That's why I really didn't like that move for the Mets.

 

However, if you are an 86-win team, and you can bring in a 2 WAR player who will be replacing a -1 WAR player, that's essentially 3 additional wins, and it pushes you to 89 wins, which is almost always a playoff berth.

 

You also see teams getting production at a discount upfront in exchange for an inflated, backloaded contract, paying extra later for a worse product. Then they rinse, lather, and repeat.

 

The OTHER thing about this is the financial side, a topic brought up in Baseball Between the Numbers (which, while dated, is still an excellent read). It basically shows that the difference in monetary value between wins 76 and 80 is next to nothing, but the distance between wins 86 and 88 is very large. So, if you are close, it makes sense to buy, buy, buy. Unfortunately, teams often overestimate themselves, and you get situations like the 2007-08 Mariners. You must be patient and know for certain what your talent level is. The Pirates, more than anybody else in baseball, have figured this out and have been, if anything, overly conservative, and it's paying dividends for them now.
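That win-curve logic can be sketched with a step function. The dollar figures below are pure invention, chosen only to mimic the shape described above (flat in the 70s, steep near the playoff cutoff); they are NOT the estimates from Baseball Between the Numbers:

```python
# Hypothetical marginal value of each win, in $M. Invented numbers that
# only illustrate the shape of the win curve, not a real revenue study.
def marginal_win_value(win_number: int) -> float:
    if win_number < 82:
        return 1.0    # far from contention: an extra win changes little
    elif win_number < 86:
        return 3.0    # fringe contention
    elif win_number <= 92:
        return 10.0   # each win meaningfully changes playoff odds
    else:
        return 2.0    # already in: diminishing returns

def value_of_improvement(start_wins: int, end_wins: int) -> float:
    """Total value of moving a team from start_wins to end_wins."""
    return sum(marginal_win_value(w) for w in range(start_wins + 1, end_wins + 1))

print(value_of_improvement(76, 80))  # 4.0: four wins nobody notices
print(value_of_improvement(86, 88))  # 20.0: two wins at the cutoff
```

Under this toy curve, two wins at the cutoff are worth five times as much as four wins in the basement, which is exactly why a fringe team "buying" at or above market rate can be rational.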


QUOTE (witesoxfan @ Jan 17, 2014 -> 04:01 PM)

 

I'll just second that Baseball Between the Numbers is awesome -- one of my favorite reads of all-time. I recommend it to anyone who is interested in learning more about sabermetrics if only for the incredibly complete primer on the use of linear weights to measure run values, which is an incredibly important concept for today's stats.


Sometimes you sign a guy not for it to be a good deal, but because you want him on your team. Imagine if Mike Trout became a free agent: you wouldn't give him $40M a year because he'll necessarily be "worth" it, but because you're better off with him at $40M than with nothing at $0. Some guys in some situations are just priceless.


QUOTE (Eminor3rd @ Jan 17, 2014 -> 04:26 PM)

 

More than any other resource, that helped me really understand the mechanisms of a sabermetrician's mind.

 

http://books.google.com/books/about/Baseba...id=XvD_jwEACAAJ

 

Strongly, strongly recommended.


QUOTE (Jake @ Jan 17, 2014 -> 04:29 PM)

 

This was Williams' MO while he was GM. He didn't necessarily always care about giving up the proper value for an asset; he wanted the asset because he believed it would put his teams over the top, and he'd overpay to get it. That was seemingly what he did with Garcia, Vazquez, Thome, and several others.


  • 5 weeks later...

An important topic of discussion that I don't think was brought up here as much as it should have been is regression. I briefly mentioned the concept earlier in the thread, but what does it ultimately mean?

 

I think people get confused about the term regression because they assume it means "getting worse." That's not the case at all. Players regress toward their expected means all the time - Adam Dunn is a perfect example over the past two years. In 2011, he had one of the worst seasons of all time, but people didn't worry because they expected him to regress back toward his mean in 2012, and he did just that. His overall numbers were a little worse last year, but I still expect him to be around a .775-.800 OPS overall, and perhaps better than that if he is used exclusively against right-handed pitching.

 

What always confused me when I was initially learning about sabermetrics was this concept. Say Jake Drake is a career .320/.380/.520 hitter in a neutral park - an incredibly good hitter, better than a 150 wRC+. However, through May 31st, Jake is hitting only .280/.340/.460: still good overall, but maybe closer to a 120-130 wRC+ hitter, a good player but not nearly as good. The question remains: what should we expect from him the rest of the way? Should he hit at his career averages of .320/.380/.520 from here on out, or should he hit even better than that, pulling his full-season line back toward some middle ground (say a final line of .300/.360/.490)?

 

And simply, the answer is: both. The point of regression is that there is some central production pattern you've established over the course of your career, and that is what we should expect from you going forward. At the same time, because of the pattern Drake has established, we shouldn't be surprised if he works his way back toward that overall line, so seeing him hit .340/.400/.550 the rest of the way won't be shocking either. The only thing theory dictates we should not expect is for him to keep hitting worse.

 

Now, given other circumstances and the volatility of human nature, it's also perfectly reasonable for him to underperform. That .280/.340/.460 may ultimately be his final line for the season. Depending on other factors - age, development of bad habits, bad luck on the field, or anything else you can think of - it may be that we have a new expected talent level for Jake Drake, or that we still expect him to hit closer to his previous career averages of .320/.380/.520, or perhaps it's somewhere in between at this point. The only sense in which regression suggests getting worse over time is the career-long aging curve, because conventional wisdom dictates that talent declines as a player ages.
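One standard way to formalize "regress toward the expected mean" is a playing-time-weighted blend of what the player has done so far and his established talent level. A minimal sketch - the 600-PA prior weight is an arbitrary illustration, not a published constant:

```python
def regressed_projection(observed: float, observed_pa: int,
                         true_talent: float, prior_pa: int = 600) -> float:
    """Blend an observed rate stat with established talent, weighted by
    playing time: the more PA behind the hot/cold stretch, the less we
    pull it back toward the player's established level."""
    return ((observed * observed_pa + true_talent * prior_pa)
            / (observed_pa + prior_pa))

# Jake Drake through May 31: a 100 wRC+ over 200 PA, but an established
# 125 wRC+ hitter. The projection lands between the two, nearer his talent.
print(regressed_projection(100, 200, 125))  # 118.75
```

This is why "regression" predicts neither a continued slump nor a compensating hot streak: the single most likely rest-of-season outcome sits between the observed line and the career line, weighted by sample size.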

 

I want to see what you have to add, but the ultimate reason I wanted to go over this was to set up what I'd like to talk about next: projection systems, how they come up with their numbers, and why they are NOT meaningless.


QUOTE (witesoxfan @ Feb 17, 2014 -> 04:13 PM)

 

This is a good topic, because getting to the bottom of it underscores the effect of a strong or weak "start" to a season, which is something that we always end up discussing at some point in May.

 

There are two components of this:

 

1. Circumstance. How do cold weather, a death in the family, fatigue, etc. modify the expected performance?

2. Mathematical regression

 

As far as I can tell, number 1 is a very real but essentially unmeasurable factor. We will just have to live with guessing at it.

 

The second point is, I think, what you're getting at. The common fallacy here is expecting extremes to balance one another out. So the answer to your initial question about Jake Drake is the latter: he'll end up somewhere in between. The simple way to look at it is this: if we know that a player's true talent is x, then the most likely output to expect at any given time is x.

 

Jake is a true-talent 125 wRC+ hitter. He starts cold, though, and puts up only a 100 wRC+ over the first couple of months. While it is definitely possible that he will swing back and hit at a 150 wRC+ clip for a while to offset the slump, it is not the most likely outcome. The most likely outcome is that he simply returns to his normal 125 wRC+ self and ends the season at a 115 wRC+ or so. This is the sense in which production is "in the bank."
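The "in the bank" effect is just a weighted average: the cold months are locked in, so even a full return to true talent only pulls the season line partway back. A quick sketch (treating two months as roughly a third of the season):

```python
def season_line(splits) -> float:
    """Playing-time-weighted average of (wRC+, fraction_of_season) chunks."""
    return sum(rate * frac for rate, frac in splits)

# Two cold months at a 100 wRC+ (one third of the season), then four months
# at his true-talent 125: the banked slump drags the full-season number down.
final = season_line([(100, 2 / 6), (125, 4 / 6)])
print(round(final, 1))  # 116.7
```

Playing to true talent the rest of the way still leaves the season total short of 125, which is exactly the gap the poster's "115 or so" describes.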

 

You can apply the same principles to team wins. What is the true cost of a slow start? Team X is a true-talent 90-win team, and it looks like they will need about 90 wins to reach the postseason. Team X starts slow, though, going 10-20 over their first 30 games. From then on, it is reasonable to expect the 90-win team to regress to its true talent, a .555 winning percentage (90-72). Apply that percentage to the remaining 132 games and you end up with an 83-win team instead. So the cost of the hole Team X dug in April is that they now need to play 7 games BETTER than their true talent to reach their playoff goal of 90 wins.
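The team-level version of the same arithmetic, with the numbers straight from the paragraph above:

```python
def projected_wins(wins_so_far: int, games_played: int,
                   true_talent_wins: float, season_games: int = 162) -> float:
    """Banked record plus true-talent pace over the remaining schedule."""
    pace = true_talent_wins / season_games
    return wins_so_far + pace * (season_games - games_played)

# A true-talent 90-win team that starts 10-20:
print(round(projected_wins(10, 30, 90)))  # 83
```

The seven-win gap between 83 and 90 is the "cost" of the slow start: it's the amount of better-than-true-talent play the team now owes.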

 

The fun in being a fan is, of course, hoping that your team gets hot and does something unlikely, defying the odds in your favor. This is always within the realm of possibility, but it's much less likely than it seems - which is why we end up missing the playoffs more often than not, even when the team seems "better on paper" than it has played to that point in the season.


QUOTE (Eminor3rd @ Feb 17, 2014 -> 07:33 PM)
No one has any more questions, wite. Let's argue about something!
Oh, I have many questions, but I'm down to my last baseball message board and would like to stay on this one. I have a long love affair with data going back to early childhood; I've read Bill James since before the Total Baseball days. Still, I have so many problems with sabermetrics I'd hardly know where to begin. If you are good at arithmetic, it is relatively easy to compute BA, FA, ERA, winning percentage, and so on. You can argue about what each stat means or how valid it is, but there is no alchemy to the formula. Not so with WAR and all its variations, which are constantly readjusted according to I'm-just-not-sure-what.

 

It's not going to go away, though. I expect that someday, in the maybe-not-too-distant future, baseball awards will be given on the basis of the latest computation of WAR. I wouldn't even be surprised if the standings are adjusted to be in perfect harmony with Pythagorean wins. The Indians are 2005 world champs.

 

It's not that I'm a hidebound old fart who resists every change in life. I am alive today because of a surgery first attempted in the 1970s and perfected by the time I needed it 14 years ago this month. So no, I'm hardly against change and innovation. I even would have voted for King Felix the year he won his Cy Young Award, and probably would have voted for Mike Trout for MVP this past season. I know it's not good enough, though. Like religion, you're either a believer or a blasphemer. I was going to leave this alone, but I've read and reread the thread, and like I said, I've actually studied this.

QUOTE (SI1020 @ Feb 17, 2014 -> 09:43 PM)

 

Welcome to the discussion! I have no idea what your stance is.


QUOTE (Eminor3rd @ Feb 17, 2014 -> 03:50 PM)

 

I can't disagree with any of this, and at some point, sample sizes make the idea of making up for what are essentially lost numbers either improbable or impossible. What I am getting at is that, in the above scenario with Jake Drake, there are only three options:

 

1) Revert towards career norms for the rest of the season

2) Hit above so that the end result is similar to his career norms

3) Remain below career norms

 

Depending on career tendencies and potential talent shifts (previous injuries, a widened strike zone, better information on the player - again, pretty much anything can affect a player's talent), #1 is what we should expect, but, personally, I would say #2 is likelier to happen than #3. Again, this all depends on any number of factors.

 

If there's anything you want to add, go ahead; otherwise I'll hit on something this afternoon that I wanted to bring up following this discussion, something people here tend to view as meaningless.

 

QUOTE (SI1020 @ Feb 17, 2014 -> 08:43 PM)

 

The idea of sabermetrics is not to create an end-all, be-all. It's primarily there to help us better understand the game and what makes a player good. The concept of WAR is not new or all that crazy; it merely tries to put a numeric value on a player's contributions compared to his peers. We've been doing that for 150 years, but we now have more complete information about what makes a player good than we had in 1930 or 1960 or 1990.

 

The data is still incomplete and flawed, and there are conflicting opinions about how to value the components. On pitching, Baseball-Reference uses what the pitcher actually did on the field, which folds in luck and fielding contributions that may be out of his control, while FanGraphs uses fielding-independent statistics, which count not all of the runs a pitcher gave up but the runs he should have given up in a neutral context. Fielding metrics are flawed to begin with, because it takes roughly three years of data to establish a meaningful sample, and by then the player's fielding talent has likely changed. Neither bWAR nor fWAR is wrong to use, but "junkies" will typically reach for fWAR, because it better represents a player's underlying talent and is a better predictor of future production.
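For the pitching side specifically, the fielding-independent statistic described above is FIP, built only from strikeouts, walks, hit batters, and home runs. The formula below is the standard published one; the league constant changes every season (it is set so that league-average FIP equals league-average ERA), so the 3.10 default here is an assumption for illustration:

```python
def fip(hr: int, bb: int, hbp: int, k: int, ip: float,
        constant: float = 3.10) -> float:
    """Fielding Independent Pitching: an ERA-scaled number built only from
    outcomes the defense cannot touch (HR, BB, HBP, K)."""
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + constant

# A hypothetical high-strikeout starter over 200 IP:
print(round(fip(hr=18, bb=50, hbp=5, k=220, ip=200.0), 2))
```

A pitcher whose ERA sits well above his FIP is the bWAR/fWAR disagreement in miniature: bWAR charges him for the runs, fWAR credits the peripherals.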

 

At the end of the day, games will not be decided by Pythagorean W-L; it will ALWAYS be about the number of games you win on the field. Games aren't played on paper, but the inferences we can draw from the play on the field help us better understand both the value and the significance of those plays. I look back to a game in 2010 between the Padres and the Cubs in which Chris Young was starting. He was credited with the win probability added (WPA) of two plays that look very unimportant in the box score but in reality were incredible plays made by Will Venable to save at least one home run and maybe two. Here is the link to the article. Shouldn't Venable be credited with those plays toward the win? And if a player makes an error in a crucial spot, shouldn't that be deducted from his total win probability added? It's something that, like I said, I'm sure they either have implemented or continue to work toward, but it takes time to work this information out. Just as the rules of sports constantly evolve to adapt to current societal standards, so do the numbers we use to interpret the game we love.
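The bookkeeping behind WPA is conceptually simple: each play's credit is the change in the team's win expectancy, and a player's total is just the sum over his plays. A sketch with invented win-expectancy numbers (real ones come from historical play-by-play tables):

```python
def wpa(win_exp_before: float, win_exp_after: float) -> float:
    """Win Probability Added for one play: the change in the team's
    chance of winning, credited to the player(s) involved."""
    return win_exp_after - win_exp_before

# Three invented plays: a big catch, a small negative play, a go-ahead hit.
# Each tuple is (win expectancy before the play, win expectancy after).
plays = [(0.55, 0.62), (0.62, 0.58), (0.40, 0.71)]
total = sum(wpa(before, after) for before, after in plays)  # about +0.34 wins
```

Because the credit is a simple difference, a home-run-saving catch or a crucial error subtracts or adds automatically; the hard part is deciding how to split one play's credit between, say, the pitcher and the outfielder, which is the open question the post is raising.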


How is a replacement player calculated? This is the biggest mystery to me in the whole WAR discussion. Assuming a replacement player is a AAAA guy, what is the baseline used to determine what that guy's statistical profile should be?

 

If you have a team full of replacement level players, are you predicted to be a .500 or middle of the road team? Is the replacement level figure adjusted daily, weekly, monthly, annually?

 

Does WAR take into account whether you are facing Justin Verlander or Joe Saunders when it determines how you fare in comparison to other players? Does it take environmental or park factors into account? How about strength of league or strength of schedule? If a Cubs player has a 2.2 WAR at SS, does that mean he is viewed as an equal player to a White Sox player with 2.2 WAR? If the same two players had identical stat lines, would their WAR be the same?

 

Negative WAR is a weird concept to me too. You are basically saying that Adam Dunn is worse than Ross Gload because Gload is theoretically a replacement level player, but that just isn't true.


QUOTE (IowaSoxFan @ Feb 20, 2014 -> 01:40 PM)
How is a replacement player calculated? This is the biggest mystery to me in the whole WAR discussion. Assuming a replacement player is a AAAA guy, what is the baseline used to determine what that guy's statistical profile should be?

 

If you have a team full of replacement level players, are you predicted to be a .500 or middle of the road team? Is the replacement level figure adjusted daily, weekly, monthly, annually?

 

Does WAR take into account whether you are facing Justin Verlander or Joe Saunders when it determines how you fare in comparison to other players? Does it take environmental or park factors into account? How about strength of league or strength of schedule? If a Cubs player has a 2.2 WAR at SS, does that mean he is viewed as an equal player to a White Sox player with 2.2 WAR? If the same two players had identical stat lines, would their WAR be the same?

 

Negative WAR is a weird concept to me too. You are basically saying that Adam Dunn is worse than Ross Gload because Gload is theoretically a replacement level player, but that just isn't true.

 

(As a foreword: these are awesome questions and should trigger a good amount of debate.)

 

There is a standard calculation for a replacement level player, but to think of it broadly, consider any player removed from the 25-man roster over the course of the year, assigned a $400,000 cost (meaning, to consider, say, Jeff Keppinger, you assume he is making $400,000, not $4,000,000). Then imagine that you DFA that player and he has to go through the waiver process. If he clears, meaning nobody wants to take him even though he is making merely the league minimum, then he is a replacement level player. Dylan Axelrod is a perfect example of a replacement level player.

 

Here is a link that goes into further detail and explains it a bit better: http://www.fangraphs.com/library/misc/war/replacement-level/

 

As indicated at the bottom of that article, a team full of replacement level players would be expected to win about 48 games. Depending on luck, or on how much better or worse the players considered replacement level actually are, that team could win as few as 42 games or as many as 70. In the past 10 years, the Mariners and Astros have each had pretty bad teams overall win anywhere between 85 and 88 games, and I'm sure there are other examples as well. However, it's best to think of a replacement player as so bad that no team wants him at any cost.
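The 48-win figure implies a concrete baseline winning percentage, which is where per-player WAR budgets come from. Quick arithmetic (the ~1,000 WAR league-wide total is the figure FanGraphs uses; treat everything here as an approximation):

```python
GAMES = 162
REPLACEMENT_WINS = 48                       # full-season baseline cited above

replacement_pct = REPLACEMENT_WINS / GAMES  # about a .296 winning percentage
avg_team_war = 81 - REPLACEMENT_WINS        # an average 81-win team carries ~33 WAR
league_war = 30 * avg_team_war              # ~990, close to the ~1,000 WAR
                                            # FanGraphs spreads across a season
```

So when you see a 6-WAR season, the claim is that swapping that player for a waiver-wire body would knock an average roster from 81 wins down toward 75.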

 

All of these numbers are context neutral, meaning park and league effects are factored in, though I do not believe they factor in how often you are facing someone really good (as far as I'm aware, 0 for 4 against Justin Verlander appears the same in WAR as 0 for 4 against Dylan Axelrod).

 

Negative WAR implies that you were worse than the calculated replacement level over whatever length of time. It doesn't necessarily mean you are a bad or a good player, just that you played poorly. Consider Dunn's 2011: you can reasonably say that almost any player in AAA would have played better defense than him at 1B while also hitting better. But given the choice between Andy Wilkins and Adam Dunn on your roster, you are going to take Dunn every time (except for one poster on here in particular). Adam Dunn is not a replacement level player, because his expected contributions are positive, but he did have a very, very bad year in 2011.
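Put in back-of-envelope numbers, using the common rule of thumb of roughly 10 runs per win (the exact conversion varies by season, and the run totals here are invented for illustration):

```python
def war_from_runs(runs_vs_average: float, replacement_offset: float,
                  runs_per_win: float = 10.0) -> float:
    """Rough WAR: runs above/below average, plus the cushion between an
    average player and a replacement-level one, converted into wins."""
    return (runs_vs_average + replacement_offset) / runs_per_win

# A below-average season can still be worth something...
war_from_runs(-15, 20)   # +0.5 WAR
# ...but a Dunn-in-2011-style collapse dips under the replacement floor:
war_from_runs(-35, 20)   # -1.5 WAR
```

The replacement offset is why "below average" and "below replacement" are different claims: a player has to be well below average before his WAR actually goes negative.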


QUOTE (witesoxfan @ Feb 20, 2014 -> 02:17 PM)
(As a foreword: these are awesome questions and should trigger a good amount of debate.)

 

There is a standard calculation for a replacement level player, but to think of it broadly, consider any player removed from the 25-man roster over the course of the year, assigned a $400,000 cost (meaning, to consider, say, Jeff Keppinger, you assume he is making $400,000, not $4,000,000). Then imagine that you DFA that player and he has to go through the waiver process. If he clears, meaning nobody wants to take him even though he is making merely the league minimum, then he is a replacement level player. Dylan Axelrod is a perfect example of a replacement level player.

 

Here is a link that goes into further detail and explains it a bit better: http://www.fangraphs.com/library/misc/war/replacement-level/

 

As indicated at the bottom of that article, a team full of replacement level players would be expected to win about 48 games. Depending on luck, or on how much better or worse the players considered replacement level actually are, that team could win as few as 42 games or as many as 70. In the past 10 years, the Mariners and Astros have each had pretty bad teams overall win anywhere between 85 and 88 games, and I'm sure there are other examples as well. However, it's best to think of a replacement player as so bad that no team wants him at any cost.

 

All of these numbers are context neutral, meaning park and league effects are factored in, though I do not believe they factor in how often you are facing someone really good (as far as I'm aware, 0 for 4 against Justin Verlander appears the same in WAR as 0 for 4 against Dylan Axelrod).

 

Negative WAR implies that you were worse than the calculated replacement level over whatever length of time. It doesn't necessarily mean you are a bad or a good player, just that you played poorly. Consider Dunn's 2011: you can reasonably say that almost any player in AAA would have played better defense than him at 1B while also hitting better. But given the choice between Andy Wilkins and Adam Dunn on your roster, you are going to take Dunn every time (except for one poster on here in particular). Adam Dunn is not a replacement level player, because his expected contributions are positive, but he did have a very, very bad year in 2011.

 

 

From that article, I agree that their replacement level is too high, because players who perform at the level they indicate are not readily available. It would be nice to see a sample of what a roster of replacement players would statistically look like (understanding the difficulty, as some players' defense carries their bat and vice versa), but a median range of what a replacement level player would be expected to produce would help in analyzing actual players against it. I understand they have offensive and defensive WAR, but it is inadequate to me if I cannot tell in what areas a player is offensively proficient or deficient.

 

I for one am not a big fan of WAR, and it really comes down to my belief that there is no such thing as a replacement level player. I would rather see a +/- against league median statistics.


QUOTE (IowaSoxFan @ Feb 20, 2014 -> 03:40 PM)
From that article, I agree that their replacement level is too high, because players who perform at the level they indicate are not readily available. It would be nice to see a sample of what a roster of replacement players would statistically look like (understanding the difficulty, as some players' defense carries their bat and vice versa), but a median range of what a replacement level player would be expected to produce would help in analyzing actual players against it. I understand they have offensive and defensive WAR, but it is inadequate to me if I cannot tell in what areas a player is offensively proficient or deficient.

 

I for one am not a big fan of WAR, and it really comes down to my belief that there is no such thing as a replacement level player. I would rather see a +/- against league median statistics.

 

I don't know enough about the mechanics of the replacement calculation to argue for or against it, but to me it doesn't matter too much. What's important is that there is an established denominator of SOME kind. The best part of WAR to me is being able to compare players against the same baseline, whatever that baseline happens to be.

Edited by Eminor3rd

QUOTE (Eminor3rd @ Feb 20, 2014 -> 03:59 PM)
I don't know enough about the mechanics of the replacement calculation to argue for or against it, but to me it doesn't matter too much. What's important is that there is an established denominator of SOME kind. The best part of WAR to me is being able to compare players against the same baseline, whatever that baseline happens to be.

 

A big issue I have is that baseball is so situational. It's very rare for two players to experience the exact same at bat. There are runners on base or not, sun, lights, wind, playing in a bandbox or a canyon, facing a pitcher with first base open, being up or down a run, seeing a pitcher for the first time or getting to see him a third time. You tripped coming out of the dugout, made a bad play in the OF, made a good play in the OF, drank too many beers last night, the pitcher drank too many beers before the game. That's what makes baseball great: every at bat has so many factors in play that it is impossible to know the outcome.

 

Stats like WAR are a guidepost, I get that, but they should not be held up as the end-all, be-all of player production. If I were evaluating a player to bring to my team, WAR is the last thing I would use as a GM. WAR is more like the preseason top 25 in college football: a measuring stick that calculates success in a vacuum but is not very useful as an evaluation/scouting tool.


QUOTE (IowaSoxFan @ Feb 21, 2014 -> 10:42 AM)
A big issue I have is that baseball is so situational. It's very rare for two players to experience the exact same at bat. There are runners on base or not, sun, lights, wind, playing in a bandbox or a canyon, facing a pitcher with first base open, being up or down a run, seeing a pitcher for the first time or getting to see him a third time. You tripped coming out of the dugout, made a bad play in the OF, made a good play in the OF, drank too many beers last night, the pitcher drank too many beers before the game. That's what makes baseball great: every at bat has so many factors in play that it is impossible to know the outcome.

 

Stats like WAR are a guidepost, I get that, but they should not be held up as the end-all, be-all of player production. If I were evaluating a player to bring to my team, WAR is the last thing I would use as a GM. WAR is more like the preseason top 25 in college football: a measuring stick that calculates success in a vacuum but is not very useful as an evaluation/scouting tool.

 

I wouldn't compare it to the preseason top 25, but more to the top 25s as the season goes along. At the end, you'll likely have a few guys at the top whom you can probably agree are the best all-around players in the league, but what makes them that way is not told by WAR itself. If I told you Boston College beat Syracuse, the #1 team in the country, it tells you nothing about Syracuse other than that the people voting in the polls decided they were the best team in the country. Where the polls fall short is that they are a matter of opinion (if a coach wanted to vote his own team #1, he could), while WAR is unbiased and looks only at the numbers. Still, to actually learn anything from WAR, you look at the offensive and defensive runs created, and from there you can break it down further.

 

As a scouting tool or predictor of success, WAR fails because that single number tells us only how much value a player has added to his team above that of a scrub. The components that make up that cumulative number tell us quite a bit about the player, though, so you can make judgments, safe assumptions, and rationalizations about how good or bad players are.


This topic is now closed to further replies.