One of my goals this year is to really enhance my accuracy in projecting players from week to week. It’s not that I think I specifically suck at it now, but rather that we’re all pretty poor at it as a whole. Pardon my language, but it’s hard as shit to figure out how many yards Le’Veon Bell will run for against a defense that’s ranked middle-of-the-pack overall but has allowed just 2.5 YPC over the past month, in a game in which the Steelers are three-point underdogs, there’s a 75 percent chance of rain, and the wind is blowing 15-plus mph.

The first thing we need to figure out is if we can even figure it out. That is, should we just play the best players every game, or can we consistently make accurate enough predictions regarding player and team matchups that we should pay a lot of attention to weekly player projections? In short, how much does talent matter, and how much does the matchup matter?

Most fantasy players place way too much stock in results over short periods of time. It’s possible that your stud running back turns in two poor games in a row without him being a horrible running back, human being, and OH MY GOD WHY DID I USE C.J. SPILLER AGAIN?!?!

So in general, people get fooled by short-term results in a major way. But that doesn’t mean you have to be, so we want to figure out 1) how much “noise” there is in weekly results and 2) whether we can project players accurately enough that we should spend a decent amount of time studying matchups (as opposed to focusing more on offensive talent, scheme, and so on).

The purpose of this article is just to take a very quick look at fantasy stats for quarterbacks and receivers when they’re facing top-five defenses. How poorly do they perform? Are the results more extreme than what we’d expect from chance alone? Depending on how things shake out, it might give us a better idea as to how much importance we should place on the opponent when deciding who to play from week to week.

 

Fantasy Stats vs. Top 5 Defenses

To determine the effect of facing a stout defense, I looked at quarterback and wide receiver performances since 2012 against defenses finishing in the top five in fantasy points. I charted the probability of the players recording one of their best four or best eight games of the season.
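
If you want to replicate this sort of study, the bookkeeping is straightforward. Here’s a minimal sketch in Python, assuming a pandas DataFrame of weekly game logs and a per-season lookup of top-five defenses; the column names and the `top5_defenses` mapping are illustrative assumptions, not the exact data I used.

```python
# A minimal sketch of the study design, assuming a pandas DataFrame of weekly
# game logs with (hypothetical) columns: player, season, week, fantasy_points,
# opponent, plus a per-season set of top-five defenses.
import pandas as pd

def rates_vs_top5(logs: pd.DataFrame, top5_defenses: dict) -> pd.Series:
    """Share of games vs. top-five defenses landing in a player's best 4/8 of that season."""
    logs = logs.copy()

    # Rank each player's games within a season by fantasy points (1 = best game).
    logs["game_rank"] = (
        logs.groupby(["player", "season"])["fantasy_points"]
            .rank(ascending=False, method="first")
    )
    logs["top4"] = logs["game_rank"] <= 4
    logs["top8"] = logs["game_rank"] <= 8

    # Keep only the games played against that season's top-five defenses.
    vs_top5 = logs[logs.apply(
        lambda row: row["opponent"] in top5_defenses[row["season"]], axis=1
    )]

    # If opponent quality were irrelevant, these would sit near 25% and 50%.
    return vs_top5[["top4", "top8"]].mean()
```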

[Chart: how often quarterbacks and wide receivers posted a top-four or top-eight game against top-five defenses]

If defenses didn’t matter at all, each of a player’s 16 games would have an equal shot at being one of his best, so we’d expect quarterbacks and wide receivers to post top-four performances against top-five defenses 25 percent of the time (4 of 16) and top-eight performances 50 percent of the time (8 of 16). That’s the random expectation.

But we don’t see that. The players underachieved wildly in almost every category. In terms of turning in an elite performance (top four on the year), No. 1 wide receivers were the worst, with just a 15 percent chance.

Meanwhile, take a look at No. 2 receivers. They actually exceeded expectations in the top-four category, the only bucket to do so. Why? My guess is that it’s correlated with the WR1 stats. Good defenses can effectively take away the offense’s best player, but no defense can consistently limit every offensive player’s production, and No. 2 receivers appear to benefit from the focus elsewhere. It’s not like they’re killing it; they produce right around where they would if their performances were evenly distributed regardless of defensive quality. But the typical No. 2 receiver is hurt far less by facing a top defense than quarterbacks or WR1s are.

Another way to look at this is to examine the really poor performances.

[Chart: how often quarterbacks and wide receivers posted a bottom-four or bottom-eight game against top-five defenses]

Again, we’d expect 25 and 50 percent marks if production weren’t at all correlated with defensive strength. But the numbers are all higher than those benchmarks, suggesting that not only are quarterbacks and receivers unlikely to produce elite numbers against top-five defenses, but they’re also a lot more likely to turn in a true stinker.

The most eye-popping number to me is that 52 percent in the quarterback category, reflecting the fact that quarterbacks have had one of their four worst performances of the season over half the time when facing a top-five defense. That’s over double what we’d expect from chance alone.

When combined with the other graph, the results are clear: quarterbacks suck much more than normal against good defenses. They seem to be the position hurt most by facing a really strong defense, which makes sense; a quarterback is responsible for the entirety of a team’s passing success, so he can’t post a good line while the passing game as a whole struggles (which an individual receiver can).

Wide receivers are worse as a whole, too, with No. 1s suffering more than No. 2s. Why do we see this effect? Sample size. Quarterbacks are the most consistent performers on a week-to-week basis, followed by running backs, with pass-catchers far behind; wide receivers and tight ends are pretty volatile relative to the other positions.

Think about how many opportunities each position receives to make plays per game: maybe 35-40 for passers, 15-25 for running backs, and 5-10 for receivers. It’s not surprising that the larger the sample of relevant plays, the more consistent the results.

Since quarterbacks see far more opportunities than No. 1 receivers, and No. 1 receivers usually see more chances than No. 2s, it follows that we’d see the worst games from quarterbacks against top defenses, followed by WR1s, and finally WR2s. In short, quarterbacks are more likely to “get what they deserve” against top defenses since their stats have more chances to regress toward the mean.
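
To make the sample-size argument concrete, here’s a toy simulation rather than real player data: give a passer, a back, and a receiver identical expected points per opportunity, vary only the number of opportunities per game, and compare how volatile their weekly totals are. The per-opportunity scoring model and opportunity counts are rough assumptions.

```python
# Toy simulation of the sample-size effect; the scoring model and the
# opportunity counts are rough assumptions, not real player data.
import numpy as np

rng = np.random.default_rng(0)
points_per_opportunity = 0.5                      # same efficiency for everyone
opportunities = {"QB": 38, "RB": 20, "WR": 8}     # rough dropbacks/touches/targets per game

for position, n_opps in opportunities.items():
    # Each opportunity yields a noisy point total; a week is the sum of its opportunities.
    weekly_points = rng.gamma(shape=1.0, scale=points_per_opportunity,
                              size=(10_000, n_opps)).sum(axis=1)
    cv = weekly_points.std() / weekly_points.mean()
    print(f"{position}: ~{weekly_points.mean():.1f} points per week, "
          f"week-to-week volatility (CV) {cv:.2f}")

# More opportunities per game -> lower relative volatility -> weekly results
# that more reliably reflect the matchup instead of random variance.
```

The receiver’s totals swing the most even though everyone is equally “good” per play, which is exactly why a single week tells you less about a pass-catcher than it does about a quarterback.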

The actionable information here? You can and should strongly consider matchups when choosing your quarterbacks and receivers. Due to sample sizes and how pass defense quality affects each position, the matchups matter more for passers. If you’re going to take a chance on a player in a poor matchup, a high-end WR2 might be the direction to go; their play is less strongly correlated with defensive strength.

 

Future Analysis

It’s important to note that I studied weekly production versus defenses as they were ranked at the end of the season. That means we have the benefit of hindsight when analyzing which defenses are truly the best, which we don’t have when the matchups actually occur.

Again, this is yet another reason why we should be placing more and more weight on in-season stats as the year progresses. In the beginning of the season, we still don’t know all that much about team strength and there’s lots of noise; we could see a really good defense perform poorly because they were missing an important nose tackle or because they simply faced a bunch of elite quarterbacks to start the year, for example.

By midseason, we’ve gathered enough useful data that matchup analysis matters more. Whereas early-season strategy centers more on identifying talented players (with matchups counting less), mid- and late-season strategy is more value-based and opponent-dependent; we know which defenses are actually good, and as the numbers suggest, those matchups can matter quite a bit.
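
One way to put that into practice, purely as an illustration and not a formula I actually use: blend a preseason expectation of defensive strength with the fantasy points a defense has actually allowed, and shift the weight toward the in-season data as the weeks pile up. The linear blend schedule and the week-10 cutoff below are assumptions for the sketch.

```python
# Illustrative only: blend a preseason prior with observed fantasy points
# allowed per game, trusting in-season data more as weeks accumulate.
# The linear schedule and the week-10 cutoff are assumptions for the sketch.
def blended_defense_rating(preseason_prior: float,
                           fp_allowed_per_game: float,
                           weeks_played: int,
                           full_weight_week: int = 10) -> float:
    """Lower rating = tougher defense (fewer fantasy points allowed)."""
    w = min(weeks_played / full_weight_week, 1.0)   # weight on in-season data
    return (1 - w) * preseason_prior + w * fp_allowed_per_game

# Week 3: still leans on the preseason expectation.
print(blended_defense_rating(20.0, 14.0, weeks_played=3))    # 18.2
# Week 10 and beyond: trusts what the defense has actually allowed.
print(blended_defense_rating(20.0, 14.0, weeks_played=10))   # 14.0
```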