Our New Polling Averages Show Biden Leads Trump By 9 Points Nationally

Today we launched our general election polling averages, nationally and for all states with a sufficient number of polls. Recent polls show former Vice President Joe Biden with a solid lead over President Trump nationally, and in most swing states. Biden currently leads Trump 50.5 percent to 41.3 percent in national polls, according to our average — a 9.2-point lead.1

Biden also leads Trump in swing states such as Michigan, Wisconsin and Arizona — although his lead in many swing states is not as wide as his margin in national polls, suggesting that the Electoral College could once again favor Trump in the event of a close election.

Here’s a table showing the averages in swing states:

Biden leads nationally and in most swing states

FiveThirtyEight polling averages as of 5:30 p.m. on June 17, 2020

State              Biden    Trump    Biden margin
Colorado           54.1%    37.1%    +17.0
Maine*             53.2     39.7     +13.5
New Mexico*        54.1     40.8     +13.3
Virginia           50.5     39.9     +10.6
Michigan           50.9     40.7     +10.2
National           50.5     41.3     +9.2
Nevada             48.2     40.1     +8.0
New Hampshire      50.0     42.3     +7.7
Florida            49.7     42.8     +6.9
Minnesota*         50.8     44.2     +6.6
Wisconsin          49.0     42.4     +6.6
Pennsylvania       49.0     43.7     +5.3
Arizona            47.7     43.6     +4.0
North Carolina     47.6     44.6     +3.0
Ohio               48.4     45.7     +2.7
Georgia            47.0     46.1     +1.0
Iowa               45.6     46.2     -0.6
Texas              46.5     47.2     -0.7

Overall — assuming that states that haven’t been polled go the same way as they did in 2016 — Biden leads in states worth 368 electoral votes, while Trump leads in states totaling 170 electoral votes.2

But a potential problem for Biden is that Trump could have an Electoral College advantage if the election tightens. Biden currently leads Trump by “only” 6.6 points in the current tipping-point state, Minnesota, but this is narrower than Biden’s 9.2-point lead in the national polls. So while a Biden landslide is possible if he wins all these swing states, so is a Trump Electoral College victory, depending on which way the race moves between now and November.

The rest of this article covers how our polling averages work in a fair amount of detail. We know that a lot of you will probably take the off-ramp here and not read through the methodology. However, we’d encourage you to at least read the next section, which distinguishes between our polling averages (what we’ve just released) and our election forecast model (which we’ll publish later).

[Related: An Updating Average Of 2020 Presidential General Election Polls]

Polling averages are a snapshot, not a forecast

The goal of our polling averages is to reflect the current state of the polling in each state, rather than to predict the eventual outcome. That is to say, our averages are a snapshot, not a forecast. Indeed, the way we calibrate various settings in the polling averages — such as how aggressive they are in responding to new data — is mostly based on how well the polling average predicts future polls,3 not how well they predict the outcome of the race.4

The polling averages will, of course, be a major ingredient in our forecast model. But there are times when they will differ from the forecast. For instance, parties typically get a boost in the polls following their national convention. However, this can be fleeting; the convention bounce usually fades over time. Our forecast will adjust for this, but the polling averages are a snapshot of the race today and will not.

In addition, our forecast model will blend the polls with other ways of projecting the outcome in each state, such as what happened in the previous election or its demographics. Our polling averages do not do this, however. They simply reflect the polls, albeit with a number of adjustments that I’ll describe later.

Which polls we include and how we weight them

One pillar of FiveThirtyEight’s philosophy is to include as many polls as possible, although we do use an algorithm that assigns a higher weight to polls with a higher pollster rating. That means we don’t exclude polls just because they’re outliers, because we think the polling firm is partisan, or for any similar reason. Instead, we have a variety of strategies to make our polling averages more robust without having to cherry-pick data.

But as polling gets more complicated, there are an increasing number of edge cases where it may be unclear what constitutes a scientific poll. So there are some situations where data is excluded:

  • We don’t use polls that are banned by FiveThirtyEight because we know or suspect that the pollster faked data.
  • We don’t use DIY polls commissioned by nonprofessional hobbyists on online platforms such as Google Surveys. (Professional or campaign polls using these platforms are fine.)
  • We don’t treat subsamples of multistate polls as individual “polls” unless certain conditions are met.5
  • We don’t use “polls” that blend or smooth their data using methods such as MRP. These can be smart techniques — but if a pollster uses them, they’re really running a model rather than a poll. We want to do the blending and smoothing ourselves rather than inputting other people’s models into our own.
  • We exclude polls that ask the voter who they support only after revealing leading information about the candidates. If, for instance, a poll says “Joe Biden loves puppies. Who do you plan to support: Biden or Trump?” we won’t include it.
  • We exclude polls that test hypothetical candidates — for instance, a poll testing a hypothetical three-way race between Trump, Biden and Utah Sen. Mitt Romney.
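To make these rules concrete, here is a minimal sketch of how they might be expressed as a simple filter. The Poll fields and the is_eligible function are our own illustrative names, not our actual data model; the point is just that each exclusion is a yes/no check.

```python
from dataclasses import dataclass

@dataclass
class Poll:
    # Hypothetical flags for illustration; not FiveThirtyEight's actual schema.
    pollster: str
    banned_pollster: bool        # we know or suspect the pollster faked data
    diy_online_poll: bool        # hobbyist DIY poll on a platform like Google Surveys
    model_output: bool           # blended/smoothed output (e.g., MRP) rather than a raw poll
    leading_question: bool       # respondents heard leading information before the horse race
    hypothetical_matchup: bool   # tests candidates who aren't actually running

def is_eligible(poll: Poll) -> bool:
    """Return True if the poll passes all of the exclusion rules listed above."""
    return not any([
        poll.banned_pollster,
        poll.diy_online_poll,
        poll.model_output,
        poll.leading_question,
        poll.hypothetical_matchup,
    ])
```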

There have also been some recent cases where media organizations that sponsor polls misrepresented what those polls actually say. We are working on policies for how to handle these.

However, we do include a campaign’s internal polls in our averages if they are released to the public. (This is a change from 2016, although something we implemented in our midterm forecast in 2018.) These internal polls are subject to a fairly harsh house effects adjustment though (see below), as historically, internal polls exaggerate their candidate’s margin by a net of around 5 percentage points in presidential races.

If you don’t see a poll listed, it may be that we simply haven’t gotten around to adding it yet, or that we are working with the pollster or the media sponsor to nail down certain information about it. We strongly encourage all press releases and stories about polls to include the following details, at a minimum: the dates the poll was conducted, the sample size, the sample frame (e.g. likely voters) and the firm responsible for conducting the poll. Please don’t hesitate to drop us a line if you think you’ve found a poll that we’re missing.

[Related: We’ve Updated Our Pollster Ratings Ahead Of The 2020 General Election]

Finally, a few notes on how our averages use multiple versions of the same poll and how they weight polls. Sometimes, the same poll will include multiple turnout models, or several versions of a question (e.g., with or without third-party candidates). When a poll has more than one turnout model, we always use the likely voter version before the registered voter version, and the registered voter version before the version conducted among all adults. And if a pollster releases multiple likely voter versions and doesn’t designate any of them as the main one, we simply average those versions together.

As for how we weight polls, a poll’s weight is based on its sample size and pollster rating. The pollster ratings, in turn, reflect a combination of the pollster’s past performance and whether it meets current industry best practices. In addition, polls receive a penalty to their weight if they are conducted among registered voters or all adults rather than likely voters.6 And if a particular polling firm conducts a large number of polls in a state within a short period of time, the weight assigned to each of its polls during this period will be discounted. Thus, a pollster cannot “flood the zone” by releasing, say, 10 polls of Arizona all at once.
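To make the weighting logic concrete, here is a minimal sketch of how sample size, pollster rating, the registered-voter/all-adult penalty and the “flood the zone” discount might combine into a single weight. The functional forms and constants below are our own illustrative assumptions, not the exact formula we use.

```python
import math

# Illustrative penalty factors; the real values are assumptions, not published constants.
POPULATION_PENALTY = {"lv": 1.0, "rv": 0.85, "a": 0.7}

def poll_weight(sample_size: int, pollster_rating: float, population: str,
                recent_polls_by_same_firm: int) -> float:
    """Combine the factors described above into a single weight (simplified sketch).

    pollster_rating is assumed to be a multiplier around 1.0 (higher = better).
    population is 'lv' (likely voters), 'rv' (registered voters) or 'a' (all adults).
    recent_polls_by_same_firm counts other polls the firm released in the same
    state over a short window, so one firm can't flood the zone.
    """
    size_factor = math.sqrt(sample_size / 600)                    # diminishing returns to sample size
    population_factor = POPULATION_PENALTY[population]            # penalize non-likely-voter polls
    flood_factor = 1 / math.sqrt(1 + recent_polls_by_same_firm)   # discount prolific firms
    return size_factor * pollster_rating * population_factor * flood_factor

# Example: a 900-person likely voter poll from a well-rated firm that has
# released two other recent polls of the same state.
print(round(poll_weight(900, 1.1, "lv", 2), 2))
```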

[Related: Our Pollster Ratings]

How we calculate our polling average

Once we’ve decided which polls to include and how to weight them, there are basically two ways you can calculate a polling average:

  1. You can take a simple average of recent polls, which is basically what RealClearPolitics does;
  2. Or you can use any of a variety of methods to calculate a trend line of the polls, as HuffPost Pollster formerly did.

FiveThirtyEight’s polling averages are basically a blend of these two techniques, which is slightly more accurate than using either method on its own. For most of the race, our average primarily relies on the averaging method, which is usually the more conservative of the two. (Although, unlike in the case of RCP, there’s not a hard cut-off for the date; rather, the weight assigned to each poll gradually ramps down to zero depending on the number of polls and how long ago it was conducted.) However, our average leans more heavily into the polynomial method of calculating a trend line in the final couple of weeks of the campaign. Thus, our polling averages can be fairly conservative for most of the race but more aggressive later on.
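Here is a minimal sketch of that blend, assuming NumPy: a recency-weighted average combined with a weighted polynomial trend line evaluated at today. The 14-day decay, the polynomial degree and the blend share are illustrative assumptions; as noted above, the actual average leans more heavily on the trend line late in the race.

```python
import numpy as np

def blended_average(days_ago: np.ndarray, margins: np.ndarray, weights: np.ndarray,
                    trend_share: float = 0.3) -> float:
    """Blend a recency-weighted average with a polynomial trend line (simplified sketch).

    days_ago: how many days before today each poll's field period ended.
    margins:  Biden-minus-Trump margin in each poll.
    weights:  the poll weights (sample size, pollster rating, etc.).
    trend_share: how much of the blend comes from the trend line; this would
    grow as Election Day approaches.
    """
    # Recency-weighted average: each poll's weight ramps down with age.
    recency = np.exp(-days_ago / 14.0)              # illustrative 14-day decay
    w = weights * recency
    simple_avg = np.sum(w * margins) / np.sum(w)

    # Weighted polynomial trend line, evaluated at today (days_ago = 0).
    coeffs = np.polyfit(-days_ago, margins, deg=2, w=w)
    trend_today = np.polyval(coeffs, 0.0)

    return (1 - trend_share) * simple_avg + trend_share * trend_today

# Example with three hypothetical polls taken 1, 5 and 20 days ago.
print(round(blended_average(np.array([1, 5, 20]),
                            np.array([9.0, 8.0, 6.0]),
                            np.array([1.0, 0.8, 0.6])), 2))
```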

[Related: The Latest Political Polls Collected By FiveThirtyEight]

We also introduced a change with our presidential primary averages this year, where our method now recognizes that certain types of major events are more likely to produce changes in the polls. That means we now treat certain events as essentially spanning multiple days on the calendar. Specifically:

  • We treat each party’s convention as being equivalent to 15 days on the campaign trail.
  • A candidate clinching his or her party’s nomination counts as 10 days.7
  • A presidential debate is equivalent to six days.8
  • And the announcement of the nominee’s VP choice is four days.

Movement in the polls following these events is likely to be real and not statistical noise, so we made these tweaks so that the polling average responds more aggressively after them. That said, as I mentioned in the case of convention bounces, the polls can sometimes revert to the mean a few weeks later, so some of these event-based gains might still be short-lived.
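As a sketch of how this might work mechanically, the average can run on an “effective days elapsed” clock in which an intervening event adds extra days, so polls taken before the event lose weight faster. The function below is our own illustration of the idea, not the actual implementation.

```python
from datetime import date

# Extra "campaign days" each event is treated as adding, per the list above.
EVENT_DAYS = {"convention": 15, "clinched_nomination": 10, "debate": 6, "vp_pick": 4}

def effective_days_elapsed(poll_date: date, today: date,
                           events: list[tuple[date, str]]) -> int:
    """Calendar days since the poll, plus extra days for major events in between."""
    extra = sum(EVENT_DAYS[kind] for day, kind in events if poll_date <= day <= today)
    return (today - poll_date).days + extra

# Example: a poll taken 10 calendar days ago, with a debate in between, is treated
# as 16 days old, so the average responds more aggressively to newer polls.
print(effective_days_elapsed(date(2020, 10, 1), date(2020, 10, 11),
                             [(date(2020, 10, 7), "debate")]))
```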

The nitty-gritty on how we adjust polls

At this point, we want to give you a deep dive on what we’re adjusting for in our polling averages. Namely, we adjust polls in three ways: There’s a likely voter adjustment, a house effects adjustment and what we call a timeline adjustment that accounts for how recent a poll is.

First, the likely voter adjustment works by taking polls of registered voters or all adults and inferring what they would say if they were conducted among likely voters instead. The reason we do this is that almost all polls conducted in the closing weeks of the campaign are among likely voters — and likely voter polls are generally more accurate. But many polls earlier in the cycle, especially before Labor Day, are conducted among registered voters. So we’re trying to distinguish actual changes in the state of the race from changes that just reflect a pollster turning on its likely voter filter.

The way the likely voter adjustment works is by starting with historical priors based on the effects that likely voter screens tend to have, but then adjusting these priors as polls are released that provide direct comparisons of likely voter and registered voter versions of the same poll.9 For instance, if the same poll has Biden ahead by 6 points among registered voters but only up by 4 points among likely voters, that helps us to calibrate the adjustment. (In other words, we love it when pollsters publish both registered voter and likely voter numbers.)
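In spirit, this is a prior that gets updated as paired likely voter/registered voter releases come in. The sketch below shows one simple way to do that shrinkage; the prior weight and the formula are our own illustrative assumptions, not the exact calculation.

```python
def likely_voter_shift(prior_shift: float, paired_diffs: list[float],
                       prior_weight: float = 5.0) -> float:
    """Estimate the margin shift from applying a likely voter screen (simplified sketch).

    prior_shift:  historical prior for the LV-minus-RV margin shift.
    paired_diffs: observed LV-minus-RV margin differences from polls that publish both.
    prior_weight: how many paired polls the prior is "worth" before the data dominates.
    """
    n = len(paired_diffs)
    if n == 0:
        return prior_shift
    observed = sum(paired_diffs) / n
    return (prior_weight * prior_shift + n * observed) / (prior_weight + n)

# Example: prior says LV screens move Biden's margin 1 point toward Trump; two paired
# polls show shifts of -2 and -1 points (negative = toward Trump).
print(round(likely_voter_shift(-1.0, [-2.0, -1.0]), 2))
```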

Republican candidates generally tend to gain more ground from likely voter screens than Democratic ones do, since Republican voters tend to be older and whiter, which are characteristics associated with higher turnout. However, challengers — regardless of party — tend to gain ground in likely voter polls relative to incumbents, probably because some low-propensity voters may choose the incumbent as a default, whereas likely voters have thought through their choice more carefully. That means the effects could be somewhat offsetting this year — Trump is a Republican (which should help him among likely voters) but he’s also an incumbent (which should hurt him).

Still, the effects from partisanship are slightly stronger than the effects from incumbency, and Trump has been doing slightly better in likely voter polls than in registered voter polls so far. Thus, likely voter screens may slightly improve Trump’s position overall — perhaps by a percentage point or so relative to Biden.10

Next, our house effects adjustment attempts to correct for polls that consistently lean toward one candidate or the other. In our presidential approval averages, for instance, the polling firm Rasmussen Reports has a strong pro-Trump house effect, as Trump tends to have a higher approval rating in those polls than in those from other pollsters. To be clear, there isn’t necessarily anything wrong with having a house effect; sometimes, an “outlier” poll turns out to be right. But adjusting for house effects makes polling averages more stable.

[Related: An Updating Calculation Of The President’s Approval Rating]

However, the mechanics of house effects can quickly become complicated, especially if you’re looking at polls across multiple states. For instance, if a certain polling firm has a 3-point pro-Biden house effect in its poll of Colorado, should we also assume it has a 3-point pro-Biden effect when it polls in, say, Pennsylvania?

In past years when calculating house effects, we applied a constant house effects adjustment to all of a firm’s polls, regardless of what state they were conducted in. But after extensively testing that assumption, we found that this strategy doesn’t actually improve the accuracy of your average very much. In fact, it can potentially even make your polling averages less accurate.

I’ll skip the gory details, but if you’re not careful, what can wind up happening is that your averages become anchored to prolific pollsters that poll across many states,11 and pollsters that focus on just one or a handful of states end up having their numbers adjusted toward these more prolific pollsters. This is a problem because a lot of the information we gain from local pollsters is lost in this process, and pollsters that survey just one state or one region often do a really good job of it.

For example, a lot of Missouri-specific pollsters (correctly, it turned out) had Republican Josh Hawley ahead of Democrat Claire McCaskill in that state’s U.S. Senate race in 2018, while other pollsters (such as Marist College) that conduct polls in many states had McCaskill leading. Our house effects adjustment ended up shifting all the local polls toward McCaskill, putting her ahead in our average when a straight average would have shown Hawley ahead … and Hawley eventually won by 6 percentage points.

Thus, we’ve changed our house effects adjustment so that it mostly reflects how a poll compares to others in the same state.12

A few technical notes about our house effects adjustment:

  • The adjustment is more aggressive for firms that have done more polling. For instance, if a firm had an apparent 5-point pro-Biden house effect in Wisconsin, but this was only based on one poll, it would be hard to say whether this reflected an actual house effect or if the firm had just happened to come up with an outlier. That’s why our adjustment is more aggressive in cases where a firm has conducted many polls.
  • Polling firms also vary in how many undecided and third-party voters they tend to include. Some pollsters, for example, often publish results such as Biden 50 percent, Trump 49 percent, with few undecideds. Our house effects adjustment also adjusts for this.13
  • For a campaign’s internal polls, our algorithm starts with the prior that they do have a fairly strong house effect favoring the party conducting the poll. This differs from nonpartisan polls, where we start with a prior that the house effect is zero. In both cases, the house effects adjustment moves away from the prior as we collect more data.
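Putting those notes together, a house effect estimate behaves like an average of a firm’s deviations from the same-state consensus, shrunk toward a prior (zero for nonpartisan pollsters, several points toward the sponsoring candidate for internal polls), with the shrinkage relaxing as the firm conducts more polls. The sketch below illustrates the idea; the prior weight and formula are our own assumptions.

```python
def house_effect(deviations: list[float], prior: float, prior_weight: float = 3.0) -> float:
    """Estimate a firm's house effect in one state (simplified sketch).

    deviations:   how far each of the firm's polls lands from the adjusted average of
                  other polls of the same state (in margin points, positive = more
                  pro-Biden than the consensus).
    prior:        0.0 for nonpartisan pollsters; a nonzero value (e.g., roughly
                  5 points toward the sponsoring candidate) for campaign internal polls.
    prior_weight: how many polls the prior is worth; with few polls the estimate
                  stays close to the prior, with many polls the data dominates.
    """
    n = len(deviations)
    if n == 0:
        return prior
    observed = sum(deviations) / n
    return (prior_weight * prior + n * observed) / (prior_weight + n)

# A firm with only one Wisconsin poll that ran 5 points more pro-Biden than the
# consensus gets a modest adjustment; a firm with many such polls gets a larger one.
print(round(house_effect([5.0], prior=0.0), 2))        # ~1.25
print(round(house_effect([5.0] * 10, prior=0.0), 2))   # ~3.85
```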

Finally, we apply a timeline adjustment based on the recency of the poll,14 which adjusts for shifts in the overall race since a poll was conducted. For instance, say that a poll of Arizona last month showed Biden up 3 points there, but there’s been a strong shift toward Trump since then in national polls and in polls of similar states such as Nevada. This adjustment will shift that older Arizona poll toward Trump.

Of course, it would be better to have a new Arizona poll instead of having to adjust the old one. But sometimes, a key swing state can go weeks with little or no polling. (Especially as shrinking media budgets force pollsters to pick and choose their battles more; there was a dearth of high-quality polls in Michigan and Wisconsin toward the end of the 2016 race, for example.) So this adjustment mostly matters when the polling in a state is “stale”; it considerably improves accuracy in these cases. But it doesn’t have much of an effect when there is a lot of recent polling in a state.

The way this adjustment works is that our program examines the trends in national polls and in polls of states that are similar based on our CANTOR scores. So, for instance, the polls of similar states such as Wisconsin and Ohio will have more influence on the adjustment of polls in Michigan than will polls of dissimilar states such as California or Mississippi. National polls also have a major influence on this timeline adjustment, simply because there are a lot of them, so they’re often the easiest way to detect a trend.
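A minimal sketch of the idea: shift the older poll’s margin by the movement observed since its field dates, where that movement is a weighted mix of the national trend and the trends in similar states (with similarity scores playing the role of CANTOR). The weights and numbers below are illustrative assumptions, not the actual implementation.

```python
def timeline_adjustment(old_margin: float, national_shift: float,
                        similar_state_shifts: dict[str, float],
                        similarity: dict[str, float],
                        national_weight: float = 1.0) -> float:
    """Shift an older poll's margin by the movement observed since it was taken
    (simplified sketch of the timeline adjustment).

    national_shift:       change in the national margin since the poll's field dates.
    similar_state_shifts: change in each comparison state's average over the same window.
    similarity:           similarity scores for those states (CANTOR-style); the scores
                          and weights here are illustrative assumptions.
    """
    num = national_weight * national_shift
    den = national_weight
    for state, shift in similar_state_shifts.items():
        num += similarity.get(state, 0.0) * shift
        den += similarity.get(state, 0.0)
    return old_margin + num / den

# A month-old Arizona poll showing Biden +3, after a 2-point national shift toward
# Trump and similar shifts in comparison states, gets pulled toward Trump.
print(round(timeline_adjustment(3.0, -2.0,
                                {"Nevada": -2.5, "North Carolina": -1.5},
                                {"Nevada": 0.8, "North Carolina": 0.6}), 2))
```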

[Related: How To Read Polls In 2020]

This adjustment also accounts for the elasticity of each state, or how responsive it is to the national environment. Some states (such as New Hampshire) tend to be “swingier” than others because they have a lot of swing voters. So if national polls move by, say, 4 percentage points toward Trump over a particular period of time, we might expect polls of New Hampshire to move by more than that (perhaps 5 points).

Other states are relatively inelastic and tend not to swing as much. Georgia, for instance, has a lot of African American voters and young urban professionals who are heavily Democratic, and a lot of older white evangelicals who are heavily Republican. So even though Georgia has become increasingly competitive as the number of young, college-educated professionals grows, there aren’t actually that many swing voters there, so its polls tend to be fairly stable.
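In other words, the expected movement in a state is roughly the national movement multiplied by that state’s elasticity score. A tiny sketch of that multiplication, using two scores from the table below (the function is our own illustration, not the exact implementation):

```python
# Elasticity scales national movement up or down for each state.
ELASTICITY = {"New Hampshire": 1.28, "Georgia": 0.84}

def expected_state_shift(national_shift: float, state: str) -> float:
    """Expected change in a state's margin given a change in the national margin."""
    return national_shift * ELASTICITY[state]

print(expected_state_shift(4.0, "New Hampshire"))  # about 5 points
print(expected_state_shift(4.0, "Georgia"))        # about 3.4 points
```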

Elasticity scores for 2020, which are based on an examination of individual-level polling data from the 2008 exit polls and from the Cooperative Congressional Election Study in 2012 and 2016, are as follows:

Every state’s elasticity score

Updated for 2020

State                      Elasticity
New Hampshire              1.28
Rhode Island               1.26
Maine 1st District         1.25
Vermont                    1.23
Maine                      1.17
Massachusetts              1.17
Hawaii                     1.15
Iowa                       1.13
North Dakota               1.11
Idaho                      1.10
West Virginia              1.10
Maine 2nd District         1.09
New Mexico                 1.09
Colorado                   1.09
Connecticut                1.09
Nevada                     1.08
Alaska                     1.07
Arizona                    1.07
Oregon                     1.07
Wisconsin                  1.06
Washington                 1.06
Nebraska 2nd District      1.06
Montana                    1.05
Kansas                     1.04
Florida                    1.04
New Jersey                 1.04
Nebraska 3rd District      1.03
South Dakota               1.03
Michigan                   1.03
Ohio                       1.02
Nebraska                   1.02
Utah                       1.02
Arkansas                   1.02
Texas                      1.02
Missouri                   1.01
Minnesota                  1.01
Indiana                    1.00
Kentucky                   1.00
Tennessee                  0.98
Illinois                   0.98
Pennsylvania               0.97
Nebraska 1st District      0.97
California                 0.96
New York                   0.96
Wyoming                    0.95
North Carolina             0.94
Louisiana                  0.93
Oklahoma                   0.93
Virginia                   0.92
Delaware                   0.90
South Carolina             0.88
Maryland                   0.87
Georgia                    0.84
Alabama                    0.81
Mississippi                0.79
District of Columbia       0.62

Note that swingier states tend to be white and relatively irreligious. Black voters are generally the most reliable Democratic voters, while white evangelical Christians are generally the most reliable Republican ones. So states such as New Hampshire that have neither many Black voters nor many evangelical Christians tend to be elastic.

That’s it for now! But please drop us a line if anything seems wrong. We do discover bugs from time to time when we’ve launched a new product, and tips from readers are invaluable in catching those.
