‘We gave four good pollsters the same raw data. They had four different results.’

Nearly every polling article mentions the “margin of error,” but what’s rarely explained is that the margin of sampling error doesn’t capture the full potential for error in surveys, Nate Cohn writes. Two pollsters looking at the same raw data can reach widely different conclusions, Cohn says, because of the decisions pollsters must make, such as which respondents count as likely voters or how to weight respondents to match a state’s demographics. To illustrate the point, The Upshot gave four pollsters the same raw data from a survey of 867 likely Florida voters; between the four pollsters’ measures and The Upshot’s own analysis, there was a net 5-point difference.
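The weighting mechanism behind that divergence can be sketched in a few lines. The example below is a minimal, hypothetical illustration, with invented numbers and group labels, not the Upshot’s actual data or any pollster’s real method: the same 100 raw interviews, weighted to two different assumed electorates, yield two different toplines.

```python
from collections import Counter

def weighted_share(respondents, targets):
    """Post-stratification weighting: each respondent is weighted by
    (target population share of their group) / (sample share of their group),
    then the weighted share supporting candidate A is returned."""
    counts = Counter(r["group"] for r in respondents)
    n = len(respondents)
    total_w = 0.0
    support_w = 0.0
    for r in respondents:
        sample_share = counts[r["group"]] / n
        w = targets[r["group"]] / sample_share
        total_w += w
        if r["vote"] == "A":
            support_w += w
    return support_w / total_w

# Invented raw sample: 60 "young" respondents (33 back candidate A)
# and 40 "old" respondents (16 back candidate A).
respondents = (
    [{"group": "young", "vote": "A"}] * 33
    + [{"group": "young", "vote": "B"}] * 27
    + [{"group": "old", "vote": "A"}] * 16
    + [{"group": "old", "vote": "B"}] * 24
)

# Two pollsters assume different turnout compositions for the electorate.
pollster_1 = weighted_share(respondents, {"young": 0.5, "old": 0.5})
pollster_2 = weighted_share(respondents, {"young": 0.4, "old": 0.6})

print(f"Pollster 1: A at {pollster_1:.1%}")  # 47.5%
print(f"Pollster 2: A at {pollster_2:.1%}")  # 46.0%
```

Identical interviews, a 1.5-point gap in the topline; with real decisions about likely voters, education weighting, and turnout models layered on top, differences of several points, like the Upshot’s net 5, are unsurprising.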