After Mark Sanford's win, embarrassed pollsters go back to the drawing board
Mark Sanford is heading off to Washington DC, while pollsters are heading back to their workshops. His victory by 9pt over Elizabeth Colbert Busch in South Carolina's first district special election was surprisingly large. The last two public polls, from Public Policy Polling (PPP) and Red Racing Horses (RRH), had Sanford winning by 1pt and the race tied, respectively. PPP published a poll just two weeks before the election that had Colbert Busch winning by 9pt.
All three of those surveys now rank among the 10 least accurate polls taken within two weeks of a special election since 2004. PPP's first poll was especially bad: it missed the margin by 18pt, which makes it the second least accurate poll taken two weeks before a special election since 2004. As my friend Mark Blumenthal points out, that first PPP survey had far too many African Americans as a percentage of the electorate. I don't doubt that some white voters, a mostly Republican demographic in South Carolina's first district, were disenchanted with Sanford over allegations that he violated the terms of his divorce, but PPP's sample simply overstated the black share of the electorate. Colbert Busch never had a lead of 9pt. One might wonder whether she ever had a lead at all.
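For readers curious how that 18pt figure is scored, here is a minimal sketch of the arithmetic: the error is simply the gap between a poll's projected margin and the actual margin. It uses only the numbers cited above, with positive values meaning a Sanford lead.

```python
# Poll error scored as the gap between a poll's projected margin and the
# actual margin. Figures are the ones cited in this piece; positive numbers
# mean a Sanford lead, negative numbers a Colbert Busch lead.

def margin_error(polled_margin: float, actual_margin: float) -> float:
    """Absolute difference, in points, between polled and actual margins."""
    return abs(polled_margin - actual_margin)

actual = 9.0  # Sanford won by 9pt

polls = {
    "PPP, two weeks out": -9.0,  # Colbert Busch +9
    "PPP, final": 1.0,           # Sanford +1
    "RRH, only poll": 0.0,       # a tie
}

for name, polled in polls.items():
    print(f"{name}: off by {margin_error(polled, actual):.0f}pt")
# PPP, two weeks out: off by 18pt
# PPP, final: off by 8pt
# RRH, only poll: off by 9pt
```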
These errors might make people think twice about trusting PPP and RRH. After all, many major news organizations won't cite PPP because it uses interactive voice response (IVR) technology instead of live interviewers, and because it doesn't call cellphones. RRH doesn't use live interviewers or call cells either, and it certainly doesn't have a long track record; it's apparently run by people with no real background in polling. It wasn't a surprise, therefore, that the RRH poll had women making up too great a share of the electorate: 60%, versus the roughly 55% it should have been.
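To see why a lopsided sample matters, here is a rough sketch of how the gender mix alone can move a topline. The support-by-gender splits below are hypothetical, invented purely for illustration; only the 60% and roughly 55% female shares come from the discussion above.

```python
# A rough sketch of how the gender mix of a sample shifts the topline margin.
# The support-by-gender splits are hypothetical placeholders; only the female
# shares of the electorate (60% in the RRH sample vs ~55% in reality) come
# from the article.

def topline_margin(female_share: float,
                   sanford_women: float, sanford_men: float,
                   busch_women: float, busch_men: float) -> float:
    """Sanford-minus-Colbert-Busch margin, in points, for a given gender mix."""
    male_share = 1.0 - female_share
    sanford = female_share * sanford_women + male_share * sanford_men
    busch = female_share * busch_women + male_share * busch_men
    return (sanford - busch) * 100

# Hypothetical splits: women break for Colbert Busch, men for Sanford.
splits = dict(sanford_women=0.42, sanford_men=0.58,
              busch_women=0.54, busch_men=0.38)

print(f"At 60% women: Sanford {topline_margin(0.60, **splits):+.1f}pt")
print(f"At 55% women: Sanford {topline_margin(0.55, **splits):+.1f}pt")
# At 60% women: Sanford +0.8pt
# At 55% women: Sanford +2.4pt
```

With those made-up splits, shaving the female share from 60% to 55% moves the margin by a point or two toward Sanford, which is the direction of the error being described.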
The truth, however, is that PPP's and RRH's final polls seem to have been more accurate than the private (or internal) polls, the surveys produced by the parties and the candidates themselves. Most of those, though not all, use live interviewers and call mobile phones, and unreleased internal polls are often more accurate than your average public poll. Yet most of the private polls for this race actually showed Colbert Busch holding a small lead.
All of this is to say that all the polling stunk it up in South Carolina's first. Republican turnout wasn't depressed as most thought it would be, and Republican voters did ultimately pull the lever for Sanford. Most of the undecided voters were Republicans, and there's a reason PPP's later surveys started to show more white voters turning out than its earlier ones had indicated.
That's why I will continue to pay attention to PPP and RRH in the future. Yes, PPP having Colbert Busch up nine was an embarrassment, but no one did better than PPP's final poll or RRH's only one. The fact that even the private pollsters fumbled so badly suggests that better techniques wouldn't have produced a more accurate result. Polling special elections just isn't easy, as there really isn't a baseline for understanding who will turn out to vote and who will stay home.
Besides, the polling was useful, though imperfect. We knew that Sanford wouldn't come close to replicating Mitt Romney's 18pt win in the district, for instance. Even if the overall result was off, we learned something at the county level, where the differences in support for each candidate were greater than expected. Thanks to RRH's survey, we had a better idea of how the counties would vote relative to each other than we would have had from the old method of just applying a uniform swing to the 2012 results. That doesn't mean RRH is a great pollster, or even a particularly competent one, but it does suggest that almost any poll data can often be better than just going off the "fundamentals".
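For the curious, here is a minimal sketch of what that uniform-swing baseline looks like in practice. The county names and 2012 county margins are hypothetical placeholders; only the district-wide figures (Romney +18 in 2012, a roughly even race in the final public polls) come from the piece.

```python
# The "uniform swing" baseline: take each county's 2012 margin and shift it
# by the same district-wide amount implied by the polling. County names and
# 2012 county margins are hypothetical placeholders; the district-wide
# numbers (Romney +18 in 2012, a roughly even race in the final public
# polls) are the ones cited in the article.

romney_2012_district = 18.0   # Romney's district-wide margin in 2012, in points
polled_district = 0.0         # final public polls showed roughly a tie

swing = polled_district - romney_2012_district   # -18pt, applied everywhere

counties_2012 = {             # hypothetical county-level 2012 margins
    "County A": 30.0,
    "County B": 12.0,
    "County C": -5.0,
}

for county, margin_2012 in counties_2012.items():
    projected = margin_2012 + swing
    print(f"{county}: 2012 margin {margin_2012:+.0f}pt -> "
          f"uniform-swing projection {projected:+.0f}pt")
```

A county-level survey like RRH's replaces those identical shifts with direct estimates, which is how we got a better read on counties that were set to diverge from one another more than a uniform swing would suggest.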
It's no surprise that you're going to continue to see websites like HuffPollster and Real Clear Politics report on surveys from pollsters like PPP and RRH, which don't meet the highest standards in the world. Polling data, even just okay data, can tell us a lot. In this case, the "flawed" public data was as good as the private data, and it was better than not looking at any polling at all.