CNN’s exit polls say that independents were the deciding factor in the two big governors’ races, in New Jersey and Virginia. From the network’s website: In Virginia, where 30 percent of voters identify themselves as independent, 65 percent cast their ballots for CNN’s projected winner, Republican Bob McDonnell. That’s…
This is my response to the post, below, from the government professors who operate the UT Poll. I have no desire to get into a verbal wrestling match with Professors Henson and Shaw. I have spoken to both of them, in person and by telephone, and I respect their work and what they are trying to accomplish. They are entitled to run a poll the way they want to run it, and I am entitled to give my opinion of its worth.

1. In my opinion, Internet polling is not a proven methodology. In particular, Zogby's Internet polls are useless. Too many things about Internet polling are eccentric. Would you put more faith in a poll of registered voters if a person's status were determined from an Internet interview or from a voter registration list? Internet polling does not use interviews; it locates participants.

2. I did not write about the part of the poll that involved the U.S. Senate race. The poll presented participants with brief biographies of selected candidates. How in the world could the pollsters control for bias? If this is not a push poll, it is certainly a cousin.
In Paul Burka's latest post, he questions the methodology behind the poll conducted by the Department of Government and the Texas Politics Project at UT. Here is their response, in full (courtesy of professors Jim Henson and Daron Shaw).

In the Friday afternoon Texas Monthly podcast, in a post on his blog the following day, and in the comment fields following that entry, Paul Burka made a series of inaccurate characterizations of the poll released by the Department of Government and the Texas Politics Project last week. Consequently, we feel compelled to respond. In so doing, we hope to give Mr. Burka, readers of the blog, and the broader public a clearer idea of how the poll works. Mr. Burka's skepticism concerning some of our results seems based on a combination of his misunderstanding of where our sample comes from and of how we use the Internet to administer the survey.

Let us begin by explaining our decision to conduct an online survey. Put another way, why didn't we just do another phone poll? In our view, the issues preventing effective online polling are receding, while those plaguing traditional phone polling are becoming increasingly troublesome. In particular, phone polls have had lower response rates in recent years, which exacerbates widely recognized response biases. Weighting the data is the typical remedy, but how reliable are estimates when you have to weight low-incidence populations (for example, young African American males) by a factor of 8 or 12 or even 16? Perhaps more problematic is the spread of cell phone use and the decline of landlines. Finally, talking to people over the phone also places constraints on the sorts of question frames and response options you can use; these problems are reduced or removed on the web.
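The professors' point about large weights can be made concrete with Kish's approximate design effect, a standard survey-research measure of how much weighting inflates variance and shrinks the effective sample size. The numbers below are purely illustrative, not drawn from the UT poll:

```python
# Illustrative sketch of Kish's approximate design effect (hypothetical
# numbers, not from the UT poll). The design effect measures how much
# unequal post-stratification weights shrink a survey's effective
# sample size: deff = n * sum(w^2) / (sum(w))^2.

def kish_design_effect(weights):
    """Approximate design effect due to unequal weighting (Kish)."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

# Suppose a poll of 800 respondents includes a low-incidence group of
# only 10 people who must each be weighted up by a factor of 12 to
# match their share of the population; the other 790 keep weight 1.
weights = [1.0] * 790 + [12.0] * 10

deff = kish_design_effect(weights)          # about 2.15
effective_n = len(weights) / deff           # about 371 of 800 respondents
print(f"design effect: {deff:.2f}, effective n: {effective_n:.0f}")
```

In this hypothetical, a handful of twelvefold weights more than doubles the design effect, cutting the effective sample from 800 to roughly 371 and widening the margin of error accordingly, which is the reliability concern the professors raise.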
We asked journalists and other notables to give us their reactions to the long campaign and the election of Barack Obama.