BigBallinStalin wrote:And since it doesn't include those target markets, then the only people polled are those who tend to use land lines and are probably around their house enough to receive the call (i.e. old people who have nothing better to do, other than sitting around the house). So, the results are very limited in scope.
Therefore, the "new national poll" is mostly rubbish and is misleading because the scope of their statistics is confined to a small segment of the national population--and not the nation itself.
That's why I say that the statistics are dishonest.
I follow. Which is why I said that, by that logic, all statistics are dishonest, because no poll will ever cover people with landlines, and cell phones, and internet only, and the homeless without any phones, etc. etc. etc.
Wikipedia refers to this as Coverage Bias:
http://en.wikipedia.org/wiki/Opinion_poll

I'm not going to argue about the poll being invalid, because then every poll is invalid.
Coverage bias
Another source of error is the use of samples that are not representative of the population as a consequence of the methodology used, as was the experience of the Literary Digest in 1936. For example, telephone sampling has a built-in error because in many times and places, those with telephones have generally been richer than those without.
In some places many people have only mobile telephones. Because pollsters cannot call mobile phones (it is unlawful in the United States to make unsolicited calls to phones where the phone's owner may be charged simply for taking a call), these individuals will never be included in the polling sample. If the subset of the population without cell phones differs markedly from the rest of the population, these differences can skew the results of the poll. Polling organizations have developed many weighting techniques to help overcome these deficiencies, to varying degrees of success. Studies of mobile phone users by the Pew Research Center in the US concluded that "cell-only respondents are different from landline respondents in important ways, (but) they were neither numerous enough nor different enough on the questions we examined to produce a significant change in overall general population survey estimates when included with the landline samples and weighted according to US Census parameters on basic demographic characteristics."[12]
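The "weighting techniques" mentioned above usually mean something like post-stratification: scaling each respondent group so its weight in the sample matches its share in the census. A minimal sketch, with entirely made-up numbers for a hypothetical landline-only sample:

```python
# Post-stratification weighting sketch; every figure here is invented
# for illustration, not taken from any real poll.

# Hypothetical landline sample: respondent counts by age group (n = 500).
sample = {"18-29": 50, "30-64": 250, "65+": 200}
# Hypothetical census population shares for the same groups.
census_share = {"18-29": 0.22, "30-64": 0.55, "65+": 0.23}

n = sum(sample.values())
# Weight = population share / sample share; under-represented groups
# (here the young, who are more often cell-only) get weights > 1.
weights = {g: census_share[g] / (sample[g] / n) for g in sample}

# Hypothetical per-group support rates for some candidate.
support = {"18-29": 0.60, "30-64": 0.50, "65+": 0.40}
raw = sum(sample[g] * support[g] for g in sample) / n
weighted = sum(sample[g] * weights[g] * support[g] for g in sample) / n
print(round(raw, 3), round(weighted, 3))  # weighted estimate shifts toward the young
```

The weighted estimate moves toward the under-sampled group's opinion, which is exactly the correction (with "varying degrees of success") that the Pew study discusses.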
This issue was first identified in 2004,[13] but came to prominence only during the 2008 US presidential election.[14] In previous elections, the proportion of the general population using cell phones was small, but as this proportion has increased, the worry is that polling only landlines is no longer representative of the general population. In 2003, 2.9% of households were wireless (cellphone-only), compared to 12.8% in 2006.[15] This results in "coverage error". Many polling organisations select their sample by dialling random telephone numbers; however, there is a clear tendency for polls which included mobile phones in their sample to show a much larger lead for Obama than polls that did not.[16][17]
The potential sources of bias are:[18]
Some households use cellphones only and have no landline. This group tends to include minorities and younger voters, and occurs more frequently in metropolitan areas. Men are more likely to be cellphone-only than women.
Some people may not be contactable by landline from Monday to Friday and may be contactable only by cellphone.
Some people use their landlines only to access the Internet, and answer calls only to their cellphones.
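The common thread in the biases listed above is that the excluded cell-only group differs systematically from landline households. A toy calculation (all numbers invented) of the resulting coverage error:

```python
# Toy illustration of coverage error; the shares and support rates
# below are hypothetical, not real polling data.
cell_only_share = 0.20          # assumed fraction of cellphone-only households
support_cell_only = 0.62        # hypothetical candidate support in that group
support_landline = 0.48         # hypothetical support among landline households

# The true population value mixes both groups.
true_support = (cell_only_share * support_cell_only
                + (1 - cell_only_share) * support_landline)

# A landline-only poll can, at best, recover the landline value, so its
# coverage error is the gap between that and the true value.
coverage_error = support_landline - true_support
print(round(true_support, 3), round(coverage_error, 3))
```

The larger the cell-only share, or the bigger the opinion gap between the two groups, the worse a landline-only poll understates (or overstates) the true figure.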
Some polling companies have attempted to get around that problem by including a "cellphone supplement". There are a number of problems with including cellphones in a telephone poll:
It is difficult to get co-operation from cellphone users, because in many parts of the US, users are charged for both outgoing and incoming calls. That means that pollsters have had to offer financial compensation to gain co-operation.
US federal law prohibits the use of automated dialling devices to call cellphones (Telephone Consumer Protection Act of 1991). Numbers therefore have to be dialled by hand, which is more time-consuming and expensive for pollsters.
An oft-quoted example of opinion polls succumbing to errors is the UK General Election of 1992. Despite the polling organizations using different methodologies, virtually all the polls in the lead-up to the vote, and to a lesser extent the exit polls taken on voting day, showed a lead for the opposition Labour party, but the actual vote gave a clear victory to the ruling Conservative party.
In their deliberations after this embarrassment, the pollsters advanced several ideas to account for their errors, including:
Late swing
Voters who changed their minds shortly before voting tended to favour the Conservatives, so the error was not as great as it first appeared.
Nonresponse bias
Conservative voters were less likely to participate in surveys than in the past and were thus under-represented.
The Shy Tory Factor
The Conservatives had suffered a sustained period of unpopularity as a result of economic difficulties and a series of minor scandals, leading to a spiral of silence in which some Conservative supporters were reluctant to disclose their sincere intentions to pollsters.
The relative importance of these factors was, and remains, a matter of controversy, but since then the polling organizations have adjusted their methodologies and have achieved more accurate results in subsequent elections.