Survey Best Practice - Why Bigger Is Not Always Better
Surveys have often suffered from a focus on “bigger is better”. Today, internet-based surveys are promoted as “fast, low cost and informative”, and their low cost has often led to large sample sizes that give an impression of accurate results.
The fallacy that large sample sizes guarantee accuracy was dramatically exposed in the polls for the 1936 US presidential election. The Literary Digest, a well-respected magazine with a history of accurately predicting the winners of presidential elections, conducted its poll by sending out 10 million postcards asking people how they would vote. Approximately 2.3 million were returned. The overwhelming conclusion from this poll was that Alfred Landon (57%) would defeat Franklin Roosevelt (43%). History tells us otherwise: Roosevelt won convincingly over Landon. What went wrong?
The problem was a failure to follow the statistical principle of representativity. The Literary Digest survey was precise, in statistical terms, because of its enormous sample size: had the survey been conducted again, it would have achieved a very similar, but equally incorrect, result. But it failed to ensure the sample was representative of the electorate. In the same 1936 polls, a fledgling company, The Gallup Organisation, correctly predicted the outcome of the election with a sample of just 50,000. The difference lay in the selection of the sample. While the Literary Digest relied on mail-in ballots, Gallup sent pollsters to talk to people in person, ensuring a much more representative sample of the population.
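The distinction between precision and representativity can be illustrated with a small simulation. This is a minimal sketch using entirely hypothetical numbers (not the actual 1936 figures): a large self-selected sample is highly repeatable yet badly biased, while a far smaller random sample lands near the true value.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical electorate of 100,000 voters, 60% supporting candidate A.
# (Illustrative numbers only, not the actual 1936 figures.)
population = ["A"] * 60_000 + ["B"] * 40_000

def biased_sample(pop, n):
    """Mimic a self-selected mail-in poll: B supporters are assumed to be
    three times as likely to respond as A supporters."""
    sample = []
    while len(sample) < n:
        voter = random.choice(pop)
        if voter == "B" or random.random() < 1 / 3:
            sample.append(voter)
    return sample

def share_for_a(sample):
    """Proportion of the sample supporting candidate A."""
    return sum(v == "A" for v in sample) / len(sample)

big_biased = biased_sample(population, 20_000)   # large but unrepresentative
small_random = random.sample(population, 1_000)  # small but representative

print("True support for A:   60.0%")
print(f"Large biased sample:  {share_for_a(big_biased):.1%}")
print(f"Small random sample:  {share_for_a(small_random):.1%}")
```

The biased sample badly understates support for A no matter how large it grows, while the modest random sample stays close to the true 60%.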
This brings us to what best survey practice requires: many factors in the design and implementation need to be balanced, and choices made. Large sample sizes will give greater repeatability, but not necessarily greater accuracy. Practical survey design invariably involves trade-offs between size and budget, accuracy and repeatability, timing and quality.
Just as important, survey best practice needs to adapt constantly to changing circumstances within the population to ensure representativity is maintained. Gallup accurately predicted the next two elections; however, the 1948 election threatened to undo everything. Three polling organisations – Gallup, Roper and Crossley – incorrectly predicted a win for the Republican, Thomas E. Dewey. The survey community conducted a post mortem, with a committee of statisticians, political scientists, sociologists and others, leading to major changes in how polls were conducted. Two issues identified were an over-reliance on quotas for particular demographics rather than true random samples, and a failure to consider whether individual respondents would actually vote.
Today, the challenge of maintaining best practice is still with us. In the most recent US presidential elections, polling firms that used traditional telephone polls with landline-only samples had results skewed towards the Republican Party, whereas pollsters who widened their approach to include mobile phone numbers and the internet were more accurate in projecting the outcome. Despite the considerable issues associated with integrating mobile phone and internet surveys with traditional landline surveys, they can be extremely valuable, particularly in improving the representativity of the sample. This in turn can dramatically improve the overall value of the survey.
Data Analysis Australia focuses on best practice and providing the right advice to our clients. This means spending the time to make the right decisions on sampling, questionnaire design, implementation and analysis. Good practice may result in higher costs per response, but the overall cost of the survey may actually be lower: clever statistical sampling can negate the need for large sample sizes, while good practice in questionnaire design and analysis can significantly enhance the value of the results.
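The diminishing returns from ever-larger samples can be seen in a back-of-the-envelope calculation using the standard 95% margin-of-error formula for a proportion under simple random sampling (the sample sizes below are illustrative, not from any particular survey):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case p = 0.5: halving the margin requires quadrupling the sample.
for n in (500, 1_000, 4_000, 10_000, 100_000):
    print(f"n = {n:>7,}: +/- {margin_of_error(0.5, n):.1%}")
```

Going from 1,000 to 100,000 responses shrinks the margin only from roughly ±3.1% to ±0.3%, and does nothing at all to correct a biased sampling frame, which is why representativity rather than raw size dominates accuracy and cost-effectiveness.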
Data Analysis Australia’s reputation in this area is reflected in its commissioning by the Australian Market and Social Research Society to develop a ‘Best Practice Guide’ on sampling and weighting for its members, a Guide that has just been released.
December 2012