A Polling Primer

Why Take a Poll?

Public opinion polls provide guidance to businesses, interest groups and candidates as they plan and conduct campaigns to sway public attitudes toward their particular point of view. Corporations are often interested in the standing of an issue or idea, or in what the public thinks about a product, a service or the corporate image itself. Advertising agencies often use polls to monitor the effectiveness of ad campaigns and advertising strategies. And as we know, polls tell political candidates whether their messages are resonating with the public, whether an opponent's attacks have had a damaging effect, or whether there is an undercurrent of concern or interest in a particular issue that could change minds or motivate voters. But contrary to common belief, few politicians rely on polls to 'tell them what to think'.

It's Not a Crystal Ball

It is a common misconception that a poll can predict the outcome of an election. Nothing could be further from the truth. Any poll, no matter how sophisticated, is merely a snapshot in time. Conclusions that are accurate within a very small margin of error today can (and do) change substantially within days, and certainly within weeks. One poll, therefore, provides information that is most relevant at the time it is taken; as time passes, that information becomes less and less useful. A sensible strategy is to conduct a research program consisting of several surveys designed to establish a baseline and then to track results as the campaign progresses. It is the tracking that provides a 'roadmap in time' showing campaign trendlines. And it is the trendline derived from tracking, rather than the results of any single poll, that allows a public opinion analyst to draw conclusions about the probable outcome on election day.

The graph above shows the tracking chart for a recent Maine referendum question. Tracking began on October 22, 1999 and ended with the final pre-election sample taken on November 1, 1999. A single poll taken as late as the fourth week of October would have projected an incorrect outcome and would have entirely missed the crossover, which occurred around October 29-30.

Benchmark Polls

We generally conduct three distinct types of surveys. The first, sometimes referred to as a 'benchmark' survey, usually consists of 35 to 55 questions and at least 400 completed interviews. Its purpose is to identify and probe prevailing attitudes toward a candidate or referendum question, to identify influence vectors and authority figures that could sway the outcome, and to cross-reference the entire survey against a standard set of demographic categories in order to determine how various population segments are responding. Benchmark polls are essential to any campaign strategist because they provide the standard against which progress is measured as the campaign proceeds.

Tracking Polls

The second type, known as a 'tracking' poll, usually consists of 5 to 15 questions. Rollover interviews are conducted at a rate of 50 to 150 calls nightly throughout a specified period (for example, the last four weeks of a campaign). After an established base number of calls has been accumulated (typically 400), the oldest batch of calls is dropped as the newest batch is added, always maintaining the base number. The results are then analyzed and graphed each evening, yielding a daily tracking report. This technique is extremely helpful for gauging the impact of ads, issues, breaking news, attacks or debates. See the example charted above.
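
As a rough illustration of this rollover bookkeeping (a sketch only, with simulated answers; not SMA's actual software), the rolling sample can be maintained in a few lines of Python:

    from collections import deque
    import random

    BASE = 400   # established base number of interviews to maintain

    def add_nightly_batch(window, batch):
        # Add tonight's calls, then drop the oldest nights for as long as
        # the rolling sample stays at or above the base number.
        window.append(batch)
        while sum(len(b) for b in window) - len(window[0]) >= BASE:
            window.popleft()   # the oldest night falls out of the sample
        return [answer for b in window for answer in b]

    window = deque()             # nightly batches of responses, oldest first
    for night in range(28):      # e.g. the last four weeks of a campaign
        # Tonight's 50 to 150 interviews; True means 'supports the question'.
        batch = [random.random() < 0.5 for _ in range(random.randint(50, 150))]
        sample = add_nightly_batch(window, batch)
        print(f"night {night + 1:2d}: n = {len(sample)}, "
              f"support = {100 * sum(sample) / len(sample):.1f}%")

Graphing the nightly 'support' figure over the 28 nights is what produces a tracking chart like the one shown above.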

'Snapshot' Polls

The third type, sometimes called a 'snapshot' poll, typically consists of about 200 calls conducted over a brief period (two days at most) and is usually limited to 4 or 5 questions. This is literally a snapshot in time, and it is most useful for taking a closer look at a specific question or demographic breakout, usually as a probe of an anomalous tracking result.

Accuracy and Margin of Error

An accurate sample is essential to conducting a meaningful survey, and it must be designed to capture a reliable cross-section of the population to be polled. This requires that calls be made in a truly random fashion so that no demographic group is systematically over-sampled or under-sampled. Knowing how to construct such a sample is a skill fundamental to the success and credibility of any polling firm.

Margin of error and accuracy are widely misunderstood by people who are unfamiliar with statistical techniques. Margin of error is a function of the number of interviews conducted, but is surprisingly independent of the number of people in the entire survey area. That is why 400 calls can be an accurate mirror of an entire state, despite protestations to the contrary from a doubting public. Margin of error is calculated using a statistical model known as the 'binomial distribution'. This is a measure of the probability that the frequency of a particular answer in any sufficiently large group of responses will cluster around the 'correct' frequency of that response in the entire population. The binomial distribution is a close cousin of the 'normal distribution', which you may have heard referred to as the 'bell-shaped curve'.

It is also necessary that the question or candidate being tested have a significant level of support; it will be readily understood, for example, that there can be little precision in analyzing 5 responses out of a total of 400 calls. We therefore usually calculate margins of error as if a given candidate or question had about 50% support. This is also the conservative choice: a result near 50% yields the largest possible margin of error, so margins for less evenly divided results can only be smaller.
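
For reference, the figures in the table below follow the standard normal-approximation formula for the margin of error of a proportion:

    MOE = z * sqrt( p * (1 - p) / n )

where n is the number of completed interviews, p is the assumed level of support (0.5 here) and z is the score corresponding to the desired confidence level (1.96 at the 95% level).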

Exact calculations are complicated but are easily done by computer. The simple table below shows the margin of error to be expected in a theoretical survey, at the 95% confidence level, wherein the tested question (or candidate) was selected by the respondent in nearly 50% of the interviews:

    Calls     Margin of error (at 95% confidence)
     100      ±9.80%
     150      ±8.00%
     200      ±6.93%
     250      ±6.20%
     300      ±5.66%
     350      ±5.24%
     400      ±4.90%
     450      ±4.62%
     500      ±4.38%
     550      ±4.18%
     600      ±4.00%
     700      ±3.70%
     800      ±3.46%
     900      ±3.27%
    1000      ±3.10%
        
   
This table should be interpreted as follows, using a poll consisting of 400 calls statewide as an illustrative example:

"If a survey consisting of 400 interviews were conducted 100 different times statewide, then in at least 95 of the 100 surveys the results would not vary from actual public opinion in the state (that is, from the results one would obtain if every person in the entire state were interviewed) by more than 4.90%."
   
How It's Done
   
Although there are other techniques in use, Scientific Marketing & Analysis relies substantially on random telephone polling. SMA has determined that this method is cost-effective and easily monitored, for conformance with the statistical demographic model as well as for quality control. Telephone numbers are chosen at random from the area to be surveyed in such a way as to guarantee a statistically valid cross-section of the region. Since SMA usually asks a battery of demographic questions, the calling results can be compared with known demographic patterns, ensuring that key groups are neither over-sampled nor under-sampled. During the sampling process, SMA monitors such factors as the mix of registered voters and non-voters; the population balance between geographical areas; and the share of persons with unlisted telephone numbers. Scientific Marketing also designs the calling script to be understandable and to avoid 'interview fatigue', which would degrade the results.
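
As a rough illustration of the balance monitoring described above (a minimal sketch with invented population shares, not SMA's actual procedure), one might compare each demographic group's share of the sample against its known share of the population:

    # Hypothetical population shares; real figures would come from census
    # or voter-registration data.
    KNOWN_SHARES = {"18-34": 0.28, "35-54": 0.37, "55+": 0.35}

    def check_balance(sample_counts, known_shares, tolerance=0.05):
        # Flag any group whose share of the sample strays more than
        # `tolerance` from its known share of the population.
        total = sum(sample_counts.values())
        for group, expected in known_shares.items():
            observed = sample_counts.get(group, 0) / total
            if abs(observed - expected) > tolerance:
                label = "over-sampled" if observed > expected else "under-sampled"
                print(f"{group}: {observed:.1%} of sample vs "
                      f"{expected:.1%} of population ({label})")

    # Counts from a hypothetical 400-call sample: the youngest group
    # would be flagged as under-sampled here.
    check_balance({"18-34": 90, "35-54": 160, "55+": 150}, KNOWN_SHARES)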

Callers record the results by computer, and the data are then transmitted to SMA computers for analysis. SMA first subjects the raw numbers to a series of analytical tests to filter out data entry and similar errors. Once this has been completed, SMA uses its suite of proprietary statistical tools to produce a printout of aggregate results for each question and a complete set of data cross-tabulations. From this data, a report is prepared, complete with data tables and graphs as appropriate, providing the client with a detailed analysis of the polling inquiry.
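
By way of illustration (a minimal sketch with made-up responses, not SMA's proprietary tools), a cross-tabulation simply breaks one question's results out by a demographic category:

    import pandas as pd

    # A handful of made-up interview records.
    responses = pd.DataFrame({
        "age_group": ["18-34", "35-54", "55+", "18-34", "55+", "35-54"],
        "answer":    ["Yes",   "No",    "Yes", "No",    "Yes", "Yes"],
    })

    # Aggregate result for the question...
    print(responses["answer"].value_counts(normalize=True))

    # ...and the cross-tabulation: how each age group answered.
    print(pd.crosstab(responses["age_group"], responses["answer"],
                      normalize="index"))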