Understanding Political Polls: A Key Citizenship Skill for the 21st Century

 

Kevin Pobst

Dick Morris, President Clinton’s now infamous pollster, recently turned a spotlight on the role of polling in politics when he was widely quoted as saying that “we used polling not to determine which positions he [Clinton] would take, but to figure out which of the positions he had already taken were the most popular.”1 To critics of the role of polling in politics, such candor confirmed their worst fear—that polls are being used to determine a leader’s agenda. But to others, Morris’s admission was heartening; although he was admitting that Clinton polled incessantly, we were assured that polls were not determining the president’s positions on the issues, but just helping him to prioritize them. All in all, Morris’s comment highlighted how important polling has become in American politics.

This article takes a look at the nature and effects of modern political polling and provides lessons aimed to help students understand and develop skills related to polling as preparation for their roles as citizens in a democracy.

The Pollsters

Who Are These Guys?

Dozens of campaign consultancy firms provide polling services to candidates. Campaigns and Elections magazine’s online edition lists the firms—pollsters as well as specialists in media, direct mail, campaign management, telemarketing, and opposition research—hired by Senate and House candidates in 1998.2 Scanning the list reveals a dizzying catalog of pollsters for sale. The pollsters used by political candidates, however, are not the household names of polling. Louis Harris and George Gallup, names familiar to many Americans, head independent survey organizations that compile data mainly for business and media clients. In this realm of “public polling,” Harris and Gallup are joined by major newspapers and networks—for example, The New York Times/CBS, Time/CNN, ABC News/Washington Post—that constantly conduct polls so they can report the “news” revealed in responses to questions about the very latest issues.

Some people also associate polling with such high-profile academic and non-profit research centers as the Pew Research Center for the People and the Press, the National Opinion Research Center (NORC) of the University of Chicago, or the Roper Center for Public Opinion Research at the University of Connecticut. These organizations gather data for policymakers and maintain high standards for scientific research on public opinion. With some disdain for the crowded field of partisan political consultants, NORC says that it conducts “social survey research, not polls.”3

But political candidates tend not to rely on the publicly-reported poll results we read in our daily papers. Rather, the pollsters with the most impact on American politics are the myriad for-hire consultants and firms who sell their services to individual candidates. Many of these firms are partisan. In order to build trust and credibility, they poll only for candidates from one party. The Tarrance Group polls only for Republicans, whereas Celinda Lake polls exclusively for Democrats. Most of the major firms in the campaign polling business are as invisible to the public—even those who follow politics closely—as is the caterer at a candidate’s victory party.4

 

How Much Do They Cost?

Polls vary a great deal in cost. A small statewide poll can cost a candidate as little as $3,000, while an extensive and detailed national poll may cost as much as $250,000. A district-wide benchmark poll (see “The Benchmark Poll” below) costs a congressional candidate anywhere from $12,000 to $25,000 depending on the amount of detail, the size of the sample, and the reputation of the polling organization. A congressional campaign may spend as little as $15,000 or as much as $100,000 on a series of polls.5

Some candidates will try to save money by conducting their own polls, but these in-house polls lack the credibility and dependability of professional surveying. Polls conducted inside a campaign may be designed to plant stories or unfavorable ideas about the candidate’s opponent in the press. The results of such polls may look “cooked,” and in fact, some are.6

Though the amount a candidate spends on polling varies a great deal depending on the nature of the contest, a typical serious contender for a House seat will spend 5-10% of the campaign budget on polling.7 This expense is dwarfed by the amount spent on media and direct mail. However, media and direct mail spending are made much more efficient by the use of poll data. Polling, after all, is only useful as part of a campaign’s overall “research” strategy.

 

What Are They Looking For?

Polling is part of the research of any effective campaign. Research starts with knowing who the voters are and what they care about. Firms around the country specialize in selling files that identify who votes and who doesn’t. Candidates care a lot more about what likely voters think than what the public in general thinks (see “Likely Voters” below).

A campaign can identify the voters who went to the polls in all, some, or none of the recent primaries. From this data, it can compile lists of likely, probable, and less likely but possible voters. Once a list of likely voters is in hand, the campaign uses it to conduct detailed benchmark polls and other surveys. Moreover, by cross-referencing the voter lists with membership lists of different national and local organizations, a campaign can also identify issue inclinations on the part of voters.

How Do They Find It?

Political pollsters rely completely on the telephone. Telephone contacts can be made quickly and with fewer biasing factors than is the case with face-to-face interviewing. The secret to the value of any survey lies in the quality of the “sample.” Many media and academic polls use “random digit dialing” where the numbers to be called are generated by a computer. Some surveyors have their telephone banks dial the numbers created through random selection without excluding such inappropriate numbers as car dealerships, laundromats, or phone booths. For political polling, a higher quality sample excludes numbers known to be businesses or offices.
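
To make the mechanics concrete, here is a minimal sketch, in Python, of how a random digit dialing list might be generated. The exchange prefixes and the list of known business numbers are hypothetical placeholders, not any firm’s actual data.

    import random

    # Hypothetical telephone exchanges serving the area being polled.
    EXCHANGES = ["312-555", "312-556", "708-555"]

    # Hypothetical numbers already identified as businesses or offices,
    # which a higher quality political sample would exclude.
    KNOWN_BUSINESS_NUMBERS = {"312-555-0100", "708-555-0199"}

    def random_digit_dial(sample_size):
        """Build a calling list by appending random final digits to
        local exchanges, skipping numbers known to be businesses."""
        numbers = set()
        while len(numbers) < sample_size:
            candidate = f"{random.choice(EXCHANGES)}-{random.randint(0, 9999):04d}"
            if candidate not in KNOWN_BUSINESS_NUMBERS:
                numbers.add(candidate)
        return sorted(numbers)

    print(random_digit_dial(5))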

For many years, random digit dialing was viewed as superior to using published phone numbers since, in many parts of America, from one fifth to one half of the telephones are unlisted. Some groups—single women, for example—have higher rates of unlisted phone numbers than does the general population. And some income and ethnic groups are under- or over-represented when random digit dialing is used.

Most polling is done in the evening, since calls made during the day may be skewed toward women and the elderly. There are also patterns to who answers the phone. For example, young single adults and women in a household answer the phone more often than do men or elderly residents. Therefore, pollsters using random digit dialing try to randomize which adult they speak with by asking a screening question such as, “May I speak to the adult who had the most recent birthday?” Often pollsters find no one at home and, more and more often, they encounter answering machines. Pollsters call back repeatedly, because immediately tossing out non-responsive numbers would bias the survey. There is no reason to suppose that ownership of answering machines or absence from home in the evening is random.

 

“Likely Voters”

Political pollsters don’t really want “public opinion” as it is ascertained from a randomly generated list of householders. They want to know what likely voters think. Therefore, political pollsters prefer a sample limited to likely voters. The opinions of a group of non-voters may be very different from the opinions of a group of voters. This is especially true in primary elections, where a very small percentage of the electorate participates, and those who do often have intense opinions on controversial issues or strong party loyalty. The problem is that “identifying likely voters is still a mystery to most polling organizations,” observes Warren Mitofsky, one of the pioneers of random digit dialing in the 1970s.8

Some polling firms attempt to create a “likely voter sample” by screening those who respond to calls from a list of randomly generated numbers. Screening questions that ask respondents to say whether they intend to vote, are registered to vote, or voted in the last election, are commonly used. But other pollsters distrust respondents’ answers to these questions. They suspect that many people do not want to admit they are not registered or are indifferent to the election. Even though the calls are anonymous, people may want to create a good image for themselves in the mind of the stranger on the other end of the line.

Many political pollsters therefore reject random digit dialing. Rather, as indicated above, they purchase lists of registered voters that indicate in which recent elections a person voted. From these lists, pollsters can create a “likely voter sample.” Surveying is then conducted randomly, but from among the smaller field of known likely voters.
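
As a rough sketch of this approach, the following Python fragment filters a purchased voter file down to likely voters before sampling. The record layout and the two-elections threshold are illustrative assumptions, not an actual vendor format.

    import random

    # Hypothetical voter-file records showing recent turnout history.
    voter_file = [
        {"name": "A. Smith", "voted_in": ["1996 general", "1998 primary", "1998 general"]},
        {"name": "B. Jones", "voted_in": ["1996 general"]},
        {"name": "C. Lee",   "voted_in": []},
    ]

    # Treat anyone who voted in at least two recent elections as "likely."
    likely_voters = [v for v in voter_file if len(v["voted_in"]) >= 2]

    # The survey then samples randomly, but only from the likely voters.
    calling_list = random.sample(likely_voters, k=1)
    print(calling_list)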

 

Polling Errors

A poll’s reliability depends on the size of the sample drawn from the “universe” being surveyed. In national polls, a sample should include 1,500 to 3,000 successful responses; in statewide polls, a good sample should contain 600 to 1,200 responses. Standard calculations related to the pollster’s level of confidence in the sample determine the “margin of error.” If a poll of 150 responses has a margin of error of plus or minus 6%, and one finding from that poll is expressed “20% of respondents feel...,” the true result could range from 14% to 26%—a 12% swing. The likeliest value, however, is 20%, with 14% or 26% much less likely. In a sophisticated social science survey, and in some benchmark polling, adjustments are made in the sample to enable it to accurately reflect known patterns in the demographics of the electorate. Adjustments for age, gender, race, and area of residence can be made to reflect overall census or likely-voter patterns.
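
The arithmetic behind the 150-response example can be checked with the standard formula for the margin of error at roughly 95 percent confidence. This sketch applies the textbook calculation; it is not tied to any particular pollster’s method.

    import math

    def margin_of_error(p, n, z=1.96):
        """Margin of error for a reported proportion p from a sample
        of n responses, at roughly 95% confidence (z = 1.96)."""
        return z * math.sqrt(p * (1 - p) / n)

    # The example in the text: 150 responses, 20% giving one answer.
    moe = margin_of_error(0.20, 150)
    print(f"20% plus or minus {moe:.1%}")  # about 6.4%, a range of roughly 14% to 26%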

When professional surveyors ask questions and offer multiple choices for responses, they present the order of the choices randomly, with a different order from call to call, so that no pattern of preference develops for the first or last option. This is because the first choice available is somewhat favored by respondents and because people may remember most clearly the last option they hear. Question design and sequence are also crucial. Sequences can create expectations and predispositions to answer in a particular manner. For example, to first ask “Does George W. Bush’s alleged drug use disqualify him to serve as president?” is likely to bias responses to a follow-up question asking which presidential candidate is best qualified to direct the nation’s war on drugs.
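
A minimal sketch of the answer-order rotation described above, with a hypothetical question and choices:

    import random

    QUESTION = "Which issue matters most in deciding your vote?"
    OPTIONS = ["Education", "Taxes", "Crime", "Health care"]

    def next_reading():
        """Return the question with its choices in a fresh random order,
        so no option is consistently read first or last across calls."""
        choices = OPTIONS[:]      # copy, leaving the master list intact
        random.shuffle(choices)
        return QUESTION, choices

    # Each call gets its own ordering of the same four choices.
    print(next_reading())
    print(next_reading())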

 

The Polls

The Benchmark Poll

A benchmark poll is taken at the beginning of a campaign to provide a baseline against which the campaign’s effectiveness is measured as its strategy and tactics unfold. The information in this survey will help the campaign managers design the themes, messages, and media and mail solicitations of the campaign. Samples used in subsequent polls will be based on the information about the likely electorate unearthed in this first comprehensive survey.

A benchmark survey takes anywhere from fifteen minutes to half an hour to conduct by phone. Many interviews are conducted in order to increase the reliability of the data. In the benchmark survey, campaigns want to know the likely voters’ demographic characteristics. The sample needs to include enough respondents in each demographic category to validate the predictive value of the data for each subgroup. In a congressional or Senate campaign, benchmark surveys are usually conducted about six months ahead of the primary.9

Voter familiarity with the candidate is one of the most important measurements taken in a benchmark poll. Do voters know who the candidate is? Do they view the candidate, and his or her opponent, favorably or not? What characteristics of the candidate resonate with likely voters? What other elected officials in the area or nation do voters view favorably? By knowing which politicians voters like, campaign managers can seek helpful endorsements for their candidate. On the negative side, they may try to associate the opponent with the least popular members of the other party.

Positive name recognition is one of the most crucial factors in campaign fundraising. Candidates hoping to challenge an incumbent or a front-runner use benchmark polls to help attract campaign contributions. If a Democratic contributor is deciding whether to send his check to Al Gore or Bill Bradley, one of the factors he is most likely to consider is which candidate can best challenge the current Republican front-runner—George W. Bush. If Bradley’s positive name recognition is increasing faster than Gore’s, then Bradley just may get the check.

In a benchmark poll, the likely voters will also be asked detailed questions about the range of possible issues that the campaign might raise. The effectiveness of the candidate’s preferred platform can be tested, as can that of the opponent’s anticipated platform. As Dick Morris observed, the goal is not for a candidate to choose what to stand for, but for the campaign to find out which issues strike a strong chord with voters. Often campaigns are looking for “sleeper issues,” meaning issues which now lack active controversy, but where a large majority of people feel strongly in favor of one side and might be mobilized in the candidate’s support.

Within two months of a primary, some campaigns will do quick surveys to test themes and messages they anticipate their opponents will use. These ad hoc or “just checking” types of polls can help the campaign anticipate what it must be ready to react to.

Tracking and Brushfire Polls

Tracking and brushfire polls are quick surveys done in the last few weeks of a campaign. The message of the campaign has been framed and the advertising is on TV or radio or in mailboxes. Campaigns need to know how likely voters are reacting to their messages. Tracking polls—typically brief five-minute interviews with small numbers of voters, conducted at least weekly—give the candidate almost instant reactions to the messages she and her opponent are delivering.10 Brushfire polls are used to identify a decline in support among any particular demographic or issue group. If a hemorrhage of support is occurring, the campaign can scramble to deliver a targeted message to stanch the bleeding.

In Illinois in 1998, a Republican candidate for secretary of state ran a series of television spots showing the candidate with the popular incumbent governor. Tracking polls that followed the series of advertisements found that 90% of the targeted voters had seen the ads, but that a surprising number of likely voters did not recognize the governor. The campaign ran the spots again, but this time with a banner that identified both men.11

In a close race in the last week before the election, polling will be conducted nightly to determine where the message of the campaign needs the most support. In a congressional or Senate campaign, managers want to know which geographic areas they need to “blitz” in order to get information to the doors of likely voters.

 

Push Polls

One campaign “polling” technique sometimes used in the last few days or hours before an election, but considered unethical by the American Association for Public Opinion Research (AAPOR), is the push poll.12 A “push” call might go like this: “Hello, Mr. Doe, are you aware that Ms. Evil is in favor of cannibalism?” Or, the telemarketer’s spiel might be a favorable one: “Hello, Ms. Roe, how do you feel about the recent canonization of Mr. Great?” A push poll is not designed to ascertain public opinion. Rather, it is an effort to plant information. Sophisticated push polling may be embedded in a telephone interview that actually seeks data in a legitimate scientific fashion. Push polling is usually done “in house” or is sponsored by an interest group not directly associated with the campaign. Professional pollsters eschew the practice.

 

Exit Polls

Exit polls, and the “what-really-happened?” surveys conducted after an election, are the province of public and academic surveyors of public opinion. Candidates are likely to scan this data with an eye toward repeating a successful strategy or avoiding the same mistake twice. Many pundits criticize the media practice of publicizing exit polls on election day, using them to extrapolate overall results, and then projecting winners before the polls have closed. Some argue that this practice dissuades people who have not yet voted from voting at all, because they may feel the contest that most interested them has already been decided.

In the early 1980s, the state of Washington passed a law that regulated exit polling. Legislators in that state were irritated that, in 1980, national exit polls had led the TV networks to announce as early as 3:00 p.m. (Pacific time) that Ronald Reagan had defeated Jimmy Carter. Statewide candidates felt voters might have been discouraged from going to the polls since the presidential race had already been “decided.” While they didn’t think the announcement had made any difference in the presidential race, they hypothesized that the outcome of local races might have been affected.

 

Polling in Election 2000

The Primaries

In the fall and winter of 1999-2000, presidential candidates will limit their use of polling to statewide surveys in the early primary states of Iowa, New Hampshire, and South Carolina. Primary battles are mainly conducted on television, by direct mail, and through free media appearances. Candidates in the primaries will do tracking polls to measure the effectiveness of their advertising and appearances. Their polls will also attempt to identify strong and weak areas of support geographically.

Few primary candidates will spend much of their precious war chests on polls used to select and shape their message. Candidates, especially those who are challenging an incumbent or front-runner, know their themes and are not likely to alter them much. But they do need to know whether their overall message is being heard.13 Since only two viable candidates will survive past the end of March, none of the primary candidates will conduct nationwide surveys. The expense is just too great.

However, candidates may spend money on polling in order to “block” another campaign. Candidates with a lot of money will sometimes buy polls in primary states where they want to obstruct their opponents’ campaigns. In 1996, Bob Dole hired three or four pollsters to survey some states in order to monopolize local firms and make it more difficult for his opponents to survey.14 In localities where public polling is high quality and frequent—New York, for example—campaigns may conduct fewer polls. They can track how things are going by reading media polls. While good campaigns are suspicious of public polls—their samples are drawn from the general public and often do a poor job of identifying likely voters—Hillary Clinton and Rudy Giuliani will be able to track their progress using the frequent and generally high quality polling of the many New York media outlets.

 

The General Election

Once nominated, the Republican and Democratic nominees will each hire a national polling consultant while continuing to use many statewide pollsters. The benchmark polls in the presidential race will be conducted in the summer of 2000. Following this, each campaign will conduct a half dozen state-by-state surveys in August and September to track the effectiveness of its candidate’s message and likely voters’ reactions to paid media and campaign events.

By October, both campaigns will conduct tracking polls every night. These daily polls will measure the effectiveness of paid media, perceptions of the candidate’s image, and the sometimes vacillating inclinations and motivations of likely voters. Though the nightly sample is small, and the margin of error large, the daily polls are compounded into multiple-day sets which have greater accuracy. These polls track voters’ immediate reactions to events and show the campaign the trajectory of a trend. Campaigns can then fine tune and target their message in an attempt to micro-manage voter behavior.15
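
A sketch of how nightly numbers might be compounded into multiple-day sets, using a rolling average over hypothetical support figures:

    # Hypothetical nightly tracking results: candidate support in percent.
    nightly = [44, 47, 43, 46, 48, 45, 49]

    def rolling_average(values, window=3):
        """Average each night with the nights just before it, trading
        immediacy for a larger combined sample and a smaller error."""
        smoothed = []
        for i in range(len(values)):
            chunk = values[max(0, i - window + 1): i + 1]
            smoothed.append(round(sum(chunk) / len(chunk), 1))
        return smoothed

    print(rolling_average(nightly))  # the smoothed series reveals the trend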

 

Notes

1. Dick Morris, Behind the Oval Office (New York: Random House, 1997).

2. Campaigns and Elections, online edition [www.campaignline.com], accessed 8/3/99.

3. Ernie Tani, political pollster for National Opinion Research Center, University of Chicago, e-mail letter 8/4/99.

4. For example, Glen Bolger’s firm, Public Opinion Strategies, polled for 50 House and 5 Senate candidates in 1998, while Geoffrey Garin, Peter Hart, and Fred Yang together surveyed for 15 Senators and 14 Congresspersons. See Major Garrett, “Welcome to the Pro Bowl,” US News, online edition [www.usnews.com/usnews/issue/981102/2pro.htm], accessed 11/2/98; National Foundation for Women Legislators website [www.womenlegislators.org].

5. Ben Knuth, political pollster for Zogby International, a non-partisan polling firm [www.zogby.com], e-mail letter 8/10/99; Rod McCollough, campaign consultant and pollster, interview with author, 8/3/99; Geoff Becker, political pollster for The Tarrance Group, e-mail letter 8/3/99.

6. McCollough.

7. Becker.

8. “Pollsters Recall Lessons of ‘Dewey Defeats Truman’ Fiasco,” no author cited, CNN Interactive [http://europe.cnn.com/ALLPOLITICS/stories/1998/10/18/pollsters.mistake.ap/], accessed 10/18/98.

9. McCollough; Becker.

10. McCollough; Becker.

11. McCollough.

12. American Association for Public Opinion Research website, ethics page [www.aapor.org/ethics].

13. Becker.

14. McCollough.

15. McCollough; Becker.

 


Kevin Pobst teaches social studies at Downers Grove South High School in Downers Grove, Illinois. He has conducted many polls with his AP American Government and Government classes.

 

Teaching Activities:

Citizenship Skills for Students

 

Polling education should be a part of social studies education. Students need to understand how polling is done and how it is used by the media and political campaigns. In a world awash in data, good citizens must be able to distinguish good data from bad data and do their best to make sure that data is used responsibly. Because political news reporting is driven by polls, good citizens must be critical consumers of such reporting and of the data on which it is based. The following are some recommended activities for students.

 

Learn to “Read the Box”

One key skill for consumers of media polls is to learn to “read the box.” Poll data presented on television almost never includes an explanation of the methodology. But nearly all national media polling organizations, e.g., Time/CNN, involve both print and electronic media. The print version of the poll, and its accompanying analysis, will almost always include an explanation of the method by which the data was gathered. This information is often presented in a “box.” Non-profit and academic polling centers do the best job of explaining their methods. For example, the Pew Research Center for the People and the Press provides excellent “About This Survey” explanations.

In considering the results of a poll, responsible survey consumers should ask:

> When was the survey conducted? Is there any reason that the particular days selected would produce a skew or bias in the results?

> How was the sample selected or adjusted? Are there any reasons why this particular survey might be less reliable if the sample were not adjusted? Who was screened out of the sample? For example, for political campaigns, how were likely voters identified?

> What was the wording of the questions and in what sequence were they asked? Were questions about the same issue asked in varying ways to double-check the consistencies of a respondent’s answers within the survey? Does there seem to be any bias or ambiguity in the wording or any contamination of questions due to their proximity to other questions?

> Were the issues polled ones which respondents might consider highly personal or sensitive? The wording, options, and sequence of questions on delicate issues are more crucial than is the case with less personal questions.

> What does the sampling error statement say? Are you satisfied that the pollster’s conclusions are based on a sufficient sample size?

 

Analyze the Nature and Effects of Polling

As widespread as polling is in our political culture, in the eyes of many political pundits and average citizens, polling is controversial. Students who study polling should research and discuss these questions about the nature and effects of polling:

> Will leaders lose their ability to lead? Some critics of polling say that as politicians become dependent on surveying to plan campaigns and make judgments once in office, their ability to act from conviction, to take courageous stands, and to provide leadership will be eroded.

> Do polls promote superficial popularity contests in place of true issue debates? Some founding fathers, James Madison in particular, feared the irrational emotions of the masses and tried to insulate policymaking in our republican system from the winds of public opinion. Does constant polling subject government to the “tyranny of the masses”? In some polls, everyone’s view counts the same, whether it is informed or prejudiced, reflective or reactive. On the other hand, when polls are themselves insulated from the opinions of citizens who do not vote, do they present a fair view of the issues and needs of all the people?

> Do the media use polls to actually manage the news? The press may serve as both pollster and scorekeeper for political campaigns. Conducting a poll allows a newspaper or TV station to create “news” and then cover the very news it has created. Moreover, in covering the “horse race” of a campaign, are media distracted from covering the more important issues of substance in a campaign?

> Are exit polls harmful? Are some voters discouraged from voting by the early release of exit polls and, if so, how does this affect contests lower on the ballot?

> Do reporters and citizens accept poll results too willingly? Many analysts worry that political reporters naively and uncritically toss off survey results with the same abandon with which they offer their personal assessments of political issues. While Sam Donaldson’s opinions are taken by most viewers as subjective, many accept his citing of poll results as “fact.” Even columnists in major papers seldom differentiate between high-quality and poor-quality survey data.

> Do people lie to pollsters? Most pollsters say a majority of the public does not lie on surveys because there is no clear incentive to do so. But, as suggested above, some pollsters suspect that respondents are reluctant to give answers that they think may cast them in a bad light. This may be especially true when race is a factor. For example, in the New York City mayoral race of 1989 involving African American candidate David Dinkins, and in the Virginia gubernatorial race of the same year involving African American Douglas Wilder, there was evidence that voters were less likely than usual to be candid in reporting their choice as they risked the appearance of voting based on racial preference. The most glaring disparities between what people said and the actual results were revealed by exit polls, especially those conducted as face-to-face interviews rather than anonymous telephone questions. One hypothesis is that people who vote for a candidate on the basis of skin color feel shame or fear of being judged if they admit for whom they voted.

Conduct a Classroom Poll

One of the best ways to make concrete the values and drawbacks of polling is to have students conduct a poll. Designing and conducting a purposeful poll is genuine problem solving. Problem Based Learning activities call for students to perform activities such as determining what questions ought to be asked in order to gather the best data possible to solve a problem. Furthermore, the acquisition of data, the organization and use of information, and social participation—all skills promoted by classroom polling—are embedded in the NCSS thematic strands of social studies knowledge essential for effective citizenship. My experience in conducting polls with students over the last fifteen years leads me to recommend the following procedures.

 

Laying the Groundwork

Begin with an examination of the nature of polls and good principles of polling. Students might read the preceding article and talk about it in class. I ask students to work toward the fulfillment of three basic polling principles:

1. Generate the best sample possible

2. Create a set and sequence of questions with the least possible bias

3. Conduct the survey under conditions that minimize bias and maximize confidence in the results

Unless you teach under unique circumstances (large numbers of highly motivated students, a large bank of telephones, and sophisticated software to process your data), you’ll need to modify professional surveying techniques. Because this will compromise the preceding principles to some degree, make sure that students are aware of the consequences of these modifications.

While polls conducted by students among their peers or parents enable them to ask political questions, tabulate results, and formulate generalizations, it is more effective to take the polling process further by putting students in contact with the community and using techniques like those employed by professionals. Student engagement rises when the polling they are doing is purposeful and as scientific as possible.

Try to draw on community resources. Most areas have market-research firms with employees knowledgeable in surveying techniques. The social science departments in local colleges can yield good advisors as well. Some large newspapers have their own poll editor. We’ve had good luck contacting candidates and asking them to refer us to the pollsters used in their campaigns.

Discuss what other factors might affect the results of a poll, using the questions Who? When? and How?

> Who? Does it matter how the people questioned are selected—the source of names or numbers, the area of residence, race, gender, age?

> When? Does it matter when the responses are solicited—which day of the week, which hours of the day?

> How? Does it matter how the responses are solicited—by phone, on the street, door to door?

Discuss the concepts of sample and randomness. How can random, representative samples best be selected?

 

Selecting a Topic

When brainstorming for ideas, aim for topics with high student and community interest to increase the response rate. If possible, conduct your poll in cooperation with the school or community newspaper. Publicizing the results and inviting feedback makes the students more accountable and the situation more realistic. For best results, encourage students to choose an issue they know their parents are likely to be highly interested in. Often a local issue—a school tax referendum or a drug-testing program for high school athletes—will generate a strong response. Avoid issues that respondents may perceive to be too controversial or inappropriate for high school students to ask about. Your goal is not to intrude on people’s privacy but to solicit willing responses.

In general, the more narrowly focused your inquiry, the more meaningful the results. Broad questions may seem promising when they are composed, but the data they produce may be vague and difficult to interpret. Review the questions created by professional pollsters. Ask students how and why these questions might have been narrowed from broader questions.

 

Choosing the Sample

For a class phone survey, it may not be practical to attempt completely random generation of phone numbers (though some of your students may be able to do this on their computers). But here’s a way to get close. Take a local phone book and devise a scheme for tearing out pages so that each pair of students has a page to work from. Ask students to count off a set number of names on their page, say every seventh listing, and to highlight those names. Eliminate businesses, as you want to survey only households. While the degree of randomness you achieve with this method may not equal that of a professional pollster, your efforts will encourage students to take the task seriously. The more unbiased the sample, the better your results.
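
For classes with computer access, the counting-off procedure is easy to mimic. This sketch assumes the page’s listings have already been typed in with businesses removed; the household names are placeholders.

    # Placeholder listings standing in for one phone-book page.
    page_listings = [f"Household {i}" for i in range(1, 120)]

    def every_nth(listings, n=7, start=0):
        """Select every nth listing on the page, the way students
        count off and highlight names by hand."""
        return listings[start::n]

    selected = every_nth(page_listings)
    print(len(selected), "households selected from this page")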

Preparing the Script

Students are more comfortable and more uniform in their solicitation of responses if they use a script. And scripts are a realistic part of telephone polling. Design a questionnaire with a question sequence that takes no more than three to four minutes, because some adults will not be eager to answer questions from a teenager for much longer. Do not attempt to disguise the fact that this poll is being conducted by teenagers; we’ve tried, and it doesn’t work. Interestingly, we’ve had lower refusal rates when we owned up to it right away.

The script should include:

> a standard introduction of caller to respondent, stating the intention of the survey, who is conducting it, how many minutes it will take, and a promise of anonymity and confidentiality

> screening questions to eliminate respondents who are not to be included in your sample. You may want to limit your sample to adults or to likely voters or perhaps to women. To those who do not qualify for the sample, script a polite “Thank you and good night.”

> the actual opinion questions designed to generate the responses you want

> a few demographic questions as appropriate. This will enable your class to cross-tabulate results by gender, by age, by neighborhood, or by race. Be cautious. In one community where we surveyed, citizens were extremely sensitive to questions about their race. In another community, a few years later, we picked up very little sensitivity. You are the best judge of your community. If demographic questions would offend your respondents, then don’t ask. This information is useful but not necessary.

Prepare a large quantity of response sheets. Each successful contact should be recorded on a separate sheet to enable students to sort and cross-tabulate results. Of course, if you or your students have a cross-tabulation program, you won’t need to tabulate by making piles, the way we do!

 

Conducting the Survey

Have students determine a reasonable number of households to poll on a given evening. Working in pairs, one student can pick random phone numbers, while the other student dials, reads the questions, and codes the responses. Ten pairs of students with a two-to-three-minute scripted questionnaire can make 100 to 120 successful contacts in one to two hours. If you can, conduct the poll over two or three evenings, and try to call back households that do not answer.

If your school has several phones students can use in the evening, your task will be simplified (on one occasion a local bank allowed us to use its office phones for two evenings). Perhaps a local business would lend its resources. If a group of phones is not available, students can call from home. While this can introduce bias—some students may follow a less disciplined pattern—most will take the task seriously and complete it with care.

Warn students ahead of time not to be discouraged if they get no response or no cooperation from as many as eight out of every ten numbers called. This is not unusual. Students may initially be intimidated, but most will warm to it after a few calls. Students will discover many things that will cause them to think about how difficult it is to survey public opinion accurately. They will be frustrated by answering machines. They will be surprised by the number of respondents for whom English is not the primary language. Elderly people may pose a challenge to their ability to speak clearly. Testy adults will challenge their confidence.

Telephoning is superior to surveying on the street or door to door. Street surveys are not random and may be intimidating or unsafe for students. Surveying door to door can be randomized but with great difficulty. It is also painfully slow and laborious. If phone polling can’t be arranged and students survey by other means, develop a strategy for randomizing the addresses surveyed or the people approached.

 

Compiling and Analyzing the Results

Tabulate the results of the poll in both absolute numbers and percentages. Asking students to analyze both absolutes and percentages can reveal the varying impact of numbers displayed in different ways. If you gathered demographic data in your survey, or if you asked multiple questions, have students cross-tabulate demographic data with opinion responses. Because you have one sheet per respondent, you can use the low-tech method of cross-tabulating by hand. In either case, simple tabulation or cross-tabulation, encourage students to make generalizations based on the data.
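
Where a computer is available, a few lines of Python can stand in for the piles of response sheets. The demographic and opinion answers below are invented for illustration.

    from collections import Counter

    # One tuple per response sheet: (demographic answer, opinion answer).
    sheets = [
        ("woman", "favor"), ("man", "oppose"), ("woman", "favor"),
        ("man", "favor"),   ("woman", "oppose"),
    ]

    # Count each (demographic, opinion) pairing -- the electronic
    # equivalent of sorting the sheets into piles.
    crosstab = Counter(sheets)

    total = len(sheets)
    for (group, opinion), count in sorted(crosstab.items()):
        print(f"{group:6} {opinion:7} {count:3}  ({count / total:.0%})")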

Ask students to review the survey questionnaire and methods in light of their experiences conducting the poll. What do they think is the probable validity of the generalizations derived from the poll? Do they think any biases might have been produced by the methodology used? If you have polled on a question of concern to leaders of the community or the school administration, for example, have students deliver their poll results to affected adults. Getting feedback from adults enmeshed in an issue puts polling in a true public policy-making context.

Websites

ABC News Poll

www.abcnews.go.com

 

Alliance for Better Campaigns

www.bettercampaigns.org

 

American Association for Public Opinion Research

www.aapor.org

 

Campaigns and Elections magazine

www.campaignline.com

 

CBS/New York Times News Poll

www.cbs.com, www.nytimes.com

 

Center for Public Interest Polling/Eagleton Poll (Rutgers)

www.rci.rutgers.edu/~eagleton

 

Center on Policy Attitudes

www.policyattitudes.org

Committee for the Study of the American Electorate (Curtis Gans)

tap.epn.org/csae

 

Fox News/Opinion Dynamics

www.opiniondynamics.com

 

The Gallup Organization

www.gallup.com

 

Harris Poll

www.harrispollonline.com

 

LA Times Poll

www.latimes.com/home/news/polls

 

Market Strategies

www.marketstrategies.com

 

 

National Council on Public Polls (ethics and public education organization)

www.ncpp.org

 

National Network of State Polls

www.irss.unc.edu/depts/nnsp

 

Pew Research Center for the People and the Press

www.people-press.org

 

Polling Report.com (an online journal of public opinion polling)

www.pollingreport.com

 

Public Opinion Strategies (Glen Bolger)

www.pos.org

 

Roper Center for Public Opinion Research (Connecticut)

www.ropercenter.uconn.edu

 

The Tarrance Group (Ed Goeas)

www.tarrance.com

 

USA Today/CNN/Gallup

www.usatoday.com

 

Washington Post.com (Richard Morin)

www.washingtonpost.com

 

Zogby International (John Zogby)

www.zogby.com

©1999 National Council for the Social Studies. All rights reserved.