Remember that time you asked me how to increase the credibility of public polling in Turkey? No? Well, it turns out I have thoughts on the matter. Here they are.

Transparency, Transparency, Transparency:  This is the single most important factor. Given the amount of flawed data out there, every pollster who releases election polls publicly should voluntarily provide the following information for the sake of increasing public trust in the science. Reporters should ask for it. Not all of it needs to be reported by the media — it typically isn’t — but it provides important information, especially to professionals and academics, about how data were collected and processed. Allowing outsiders to review and discuss the methodology increases the rigor of the research. Ultimately, the result is greater public confidence in polling data.

Here’s the type of information that would be helpful. (The first four bullets should be reported in every media story that references a poll, without exception.)

  • Sample size, sample type and universe: Here’s an example: “A national sample (or an urban sample, or a regional sample of 25 provinces) of n=2000 adults in Turkey aged 18 and over.” If the pollster narrowed the sample to include only likely voters, he or she should explain how that determination was made.
  • Fieldwork Dates: Knowing when the data were collected provides important context about events in the political environment that could affect perceptions of the candidates (e.g., a deadly mine disaster, or a huge corruption scandal). Fielding dates also tell you whether there was enough time to return to selected respondents who weren’t available on the first try. With a large sample, a day or two in the field isn’t enough for callbacks, so the data end up biased toward those who answer their phones or their doors on the first attempt.
  • Margin of error for the sample as a whole: “The margin of error for the n=2000 sample is ±2.19% at the 95% level of confidence. The margin of error for demographic and geographic subgroups varies and is higher.” (Curious where that 2.19% comes from? There’s a quick worked example right after this list.)
  • Who’s the Funder: This is critical information. Who’s paying for a survey may affect the credibility of the data. It may not. But you have no way of judging if you don’t know who’s coughing up the dough. In the US, few pollsters would jeopardize their reputation for accuracy and reliability by swinging data in favor of a well-funded or powerful interest (some would and have, but that’s the exception, not the rule), but revealing who’s paying for the research is standard. Even if Turkish pollsters don’t monkey with the numbers (and lots don’t), the perception that pre-election polling is cooked is well-founded, pernicious, and must be addressed if opinion research is going to be used as a credible tool for illuminating public policy debates and elections.
  • How the interviews were conducted and how respondents were selected: Were interviews conducted face-to-face? If so, how were respondents selected? Were the interviews conducted by telephone? What proportion of landlines versus mobile numbers was used? How many attempts were made to call selected numbers back if there was no answer? What times of day were interviews conducted? If the answer is “online poll,” step away from the story.
  • Response rates: What percentage of selected respondents participated in the survey? This varies by country and, sometimes, by the type of survey. The pollster should reveal what standard response rates are in Turkey for similar surveys. An abnormally high or low response rate should raise red flags.
  • Question wording and order: How a question is asked and where it appears in a survey directly affect responses. Respondents should not be “primed” to answer a particular way; therefore, a vote-preference question should be one of the first questions respondents are asked. The list of candidates should be presented exactly as the names appear on the ballot, with no extraneous information that voters won’t see when they enter the polling station. The percentage of respondents who answered “don’t know” or “undecided” (a critical data point in election polling) should also be reported, along with whether the “don’t know” option was prompted or unprompted.
  • Quality Control: How many interviews were verified in the field by supervisors or called back to make sure respondents really took the survey? I know it’s hard to believe, but sometimes interviewers are lazy and fake interviews! Quality control is technical and time-consuming, and it’s a big part of why methodologically sound polling is expensive. Rigorous quality control by outsiders reduces the chances that data are falsified, especially in the processing phase, where someone *might* be tempted to place a finger on the scale. Opening data sets to outside scrutiny is a good way to expose and prevent this. (There’s a sketch of how back-checks can be chosen right after this list.)
  • Sampling and weighting procedures: It’s easy to baffle non-specialists with statistics, but polling isn’t rocket science, and random sampling procedures are guided by industry standards. Pollsters should reveal whether their samples are stratified and by what variables. They should share how sampling points were selected. They should also reveal whether the final data were weighted and by what factors. (A minimal weighting sketch follows below.)
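
That margin-of-error figure isn’t magic; it falls out of a one-line formula. Here’s a minimal sketch of the standard calculation for a simple random sample, using the conventional worst-case assumption of an even 50/50 split (the numbers match the n=2000 example above; stratified or clustered designs have design effects that widen the real margin):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error for a simple random sample.

    z=1.96 corresponds to the 95% confidence level; p=0.5
    maximizes p*(1-p), giving the conventional "headline" margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(2000):.2%}")  # -> 2.19%
print(f"{margin_of_error(400):.2%}")   # a subgroup of 400 -> 4.90%
```

Plug in the smaller n of any demographic or geographic subgroup and you can see immediately why subgroup margins are always higher.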

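On the back-check point: the mechanics are simple, and the property that matters is that selection is random, so interviewers can’t predict which of their interviews will be verified. A minimal sketch (the 10% rate is an arbitrary illustration, not an industry standard I’m asserting):

```python
import random

def select_backchecks(interview_ids, rate=0.10, seed=None):
    """Randomly select completed interviews for verification callbacks.

    The rate is illustrative; the point is the randomness, which
    leaves an interviewer no way to guess which cases get checked.
    """
    rng = random.Random(seed)
    k = max(1, round(len(interview_ids) * rate))
    return rng.sample(interview_ids, k)

# e.g., flag 200 of 2,000 completed interviews for callbacks
to_verify = select_backchecks(list(range(1, 2001)))
```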

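And on sampling and weighting, here’s the most basic form of post-stratification weighting. The variable and target shares are hypothetical; real schemes typically rake across several variables (age, region, education) against census figures, and the transparency ask is simply that pollsters publish which variables and targets they used:

```python
# Minimal post-stratification sketch: reweight a sample so one
# variable (a hypothetical 50/50 gender split) matches known
# population shares.
population_share = {"female": 0.50, "male": 0.50}  # hypothetical targets
sample = [{"gender": "female"}] * 1150 + [{"gender": "male"}] * 850

n = len(sample)
achieved = {g: sum(r["gender"] == g for r in sample) / n
            for g in population_share}

# Each respondent's weight = target share / achieved share for their cell.
for r in sample:
    r["weight"] = population_share[r["gender"]] / achieved[r["gender"]]

# Weighted group sizes now match the 50/50 target: 1000 and 1000.
for g in population_share:
    print(g, round(sum(r["weight"] for r in sample if r["gender"] == g)))
```

Published weighting targets let outsiders confirm the weights are correcting demographics, not nudging the horse race.
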
Wow! This sounds like a lot of work! But one of the most interesting outcomes of the 2012 election in the US, in which a high-profile, well-respected research outfit (Gallup, in about as epic a scandal as pollsters are allowed to have) got the election wrong, was the degree of public scrutiny Gallup allowed of its methodology to figure out what happened. I’m sure it was painful (no one likes to admit they did things wrong) but the result is better, and more credible, public research. Gallup’s reputation took a hard hit, but they dealt with it the best way they could. If you really want to learn more about what happened to Gallup (and why wouldn’t you? Pollsters are awesome), read this report.

Given that major Turkish pollsters, including a well-respected one, got the Presidential election wrong, this issue isn’t going away soon. Historically low turnout (preliminarily 74%) might have thrown some pollsters for a loop, but it shouldn’t have, given the timing and the dynamics of the election. The challenge, as US pollsters have found, is always the same: predicting which voters will cast ballots and which will stay home. Turkish pollsters, who already face credibility issues, need to confront this issue with transparency.

**Quirk Global Strategies isn’t in the business of public polling. We’re strategic pollsters, which means private clients use our data to guide their internal political or communications strategies (though not in Turkey, usually). This is an important distinction.