Here we are again, weeks out from another Turkish election, arguing on Twitter and in bars about the pre-election polls on the Presidential race between the shouty guy and the bread guy. As much as we’d like to, we can’t really ignore this election, so wouldn’t it be great if someone explained how to tell if a poll in the paper is credible or not?

It’s your lucky day! Here are some basic, but important, concepts to understand before you write about, argue about, print, or tweet publicly released election polls (and this goes for polls everywhere, not just Turkey).

 

  • How many people were interviewed? It amazes me how few press articles include this mandatory information. A nationally representative sample should include at least 800 randomly selected respondents, which gives a margin of error (MOE) of 3.5% at the 95% level of confidence.* A larger sample size does not necessarily mean the survey is better (academics may argue otherwise, but their research goals are different), so don’t fall into that trap. For example, the margin of error for an n=2000 sample is 2.2% (compared to 3.5% for n=800). That’s not a big difference and won’t matter much except in the closest elections. However, if the pollster is sharing data from smaller demographic or geographic subgroups within the national sample (men, women, Kurds or Istanbullus, for example), a larger sample size becomes more important. Remember, the MOE increases as the number of interviews decreases. If Istanbul makes up 19% of the country (and in a nationally representative sample, it will), an n=800 sample will contain only about 152 interviews among Istanbullus, with a MOE of roughly 8%. In an n=2000 sample, there will be about 380 interviews among Istanbullus (MOE of 5%). I’m slightly more comfortable with the latter data than the former because the margin of error is smaller. Do you like to play around with sample sizes? I do! There’s an app for that (and a quick back-of-the-envelope sketch after this list, too).
  • Who paid for it? This is Turkey, so this is probably the single most important question. In the US, major media outlets (and think tanks) commission credible research firms to conduct election surveys (the “CNN/Washington Post poll,” for example), the results of which papers report as news. Since they are in the business of reporting things that are more or less true, they have a lot at stake in getting the numbers right. The media in Turkey operate according to different principles. That a media outlet reports data tells us little more than in whose favor the numbers are likely to have been cooked. Methodologically sound research is expensive in Turkey ($20,000 to $30,000 for data collection alone), and for-profit research firms are unlikely to undertake survey work for fun, even if they say they do. Someone’s paying for it, and if you can’t find out who, don’t report it.

 

  • Who was interviewed? Election polls are designed to predict election outcomes. It sounds harsh, but non-voters’ opinions don’t matter. Therefore, only likely voters should be polled. Because voting is compulsory in Turkey, election participation is very high (88%-90%), so nearly all adults are eligible to participate in an election survey. In contrast, election polling in the US is extremely complicated: only about half of American adults can actually cast a ballot (by virtue of having registered), and among those, participation rates vary from the extremely low (15% in low-interest primaries) to the less low (about 65% in presidential elections). Predicting who should be included in a sample of likely voters is extremely challenging; misreading the composition of the electorate was one of the reasons some major polling firms got the 2012 US election wrong. Because of its timing (10 August, mid-vacation), its uniqueness (it’s the first time Turkish voters will directly elect a president) and low interest in the candidates among the tatil class, Turkey’s presidential election presents a unique challenge to election pollsters. Is there going to be a substantial drop-off in participation among certain types of voters who won’t bother to return to Istanbul from Bodrum’s beaches to vote? Maybe! Pollsters who care about accuracy will take this into account. They should explain how they’re addressing this issue, and how, if at all, they’re screening or weighting their samples to exclude those who won’t vote (there’s a toy example of why this matters after this list). Ask! Ask! Ask!

 

  • How did they conduct the interviews? Generally, in probability samples (the only kind that produces representative data and the only kind I will discuss), a respondent is selected at random to participate in either a face-to-face (F2F) or telephone interview. F2F has always been the norm in Turkey because of low phone penetration, but that’s changing quickly as more and more people obtain mobile phones, and mobile sampling is becoming increasingly common. Both methodologies have biases, and you should know which one the pollster used so you can be aware of them. I can go on for days about the pros and cons of each (it’s a wonder I have any friends at all). Online, web-only surveys are bogus. If you ever want to start a flame war with me on Twitter, report on an online survey like this one without using the word “worthless.”
  • What’s the polling firm’s track record? Accuracy is a pollster’s currency. The great thing about election polling is there’s a day of reckoning. You either get it right and can be smug (it’s science!) or you’re wrong and no one should listen to you anymore. Given the dearth of credible election polls in Turkey, calling previous election results correctly boosts a pollster’s credibility even more in my book. As far as I know, and I don’t know everything, one firm did that publicly in the March local elections: Konda. Why data released by firms that got a recent election completely wrong are treated as credible is a mystery to me. It’s easy to check.
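
For the sample-size question above: if you want to check the arithmetic yourself, here’s a rough sketch (in Python) of the textbook margin-of-error formula for a simple random sample at the 95% confidence level, assuming the worst-case 50/50 split. Real polls also carry design effects and weighting that this ignores, so treat it as back-of-the-envelope, not gospel.

```python
import math

def moe(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for label, n in [("national, n=800", 800),
                 ("national, n=2000", 2000),
                 ("Istanbul share of n=800", round(0.19 * 800)),     # ~152 interviews
                 ("Istanbul share of n=2000", round(0.19 * 2000))]:  # ~380 interviews
    print(f"{label}: +/- {moe(n):.1%}")

# national, n=800: +/- 3.5%
# national, n=2000: +/- 2.2%
# Istanbul share of n=800: +/- 7.9%
# Istanbul share of n=2000: +/- 5.0%
```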
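And for the question of who gets interviewed: here’s a toy illustration, with completely made-up numbers, of why turnout assumptions matter. The same raw sample produces different horse-race numbers depending on how you model who actually shows up on 10 August. This is a cartoon of the idea, not anyone’s actual likely-voter model.

```python
# Toy example (made-up numbers): suppose two candidates split all adults 52/48,
# but one candidate's supporters are likelier to still be at the beach on election day.
all_adults = {"Candidate A": 0.52, "Candidate B": 0.48}   # raw preference among all adults
turnout    = {"Candidate A": 0.90, "Candidate B": 0.78}   # assumed probability of actually voting

# Weight each candidate's share by their supporters' expected turnout, then renormalize.
weighted = {c: share * turnout[c] for c, share in all_adults.items()}
total = sum(weighted.values())
likely_voters = {c: v / total for c, v in weighted.items()}

for c in all_adults:
    print(f"{c}: {all_adults[c]:.0%} of adults -> {likely_voters[c]:.1%} of likely voters")

# Candidate A: 52% of adults -> 55.6% of likely voters
# Candidate B: 48% of adults -> 44.4% of likely voters
```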

 

This isn’t all there is, but it’s plenty, and you don’t have to be a specialist to interpret it (as long as you understand probability sampling). Having the answers to these questions will make it easier to assess the quality of the polls you see in the Turkish press and on Twitter. Armed with this information, you’ll be able to say “this poll sounds like BS, I’m not going to report/tweet it,” thus depriving bogus pollsters of the media oxygen they need to survive. If you can’t get answers to these questions, don’t report the data.

 

TOMORROW (or some day in the near future)! How to Make Public Election Polling in Turkey More Credible 

 

*As long as your universe (the total number of potential respondents) is many times larger than your sample, the margin of error for a random sample of n=800 is essentially the same whether you’re surveying a single city or a country of 78 million. If you don’t understand why this is, or what a margin of error is, get thee to a Stats 101 course and don’t start arguments you’re going to lose.
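
If you want to see why population size barely matters here, this quick sketch uses the same simple-random-sampling assumptions as above, this time with the finite population correction included. Once the population is many times larger than the sample, the correction is essentially 1 and the MOE for n=800 stops moving.

```python
import math

def moe_fpc(n, N, p=0.5, z=1.96):
    """95% margin of error for a sample of n drawn from a finite population of N."""
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

for N in [20_000, 100_000, 1_000_000, 78_000_000]:
    print(f"population {N:>10,}: MOE for n=800 is +/- {moe_fpc(800, N):.2%}")

# population     20,000: MOE for n=800 is +/- 3.39%
# population    100,000: MOE for n=800 is +/- 3.45%
# population  1,000,000: MOE for n=800 is +/- 3.46%
# population 78,000,000: MOE for n=800 is +/- 3.46%
```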

**Quirk Global Strategies isn’t in the business of public polling (or academic research). We’re strategic pollsters, which means private clients use our data to guide their internal political or communications strategies (though not in Turkey). This is an important distinction. Strategic pollsters who collect bogus numbers give bad advice, lose elections and don’t get hired again. Therefore, we strongly oppose BS numbers. You can be certain that strategic polling is being done in Turkey, most likely on behalf of AKP, but you and the Twitter loudmouths you follow are unlikely to get your hands on it.