“The Most Important Election in 2023”: Superforecasting the Vote in Turkey


The Superforecasters’ forecast on the outcome of the 2023 presidential election in Turkey as published in The Economist.

When The Economist’s “The World Ahead” issue was being prepared for publication in October 2022, Good Judgment’s professional Superforecasters were assigning a 71% probability to President Recep Tayyip Erdogan’s victory in the 2023 Turkish presidential election. Since then, we have witnessed Erdogan’s increasingly unorthodox economic policies, which resulted in high inflation, as well as a devastating earthquake that killed more than 50,000 people. These developments led many in the West to start betting on the candidate of Turkey’s united opposition, Kemal Kilicdaroglu. The Superforecasters, however, stayed the course. Apart from a one-day dip to 50-50 just ahead of the first round, they remained consistently at 60–70% for Erdogan throughout the seven months of this question period, and they are currently at 98%.

For Foreign Policy Expert and Superforecaster Dr. Balkan Devlen, this question was close to home. Born in Izmir, Turkey, Dr. Devlen felt many in the media were getting carried away by the narrative of an underdog win. In this interview, he discusses the key drivers behind his forecast, factors that Western commentators failed to take sufficiently into account, and his assessment of what lies in store for Turkey—and the region—after the election.

GJ: The Superforecasters started at 71% for Erdogan. They are now at 98%. How has your forecast evolved over this period? What were the key drivers?

BD: I was consistently around 80–85% throughout the question period. In that sense, my forecast didn’t evolve much, primarily because the key drivers of my forecast didn’t really change.

Number one is the makeup of the electorate in Turkey, which consistently suggests that about 40% of the voters support the more conservative nationalist coalition led by Erdogan. And that didn’t change much over the period, despite economic difficulties and other problems.

Number two is the fact that Erdogan has been in power for over 20 years, and during that period he managed to install his own cadres across the state and bureaucracy, as well as to gain control over almost all of the media. This created an information environment in which the opposition had a hard time reaching voters who were not already anti-Erdogan.

Lastly, Erdogan does not have the luxury of losing. The extreme polarization within Turkish society and Erdogan’s increasing authoritarianism, especially in the last 10–12 years, mean that retiring after the election is simply not an option for him; his family and cronies have also been implicated throughout his rule in corruption, suppression of free speech and media freedom, and so on. Taking those three drivers together, I did not see how the opposition could come together and pull off a win.

GJ: Several polls and many commentators in the West were predicting that the opposition would win. Did you find their arguments convincing?

BD: I did not find the polling or the Western commentary particularly convincing, primarily because they tended to argue as if the elections were taking place outside the context I’ve just discussed. I also think that some were engaging in motivated reasoning and wishful thinking. One argument was that the impact of the earthquake in Turkey could have shifted the balance, but that again is a misunderstanding of the fundamental dynamics in the region.

GJ: What would have made you change your forecast?

BD: Perhaps two things. One, if there had been another candidate, either Istanbul Mayor Ekrem Imamoglu or Ankara Mayor Mansur Yavas, the opposition would have had a much better chance.

Two, if I had seen any high-level defections from the AKP or Erdogan’s close circle prior to the election, that would have suggested that there was at least a fracture within the ruling elite, that they were considering their post-election fates and were breaking ranks and jumping ship. We didn’t see any of that, so I didn’t see any reason to change my forecast.

GJ: Superforecasters as a group are now at 98% probability for Erdogan’s victory. Is there a chance they’re wrong?

BD: Of course, there is a chance that the Superforecasters could be wrong, and I am at 98% myself. There are always black swan events, despite the fact that there are only a few days left before the second round. But I don’t see any particular dynamic today that would suggest that the fundamentals prior to the first round were altered in any meaningful way.

Incidentally, I was expecting that this would go to the second round, partly because of the third-party candidate, Sinan Ogan, who has now declared his support for Erdogan. That base will probably break 2-1, at least, for Erdogan. The implication is that if Ogan hadn’t been in the running, Erdogan probably would have won in the first round.

GJ: What’s in store for Turkey now and for the region?

BD: Well, that’s a big question, and I don’t think there’s enough space here to go into a detailed examination. But I can tell you that the region and Turkey will probably need to accept the fact that Erdogan will be in power as long as he’s alive. And actors in Europe, in the Middle East, in the Caucasus, and elsewhere will need to adjust to that particular fact.

For Turkey, the results will not lead to a more democratic system. There are now hardcore Islamists in parliament as part of Erdogan’s coalition, for example, who are calling for much more Islamist policies.

The election results will also lead to recriminations among the opposition coalition, as the only thing that really unites them is their opposition to Erdogan. Therefore, one can expect turmoil within the opposition parties in the post-election era.

Erdogan will probably consolidate his power. That might provide some stability in terms of regional policies now that the need to play for domestic audiences has decreased. Therefore, we are likely to see a more predictable foreign policy attitude from Turkey. I do not necessarily see, though, much change in the direction in which Erdogan wants to take the country, both domestically and internationally.

I don’t see Sweden’s NATO membership being approved by the Turkish parliament before the end of summer, as it’s just not a priority, although timing is very hard to predict in these cases.

But like I said, a proper answer to this question requires much more detail and a much longer exposition than can be provided in a short interview. One thing is quite clear, though, and that is Erdogan will be in power for the foreseeable future.

How Distinct Is a “Distinct Possibility”?
Vague Verbiage in Forecasting

“What does a ‘fair chance’ mean?”

It is a question posed to a diverse group of professionals—financial advisers, political analysts, investors, journalists—during one of Good Judgment Inc’s virtual workshops. The participants have joined the session from North America, the EU, and the Middle East. They are about to get intensive hands-on training to become better forecasters. Good Judgment’s Senior Vice President Marc Koehler, a Superforecaster and former diplomat, leads the workshop. He takes the participants back to 1961. The young President John F. Kennedy asks his Joint Chiefs of Staff whether a CIA plan to topple the Castro government in Cuba would be successful. They tell the president the plan has a “fair chance” of success.

The workshop participants are now asked to enter a value between 0 and 100—what do they think is the probability of success of a “fair chance”?

When they compare their numbers, the results are striking. Their answers range from 15% to 75%, with a median value of 60%.

Figure 1. Meanings behind vague verbiage according to a Good Judgment poll. Source: Good Judgment.

The story of the 1961 Bay of Pigs invasion is recounted in Good Judgment co-founder Philip Tetlock’s Superforecasting: The Art and Science of Prediction (co-authored with Dan Gardner). The advisor who wrote the words “fair chance,” the story goes, later said what he had in mind was only a 25% chance of success. But like many of the participants in the Good Judgment workshop some 60 years later, President Kennedy took the phrase to imply a more positive assessment of success. By using vague verbiage instead of precise probabilities, the analysts failed to communicate their true evaluation to the president. The rest is history: The Bay of Pigs plan he approved ended in failure and loss of life.

Vague verbiage is pernicious in multiple ways.

1. Language is open to interpretation. Numbers are not.

According to research published in the Journal of Experimental Psychology, “maybe” ranges from 22% to 89%, meaning radically different things to different people under different circumstances. Survey research by Good Judgment shows the implied ranges for other vague terms, with “distinct possibility” ranging from 21% to 84%. Yet “distinct possibility” was the phrase used by White House National Security Adviser Jake Sullivan on the eve of the Russian invasion of Ukraine.

Figure 2. How people interpret probabilistic words. Source: Andrew Mauboussin and Michael J. Mauboussin in Harvard Business Review.

Other researchers have found equally dramatic variation in the probabilities people attach to vague terms. In a survey of 1,700 respondents, Andrew Mauboussin and Michael J. Mauboussin found, for instance, that the probability range most people attribute to an event with a “real possibility” of happening spans about 20% to 80%.

2. Language avoids accountability. Numbers embrace it.

Pundits and media personalities often use such words as “may” and “could” without even attempting to define them because these words give them infinite flexibility to claim credit when something happens (“I told you it could happen”) and to dodge blame when it does not (“I merely said it could happen”).

“I can confidently forecast that the Earth may be attacked by aliens tomorrow,” Tetlock writes. “And if it isn’t? I’m not wrong. Every ‘may’ is accompanied by an asterisk and the words ‘or may not’ are buried in the fine print.”

Those who use numbers, on the other hand, contribute to better decision-making.

“If you give me a precise number,” Koehler explains in the workshop, “I’ll know what you mean, you’ll know what you mean, and then the decision-maker will be able to decide whether or not to proceed with the plan.”

Tetlock agrees. “Vague expectations about indefinite futures are not helpful,” he writes. “Fuzzy thinking can never be proven wrong.”

If we are serious about making informed decisions about the future, we need to stop hiding behind hedge words of dubious value.

3. Language can’t provide feedback to demonstrate a track record. Numbers can.

In some fields, the transition away from vague verbiage is already happening. In sports, coaches use probability to understand the strengths and weaknesses of a particular team or player. In weather forecasting, the standard is to use numbers. We are much better informed by “30% chance of showers” than by “slight chance of showers.” Furthermore, since weather forecasters get ample feedback, they are exceptionally well calibrated: When they say there’s a 30% chance of showers, there will be showers three times out of ten—and no showers the other seven times. They are able to achieve that level of accuracy by using numbers—and we know what they mean by those numbers.
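To make concrete what this kind of calibration feedback looks like, here is a minimal illustrative sketch in Python (not from the article, and using made-up data): it groups a set of hypothetical rain forecasts by the probability that was stated and compares each group with how often rain actually occurred.

```python
# Illustrative sketch with made-up data: checking calibration by comparing
# stated probabilities with observed frequencies of the forecast event.
from collections import defaultdict

# Hypothetical (stated probability, outcome) pairs: 1 = it rained, 0 = it did not.
forecasts = [
    (0.3, 1), (0.3, 0), (0.3, 0), (0.3, 0), (0.3, 1),
    (0.3, 0), (0.3, 0), (0.3, 0), (0.3, 1), (0.3, 0),
    (0.7, 1), (0.7, 1), (0.7, 0), (0.7, 1), (0.7, 1),
    (0.7, 0), (0.7, 1), (0.7, 1), (0.7, 1), (0.7, 0),
]

# Group outcomes by the probability the forecaster stated.
groups = defaultdict(list)
for stated, outcome in forecasts:
    groups[stated].append(outcome)

# A well-calibrated forecaster's 30% calls come true about 30% of the time,
# and their 70% calls about 70% of the time.
for stated in sorted(groups):
    outcomes = groups[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"Said {stated:.0%}: happened {observed:.0%} of the time ({len(outcomes)} forecasts)")
```

Vague phrases like “slight chance” offer no such feedback loop, because there is no number to compare against what actually happened.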

Another well-calibrated group of forecasters are the Superforecasters at Good Judgment Inc, an international team of highly accurate forecasters selected for their track record among hundreds and hundreds of others. When assessing questions about geopolitics or the economy, the Superforecasters use numeric probabilities that they update regularly, much like weather forecasters do. This involves mental discipline, Koehler says. When forecasters are forced to translate terms like “serious possibility” or “fair chance” into numbers, they have to think carefully about how they are thinking, to question their assumptions, and to seek out arguments that can prove them wrong. And their track record is available for all to see. All this leads to better informed and accurate forecasts that decision-makers can rely on.

 

Good Judgment Inc is the successor to the Good Judgment Project, which won a massive US government-sponsored geopolitical forecasting tournament and generated forecasts that were 30% more accurate than those produced by intelligence community analysts with access to classified information. The Superforecasters are still hard at work providing probabilistic forecasts along with detailed commentary and reporting to clients around the world. For more information on how you can access FutureFirst™, Good Judgment’s exclusive forecast monitoring tool, visit https://goodjudgment.com/services/futurefirst/.

The Future of Health and Beyond: The Economist features Good Judgment’s Superforecasts

This summer, Good Judgment Inc collaborated with The Economist for the newspaper’s annual collection of speculative scenarios, “What If.” The theme this year was the future of health. In preparing the issue, The Economist asked the Superforecasters to work on several hypothetical scenarios—from America’s opioid crisis to the possibility of the Nobel prize for medicine being awarded to an AI. “Each of these stories is fiction,” the editors wrote in the 3 July edition, “but grounded in historical fact, current speculation, and real science.”

This was unlike most of the work that Good Judgment Inc does for clients. Our Superforecasters typically forecast concrete outcomes on a relatively short time horizon to inform decision- and policymakers about the key issues that matter to them today. The Economist’s “What If” project instead focused on a more speculative, distant future. To address the newspaper’s imaginative scenarios without sacrificing the rigor that Good Judgment’s Superforecasters and clients have become accustomed to, our question generation team crafted a set of relevant, forecastable questions to pair with each topic.

As a result, The Economist’s “What if America tackled its opioid crisis? An imagined scenario from 2025” was paired with our Superforecast: “How many opioid overdoses resulting in death will occur in the US in 2026?”

“What if biohackers injected themselves with mRNA? An imagined scenario from 2029” was paired with: “How many RNA vaccines and therapeutics for humans will be FDA-approved as of 2031?”

And “What if marmosets lived on the Moon? An imagined scenario from 2055” was paired with: “When will the first human have lived for 180 days on or under the surface of the moon?”

Superforecaster, Social Scientist, and Archaeologist Karen Hagar of Tempe, Arizona, participated in forecasting these “far into the future” questions because, she says, she likes challenges.

“These questions were different than standard forecasting questions which typically resolve a year into the future,” she explains. “Both types of questions have inherent challenges. The questions with shorter resolution require extreme accuracy. One must research and mentally aggregate all incoming information. This includes any possible Black Swan events, current geopolitical and any social developments that may change within the short time frame. The dynamics of predicting outcomes of questions 10-20 years into the future required the same skill, but possibly even more research.”

The most exciting aspects of the “What If” project for Karen included learning the degree to which science has advanced. “For example, uncovering the scientific data regarding CRISPR technology and its application to Alzheimer’s research was amazing,” she says.

In making her forecasts for The Economist, she studied the questions from all angles and played devil’s advocate to challenge her colleagues’ thinking. This technique of red-teaming is frequently used by professional Superforecasters to confront groupthink and elicit more accurate predictions.

“What If” is only one of Good Judgment’s several collaborative projects with The Economist. The newspaper’s “World in 2021” project, which has run annually since “The World in 2017” and looks to forecast key metrics for the year ahead, consisted of questions with shorter time horizons that were of immediate importance to decision-makers.

Superforecaster and Show Producer JuliAnn Blam says she is particularly interested in forecasting questions that focus on economic issues, and the “World in 2021” project “didn’t disappoint.”

“The questions tended to be more pertinent to everyday life and issues that were of practical interest to me,” JuliAnn explains.

The “World in 2021” project included forecasting the world’s GDP growth, ESG (Environmental, Social, and Governance) investment, and work-from-home dynamics. But one of JuliAnn’s favorite questions was about racial diversity of board members in S&P 500 companies.

A screenshot of Good Judgment’s forecast monitor, FutureFirst, featuring the racial diversity forecast for The Economist’s “World in 2021” project.

“That one was hopeful, ‘woke’, and had me looking more closely at what a diversified board of directors can bring to a company’s outlook, marketing, product line, treatment of employees, etc.,” JuliAnn says. “It was a sort of stepping stone to looking into a lot more than just how many companies will appoint board members of color within the next year, and pushed the argument of why they should and what they would gain by doing so.”

Despite having a shorter time span than the “What If” forecasts, the “World in 2021” questions also required taking into account numerous factors, some of which weren’t even on the horizon when the questions were launched in October 2020. Take, for instance, the global GDP question.

“There are so many factors to consider, between Xi and Evergrande and the resultant fallout of the cascade from that default, to new COVID variants stopping workforces, anti-vax movements, the infrastructure bill and the green new deal, and then inflation,” JuliAnn says. “Tons to balance and think about!”

Whether it’s a forecast of next year’s global GDP or the possibility of using the Moon as a base for space exploration in the decades ahead, the Superforecasters always apply their rigorous process and tested skills to provide thoughtful numeric forecasts on questions that matter. As for their reward, Karen puts it best: “The enjoyment from forecasting is honing and improving forecasting skill, acquiring new information, and interacting with intellectuals of the same knowledge base.”

You can find Good Judgment’s Superforecasts on the “What If” questions in The Economist’s print edition from 3 July 2021 or on their website, or ask us about a subscription to FutureFirst, Good Judgment’s forecast monitor, to view all our current forecasts from our team of professional Superforecasters.