Superforecasting® the Fed: A Look Back over the Last 6 Months


The Federal Reserve’s target range for the federal funds rate is arguably the single most important driver in financial markets. Anticipating inflection points in the Fed’s policy has immense value, and Good Judgment’s Superforecasters have been beating the futures markets this year, signaling that the Fed would continue to hike until the June pause while markets and experts alike flip-flopped on their calls.

  • Ahead of the Fed’s March meeting, when Silicon Valley Bank went under, the futures markets priced out a hike and began flirting with the possibility of a cut as early as summer. Leading market observers like Goldman Sachs predicted the Fed would pause, and Nomura predicted it would start to cut at that meeting.

  • Then the futures markets priced in a pause for the May meeting. Experts like Pimco’s former chief economist Paul McCulley also prematurely predicted that the Fed would go on hold. As the date of the meeting approached, the futures markets—as well as most market participants—came to share the Superforecasters’ view that another hike was in the cards, but lagged behind the Superforecasters by nearly a month.

  • In the weeks heading into the June meeting, the futures were oscillating between a pause, a hike, and possibly even a cut. A stream of stronger economic data led experts such as Mohamed El-Erian to forecast that the Fed would continue to raise rates for at least another meeting and perhaps longer. Not the Superforecasters. They have been saying since 2 April 2023 that the Fed would most likely hit pause—a view that, once again, eventually became the consensus.

A comparison of two groups’ forecasts—Good Judgment’s Superforecasters and the futures markets, as captured by the CME FedWatch Tool—over the last four Federal Reserve meetings shows that the Superforecasters assigned higher probabilities to the correct outcome. They were 66% more accurate than the futures (as measured by Brier scores) and showed less noise in their forecasts (as measured by standard deviation).
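A Brier score is the mean squared difference between the probabilities a forecaster assigned and the outcomes that occurred (1 if the event happened, 0 if not); lower is better, with 0 a perfect forecast and 0.25 the score of a constant 50% forecast. A minimal sketch in Python, using made-up probabilities rather than the actual forecasts behind the comparison above:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and binary outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities assigned to "the Fed hikes" over four meetings,
# where the actual outcomes were hike, hike, hike, pause.
# These numbers are illustrative only, not Good Judgment's or the market's data.
superforecasters = [0.90, 0.85, 0.80, 0.15]   # probability of a hike
futures_market   = [0.55, 0.60, 0.70, 0.45]
outcomes         = [1, 1, 1, 0]

# Lower score = more accurate.
print(brier_score(superforecasters, outcomes))
print(brier_score(futures_market, outcomes))
```

The more confident and correct a forecaster is, the closer the score approaches zero, which is why consistently decisive, accurate forecasts beat hedged ones on this measure.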

See our new whitepaper for details. We also provide subscribers with a full summary of all our active Fed forecasts, which is updated before and after each meeting (available on request). 

Good Judgment’s Superforecasters have been providing a clear signal on the Fed’s policy well before the futures and many market participants. Subscribers to FutureFirst™ have 24/7 access to evolving forecasts by the Superforecasters on questions that matter, including Fed policy through the rest of the year and beyond, along with a rich cross-section of other questions crowd-sourced directly from users, including questions on Ukraine, China, and the upcoming US elections.

“The Most Important Election in 2023”: Superforecasting the Vote in Turkey


The Superforecasters’ forecast on the outcome of the 2023 presidential election in Turkey as published in The Economist.

When The Economist’s “The World Ahead” issue was being prepared for publication in October 2022, Good Judgment’s professional Superforecasters were assigning a 71% probability to President Recep Tayyip Erdogan’s victory in the 2023 Turkish presidential election. Since then, Erdogan’s increasingly unorthodox monetary policy has fueled high inflation, and a devastating earthquake killed more than 50,000 people. These developments led many in the West to start betting on the candidate of Turkey’s united opposition, Kemal Kilicdaroglu. The Superforecasters, however, stayed the course. Aside from a one-day dip to 50-50 just ahead of the first round, they remained consistently at 60–70% for Erdogan throughout the seven months of this question period, and they are currently at 98%.

For Foreign Policy Expert and Superforecaster Dr. Balkan Devlen, this question was close to home. Born in Izmir, Turkey, Dr. Devlen felt many in the media were getting carried away by the narrative of an underdog win. In this interview, he discusses the key drivers behind his forecast, factors that Western commentators failed to take sufficiently into account, and his assessment of what lies in store for Turkey—and the region—after the election.

GJ: The Superforecasters started at 71% for Erdogan. They are now at 98%. How has your forecast evolved over this period? What were the key drivers?

BD: I was consistently around 80–85% throughout the question period. In that sense, my forecast didn’t evolve much, primarily because the key drivers of my forecast didn’t really change.

Number one is the makeup of the electorate in Turkey, which consistently suggests that about 40% of the voters support the more conservative nationalist coalition led by Erdogan. And that didn’t change much over the period, despite difficulties when it comes to economics or other issues.

Number two is the fact that Erdogan has been in power over 20 years, and during that period managed to install his own cadres across the state and bureaucracy, as well as to gain control over almost all of the media. This created an information environment in which the opposition had a hard time reaching voters who were not already anti-Erdogan.

Lastly, Erdogan does not have the luxury of losing. The extreme polarization within Turkish society and Erdogan’s increasing authoritarianism, especially in the last 10–12 years, mean that retiring after the election is simply not an option for him; his family and cronies have also been implicated throughout his rule in corruption, suppression of free speech and media freedom, and more. Taking those three drivers together, I did not see how the opposition could come together and pull off a win.

GJ: Several polls and many commentators in the West were predicting that the opposition would win. Did you find their arguments convincing?

BD: I did not find the polling or the Western commentary particularly convincing, primarily because they tended to argue as if the election were taking place outside the context I’ve just described. I also think that some were engaging in motivated reasoning and wishful thinking. One argument was that the impact of the earthquake in Turkey could have shifted the balance, but that again reflects a misunderstanding of the fundamental dynamics in the region.

GJ: What would have made you change your forecast?

BD: Perhaps two things. One, if there had been another candidate, either Istanbul Mayor Ekrem Imamoglu or Ankara Mayor Mansur Yavas, the opposition would have had a much better chance.

Two, if I were to see any high-level defections from the AKP or the close circle of Erdogan prior to the election, that would have suggested that, at least, there is a fracture within the ruling elite, that they were considering their post-election fates, and therefore they were breaking rank and jumping ship. We didn’t see any of that, so I didn’t see any reason to change my forecast.

GJ: Superforecasters as a group are now at 98% probability for Erdogan’s victory. Is there a chance they’re wrong?

BD: Of course, there is a chance that the Superforecasters could be wrong, and I am at 98% myself. There are always black swan events, despite the fact that there are only a few days left before the second round. But I don’t see any particular dynamic today that would suggest that the fundamentals prior to the first round were altered in any meaningful way.

Incidentally, I was expecting this to go to a second round, partly because of the third-party candidate, Sinan Ogan, who has now declared his support for Erdogan. That base will probably break at least 2-1 for Erdogan. The implication is that if Ogan hadn’t been in the running, Erdogan probably would have won in the first round.

GJ: What’s in store for Turkey now and for the region?

BD: Well, that’s a big question, and there isn’t enough space here for a detailed examination. But I can tell you that the region and Turkey will probably need to accept the fact that Erdogan will be in power as long as he’s alive. And actors in Europe, the Middle East, the Caucasus, and elsewhere will need to adjust to that particular fact.

For Turkey, the results will not lead to a more democratic system. There are those in the parliament now, as part of Erdogan’s coalition, who are hardcore Islamists and who are calling for much more Islamist policies, for example.

The election results will also lead to recriminations among the opposition coalition, as the only thing that really unites them is their opposition to Erdogan. Therefore, one can expect turmoil within the opposition parties in the post-election era.

Erdogan will probably consolidate his power. That might provide some stability in terms of regional policies now that the need to play for domestic audiences has decreased. Therefore, we are likely to see a more predictable foreign policy attitude from Turkey. I do not necessarily see, though, much change in the direction in which Erdogan wants to take the country, both domestically and internationally.

I don’t see Sweden’s membership being approved by the Turkish parliament before the end of summer as it’s just not a priority, although timing is very hard to predict in these cases.

But like I said, a proper answer to this question requires much more detail and a much longer exposition than can be provided in a short interview. One thing is quite clear, though, and that is Erdogan will be in power for the foreseeable future.

How Distinct Is a “Distinct Possibility”?

Vague Verbiage in Forecasting

“What does a ‘fair chance’ mean?”

It is a question posed to a diverse group of professionals—financial advisers, political analysts, investors, journalists—during one of Good Judgment Inc’s virtual workshops. The participants have joined the session from North America, the EU, and the Middle East. They are about to get intensive hands-on training to become better forecasters. Good Judgment’s Senior Vice President Marc Koehler, a Superforecaster and former diplomat, leads the workshop. He takes the participants back to 1961. The young President John F. Kennedy asks his Joint Chiefs of Staff whether a CIA plan to topple the Castro government in Cuba would be successful. They tell the president the plan has a “fair chance” of success.

The workshop participants are now asked to enter a value between 0 and 100—what do they think is the probability of success of a “fair chance”?

When they compare their numbers, the results are striking. Their answers range from 15% to 75%, with a median value of 60%.

Figure 1. Meanings behind vague verbiage according to a Good Judgment poll. Source: Good Judgment.

The story of the 1961 Bay of Pigs invasion is recounted in Good Judgment co-founder Philip Tetlock’s Superforecasting: The Art and Science of Prediction (co-authored with Dan Gardner). The advisor who wrote the words “fair chance,” the story goes, later said what he had in mind was only a 25% chance of success. But like many of the participants in the Good Judgment workshop some 60 years later, President Kennedy took the phrase to imply a more positive assessment of success. By using vague verbiage instead of precise probabilities, the analysts failed to communicate their true evaluation to the president. The rest is history: The Bay of Pigs plan he approved ended in failure and loss of life.

Vague verbiage is pernicious in multiple ways.

1. Language is open to interpretations. Numbers are not.

According to research published in the Journal of Experimental Psychology, “maybe” ranges from 22% to 89%, meaning radically different things to different people under different circumstances. Survey research by Good Judgment shows similarly wide implied ranges for other vague terms, with “distinct possibility” ranging from 21% to 84%. Yet “distinct possibility” was the phrase used by White House National Security Adviser Jake Sullivan on the eve of the Russian invasion of Ukraine.

Figure 2. How people interpret probabilistic words. Source: Andrew Mauboussin and Michael J. Mauboussin in Harvard Business Review.

Other researchers have found equally dramatic variation in the probabilities people attach to vague terms. In a survey of 1,700 respondents, Andrew Mauboussin and Michael J. Mauboussin found, for instance, that the probability range most people attribute to an event with a “real possibility” of happening spans about 20% to 80%.

2. Language avoids accountability. Numbers embrace it.

Pundits and media personalities often use such words as “may” and “could” without even attempting to define them because these words give them infinite flexibility to claim credit when something happens (“I told you it could happen”) and to dodge blame when it does not (“I merely said it could happen”).

“I can confidently forecast that the Earth may be attacked by aliens tomorrow,” Tetlock writes. “And if it isn’t? I’m not wrong. Every ‘may’ is accompanied by an asterisk and the words ‘or may not’ are buried in the fine print.”

Those who use numbers, on the other hand, contribute to better decision-making.

“If you give me a precise number,” Koehler explains in the workshop, “I’ll know what you mean, you’ll know what you mean, and then the decision-maker will be able to decide whether or not to proceed with the plan.”

Tetlock agrees. “Vague expectations about indefinite futures are not helpful,” he writes. “Fuzzy thinking can never be proven wrong.”

If we are serious about making informed decisions about the future, we need to stop hiding behind hedge words of dubious value.

3. Language can’t provide feedback to demonstrate a track record. Numbers can.

In some fields, the transition away from vague verbiage is already happening. In sports, coaches use probability to understand the strengths and weaknesses of a particular team or player. In weather forecasting, the standard is to use numbers. We are much better informed by “30% chance of showers” than by “slight chance of showers.” Furthermore, since weather forecasters get ample feedback, they are exceptionally well calibrated: When they say there’s a 30% chance of showers, there will be showers three times out of ten—and no showers the other seven times. They are able to achieve that level of accuracy by using numbers—and we know what they mean by those numbers.
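The calibration check described above can be sketched in a few lines: group forecasts by the probability stated, then compare each group’s stated probability with the observed frequency of the event. The data below are invented for illustration, not actual weather records:

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Map each stated probability (rounded to one decimal) to the observed
    frequency of the event among forecasts that stated that probability."""
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[round(f, 1)].append(o)
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}

# A well-calibrated forecaster who says "30%" should see the event ~3 times in 10.
forecasts = [0.3] * 10 + [0.7] * 10
outcomes  = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0] + [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(calibration_table(forecasts, outcomes))
# {0.3: 0.3, 0.7: 0.7}
```

When stated probabilities and observed frequencies line up, as in this made-up example, the forecaster is perfectly calibrated; systematic gaps between the two columns reveal over- or underconfidence.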

Another well-calibrated group of forecasters are the Superforecasters at Good Judgment Inc, an international team of highly accurate forecasters selected for their track record among hundreds and hundreds of others. When assessing questions about geopolitics or the economy, the Superforecasters use numeric probabilities that they update regularly, much like weather forecasters do. This involves mental discipline, Koehler says. When forecasters are forced to translate terms like “serious possibility” or “fair chance” into numbers, they have to think carefully about how they are thinking, to question their assumptions, and to seek out arguments that can prove them wrong. And their track record is available for all to see. All this leads to better informed and accurate forecasts that decision-makers can rely on.

 

Good Judgment Inc is the successor to the Good Judgment Project, which won a massive US government-sponsored geopolitical forecasting tournament and generated forecasts that were 30% more accurate than those produced by intelligence community analysts with access to classified information. The Superforecasters are still hard at work providing probabilistic forecasts along with detailed commentary and reporting to clients around the world. For more information on how you can access FutureFirst™, Good Judgment’s exclusive forecast monitoring tool, visit https://goodjudgment.com/services/futurefirst/.