Superforecaster Profiles: Jean-Pierre Beugoms

GJ: In Think Again, Adam Grant reports that you started forecasting and tracking your forecasts against pundits back in the 1990s. Why did you start doing that?

J-P: As a boy, I remember being told a story about my maternal grandfather. He had been following the course of World War II for the government of Haiti and predicted that the Wehrmacht would quickly defeat the French and British forces in 1940. Those who did not take him seriously were surprised when it actually happened. So, maybe I started forecasting because of my grandfather. As a young man, I had a sense that I might be good at it. In 1991, I correctly predicted that General Norman Schwarzkopf would choose to employ a “left hook” against the Iraqi forces while the media were expecting an amphibious assault. I also correctly predicted the success of the NATO bombing campaign over Yugoslavia while pundits were saying that it was impossible to win a war from 10,000 feet.

What compelled me to start systematically tracking my forecasts against those of the pundits was the 1994 midterm election. Following the Republican takeover of Congress, nearly every pundit wrote off President Bill Clinton’s re-election chances. I wondered how they could be so confident. Then I found Allan Lichtman’s book Keys to the White House, which offered a “do-it-yourself system” for predicting presidential elections without polls. Informed by Lichtman’s system, I concluded that Clinton was in a much stronger position than the pundits realized.

Until that time, I had absolutely no interest in forecasting elections. Afterwards, I was hooked. I also started keeping records of pundit forecasts as a way of showing that I was better than they were. Forecasting became a hobby.

GJ: You joined the Good Judgment Project back in 2011, which was its first year. How did you learn about GJP, and why did you decide to participate?

J-P: I learned about GJP from Nate Silver’s blog. I decided to participate because I thought it would be an excellent opportunity to prove to myself that forecasting geopolitics was indeed something that I was good at. Unfortunately, the tournament was already in progress when I signed up, so I was given the average scores on the questions that had closed before I joined. That meant that I would start out somewhere in the middle of the pack among my group of about 270 forecasters. Starting from behind stirred my competitive nature. Over time, I managed to claw my way onto the leaderboard and finish in second place.

GJ: At the end of that first year, you were part of the first-ever group of GJP Superforecasters and were assigned to a team of 12 Supers, including yourself. How did collaborating with other top forecasters change your forecasting experience? Do you think collaboration with your fellow Superforecasters still makes you a better forecaster?

J-P: I was thrilled to be part of a team of superforecasters and to be given the chance to compete against the best. I feel that way even today. I should point out that I am not alone in this regard. Five of the top seven forecasters in my group during the first year of the IARPA tournament were still active Good Judgment forecasters as recently as last year. I think that is a remarkable fact.

Collaborating with my fellow superforecasters was an amazing experience. I learned a lot from them, especially my teammates Sandy Sillman and Tim Minto, who were ranked as the #1 forecasters for years 1 and 3 of the IARPA tournament, respectively. I have no doubt that I have improved considerably as a forecaster since that first year. That would not have been possible had I been forecasting solo.

Today, I cannot see how I can improve or even maintain my current level of performance without the constructive criticism of my fellow Supers. Having access to the rationales of skilled forecasters has saved my Brier score on many occasions.
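[For readers new to the metric: the Brier score measures forecast accuracy as the mean squared difference between forecast probabilities and outcomes. Across N binary-outcome forecasts, BS = (1/N) Σ (f − o)², where f is the probability assigned to the event and o is 1 if the event occurred and 0 otherwise; lower scores are better.]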

GJ: When the commercial enterprise Good Judgment Inc began in the fall of 2015, you were invited to become a professional Superforecaster. One of the biggest differences between GJP and the commercial enterprise is the wider scope of forecasting questions addressed. (For example, the IARPA tournament was limited to geopolitical and economic questions that did not address US policies.) What topics have most interested you in the expanded scope?

J-P: I was very happy to see questions about US elections, military operations, and domestic policies. The absence of such questions in the IARPA tournament was frustrating, especially since those were the topics I had always enjoyed the most.

GJ: Adam Grant discusses your outstanding track record in election forecasting, but of course, even the best forecasters can’t foresee every outcome. Looking back at forecasting the 2019 UK general election, which was tough to call (at least early on), what did you learn from that experience?

J-P: I did not perform well on the 2019 UK general election question. There was a brief time in late August 2019, though, when I thought the chances of a Conservative Party majority in the House of Commons were decent and that Labour would pay a higher-than-expected price for their position on Brexit. That was an outlier position, however. Unfortunately, I was not confident enough in my knowledge of British electoral politics to stick with that forecast for very long.

After the election, I kicked myself for missing a chance to separate myself from the crowd. On further reflection, however, I realized that I had only a superficial understanding of the election and that I relied too much on the polls to make up for it. I knew very little about, say, public opinion on Brexit in the marginal seats.

GJ: Think Again was already in press by the time the 2020 US election results were known. How did you do in forecasting the 2020 elections?

J-P: I assigned my highest probability to the Democrats’ sweeping the Presidency, the Senate, and the House, which, as we now know, was the actual outcome. I also correctly predicted voter turnout, presidential campaign spending, the timing of Trump’s “concession,” and when Trump would cease being president.

GJ: A central theme of Adam’s book is the importance of being willing to imagine how one could be wrong and to revise one’s opinions as a result. Just before Election Day in November, Good Judgment’s professional Superforecasters conducted a “pre-mortem,” imagining that our election forecasts were wrong and then considering why that might be so. What changes, if any, did you make to your election forecasts as a result of that process? Why? With the benefit of hindsight, is there anything you wish you had done differently?

J-P: I think writing such a pre-mortem is a great idea, and I have done so in the past. I did not make any changes to my election forecasts, however. The only way I would have been wrong about the high likelihood of a Biden victory was if votes had been suppressed in several battleground states or the results overturned. Biden was going to win because the election was going to be a referendum on Trump, especially his handling of the pandemic.

Even in hindsight, I would not have changed my presidential election forecast. While it is true that Trump could have won the Electoral College if 66,000 more Trump voters in the “right states” had gone to the polls, by every other measure, the election was not that close.

I would have been more cautious about forecasting Democratic Party control of the US House of Representatives, however. As it stands today, Republican gains in 2020 mean that the party needs a net pickup of only five seats to control the House after the next midterm election. It should have been clear to me that Republican turnout was going to be much higher in 2020 than it was in 2018 and that the effect would be some reversion to the mean.

GJ: The final round of 2020 election forecasts concerned the January 2021 Georgia Senate races. How did you approach those forecasts? How did that work out for you?

J-P: I correctly predicted that both Democratic candidates would win. What made the outcome especially gratifying, however, was that the pundits and prediction markets had mostly dismissed the possibility of a double win for the Democrats.

My forecast for the January 2021 Georgia Senate races was based on what happened in November 2020. One of the reasons I predicted the Democrats would retake the Senate was that I had assumed Raphael Warnock would win his run-off election and that Jon Ossoff would either win outright on Election Day or win in a run-off. Actual Election Day results confirmed what I expected: the Democrats had a formidable turnout operation that would bring African Americans to the polls in record numbers, and Joe Biden would win the state. Warnock’s numbers indicated that he was a slight favorite, while Ossoff’s chances were about 50%. The main reason to doubt their chances was the possibility that turnout for the run-offs would fit the pattern of off-cycle elections.

There were strong indications in the polling and early voting data that the January run-off elections would not be a typical Georgia special election but an extension of Election Day 2020. The assumption that Democrats would stay home while Republicans would come out in force was tenuous: it ignored the fact that the Democrats had turned the state purple in November and the high likelihood that Donald Trump’s absence from the ballot would depress Republican turnout. The final, decisive factor for me was the issue of the $2,000 relief checks, which gave infrequent voters a substantive reason to vote Democratic.

GJ: In addition to being a professional Superforecaster, you’re also a military historian. We understand that you’re turning your dissertation into a book. Can you tell us a bit about the subject and your approach?

J-P: My dissertation is a collective biography of the US Army Quartermaster Corps during the Early American Republic. In 1812, the US went to war with Great Britain with a dysfunctional system of supply and logistics, which had tragic consequences for both the army and the nation. I examine how wartime quartermasters somehow made the system work in spite of the difficulties and how they and likeminded civilian leaders drew lessons from their experience to reform the system after the war. The result of their postwar reform drive was a more efficient system of logistics and the emergence of a corps of professional logisticians.

GJ: What advice do you have for people who want to improve their forecasting skills – or more broadly, their decision-making skills?

J-P: I would advise people to question assumptions that are unsupported or weakly supported by the evidence. That is the best way to spot potential opportunities to set yourself apart from the crowd. For this gambit to work, however, you also need to become adept at evaluating evidence. Reading books about critical thinking should help in that regard.

I would also advise people not to trust their gut, at least when it comes to forecasting. Thinking with your gut is what pundits do, and that is why they are so often wrong. I have noticed that my performance tends to decline when I rely on snap judgments or other shortcuts. A forecast should only be the result of System 2 thinking, to use Daniel Kahneman’s framework.

GJ: Thank you, J-P. We really appreciate your taking the time to give such thoughtful responses.



