
GJ: You joined the Good Judgment Project in its final year, 2014-2015. How did you learn about GJP, and why did you decide to participate?

KM: I heard about GJP on an NPR segment in spring of 2014. I can’t recall if my interest was piqued first by hearing the interview with Elaine Rich or by someone telling me about it. Either way, my husband suggested I would be good at forecasting, I was curious, and so I signed up.

GJ: As someone with a PhD in mechanical engineering from MIT, geopolitical forecasting (the main subject of the Good Judgment Project) doesn’t seem an obvious fit. How has your background in science and engineering most helped you as a forecaster?

KM: My first impression when I saw the actual questions in GJP was to think, “What have I gotten myself into!” My background is not relevant at all to the types of questions that we are asked. On the other hand, I’m pretty good at research and digging into unfamiliar material. As an engineer I had a lot of experience picking up projects that someone else had started and that I knew nothing about. There’s a very uncomfortable lost feeling when you start out from zero, but I had learned that feeling wouldn’t last long. Actually, I enjoy the process of coming up to speed on a new topic. Also, I try to pay attention to data and modify my understanding of a system or situation based on the information available. I think that comes from my engineering background and I’ve found it helps in forecasting too.

GJ: You were part of an unusual experimental condition in that final year of GJP. Participants in your group made forecasts like everyone else, but then were presented with rationales that other forecasters had given for their forecasts on the same questions. You were asked to rate the quality of those rationales and then given an opportunity to update your own forecast. It sounds almost like a recipe for “thinking again,” which is the subject of Adam Grant’s new book. How did that experience influence the way you approach forecasting now?

KM: I’m not sure how the experimental condition I was in for GJP informs my current forecasting. One possibility is that from the start I wasn’t sure if others were reading and rating my forecasts. I got in the habit of trying to outline my thinking and present evidence in a way that someone else might be able to follow. Going through that process clarifies my thinking and sometimes leads me to different conclusions than I had when I started.

GJ: You have consistently been one of the most accurate professional Superforecasters; however, Adam Grant discusses one of your relatively unusual “misses” – forecasting the 2016 Presidential election. Think Again was already in press by the time the 2020 US election results were known. How did you do in forecasting the 2020 elections? What lessons did you learn from forecasting the 2016 elections that helped you in 2020?

KM: On 2020 election questions, my score was near the Superforecaster median partly because I withdrew from forecasting most US election questions for much of the year and took the median for those days as a result. Instead, I spent my time working for the campaigns of candidates I supported. I worked on the 2016 election too but threw myself in completely in 2020.

Maybe it isn’t a good idea to forecast on something you care about passionately, especially if you’re actively working to influence the outcome. I believed that before 2016. It seems to me that forecasting well requires a level of objectivity that conflicts with the kind of commitment needed to pursue something whole-heartedly.

If an outcome matters to you deeply and you have a way to affect that outcome, then committing to action might be a better choice than forecasting what will happen. Probably it’s a good idea to rely on someone else’s forecast then, if you need one.

GJ: A central theme of Adam’s book is the importance of being willing to imagine how one could be wrong and to revise one’s opinions as a result. Just before Election Day in November, Good Judgment’s professional Superforecasters conducted something called a “pre-mortem” to imagine that our election forecasts were wrong and then think why that might be so. What if any changes did you make to your election forecasts as a result of that process? Why? With the benefit of hindsight, is there anything you wish you had done differently?

KM: I didn’t make large changes in my forecast as a result of the pre-mortem exercise around the election. In retrospect, I should have placed more credence on the possibility that there would be violence after the election, which I think was mentioned during the pre-mortem exercise.

[Photo: Kjirste’s helper, Sebastian]

GJ: Adam Grant talks about “the joy of being wrong.” Do you actually enjoy being wrong? In what respect? How does it make you a better forecaster?

KM: I don’t enjoy being publicly wrong, especially finding out after the fact that my reasoning was flawed. I enjoy discovering a different, maybe better, way to think about something. Like turning a jigsaw puzzle piece around and suddenly realizing exactly how it fits, there’s an element of surprise and the pleasure of figuring something out.

In forecasting, I find it enjoyable when puzzling over a question leads me to a different conclusion than the one I started with. My initial idea may have been wrong, but deciding that myself doesn’t feel the same as having a mistake pointed out by someone else. Thinking carefully takes effort and the conclusions aren’t always different or interesting, but the potential for that frisson of surprise with a new insight motivates me to make the effort. I’ve usually done well on forecasts when that happens.

GJ: For the past year, Good Judgment has been providing forecasts on a number of COVID-related questions, including forecasting cases and fatalities as well as when vaccines would become available and how rapidly they would be distributed to the US and other countries. The professional Superforecaster “crowd” includes a handful of people with specific expertise in public health and related areas, but most Superforecasters are not subject-matter experts in these fields. How have you and other pro Supers approached these questions? Why do you think that the Supers’ collective forecasts have often been more accurate than the output of epidemiological models?

KM: I think the collective forecasts on COVID have functioned similarly to an ensemble model: there are a variety of approaches, and each works better at different times. My approach has been to look at the data frequently, look at forecasts produced by other groups, and also consider observations and expectations of people’s behavior. I rely most on daily case data and the test positivity rate. One of the challenges has been the number of directional changes, such as case counts switching from rising to falling, and recognizing when those shifts occur.
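To make the ensemble analogy concrete, here is a minimal, purely illustrative Python sketch: it aggregates a handful of hypothetical probability forecasts with the median and compares Brier scores. The numbers, the median rule, and the scoring choice are assumptions for illustration only, not Good Judgment’s actual aggregation method or data.

```python
# Purely illustrative sketch (assumed numbers, assumed median aggregation):
# a toy "ensemble" of probability forecasts for a single binary event,
# scored with the Brier score (lower is better).

import statistics


def brier_score(prob: float, outcome: int) -> float:
    """Brier score for a binary event: squared error of the probability."""
    return (prob - outcome) ** 2


# Hypothetical individual forecasts of the probability the event happens.
forecasts = [0.35, 0.55, 0.60, 0.70, 0.80]
outcome = 1  # suppose the event did happen

ensemble = statistics.median(forecasts)  # median aggregation, as an example

individual_scores = [brier_score(p, outcome) for p in forecasts]
print(f"average individual Brier: {statistics.mean(individual_scores):.3f}")
print(f"ensemble (median={ensemble:.2f}) Brier: {brier_score(ensemble, outcome):.3f}")
```

In this toy run the median forecast scores better than the average individual forecast, which is the usual intuition for treating a crowd of forecasters as an ensemble; whether that holds on any real question depends on the forecasts themselves.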

GJ: What advice do you have for people who want to improve their forecasting skills – or more broadly, their decision-making skills?

KM: My advice for making better forecasts is to slow down and think carefully. Reason through your thinking step by step as if you were explaining it to someone else. If you’re stumped, sometimes it helps to cast a really wide net and imagine all the possible things that could affect the forecast, even those that sound ridiculous.

GJ: Thank you, Kjirste. We’ve enjoyed the conversation, and we really appreciate your taking the time to give such thoughtful responses.
