A Quick Look Back at the Crowd Looking Forward


Opinion

by Ryan Adler

Ryan Adler is a Superforecaster and Senior Consultant for Good Judgment who specializes in legal analysis for Good Judgment’s question writing team. He also administers the SCOTUS Challenge on Good Judgment Open, a collaboration between SCOTUSblog and Good Judgment that facilitates probabilistic thinking about court decisions. Ryan can be reached at adler@goodjudgment.com.

Before we kicked off our inaugural SCOTUS Challenge last November, I didn't know exactly what to expect. Being a bit late off the blocks, Good Judgment staff had some catching up to do to get a solid array of case questions served up. Thankfully, with a bountiful docket and a Court not out to break any land speed records for its term's cases and controversies, our thirteen questions ran the gamut from technical matters of law to the front page of every newspaper across the fruited plain. But what did we learn about the crowd's acumen for predicting the decisions of the nine most famous judges in the land (at least those without television contracts)?

Eight, four, and one would be respectable in many sports leagues, though the crowd's tie in Dalmazzi makes a baseball analogy a bit of a stretch (though not unheard of). Since Supreme Court challenges are probably not in ESPN's development pipeline, even for The Ocho, we should look at the results on their own merits. While it would be easy to celebrate the eight victories and gloss over the four misses, I would like to focus on why the crowd came up short where it did.

A former supervisor of mine often lamented that it is nearly impossible to get a straight answer out of a lawyer. It's true that what most people see as yes-or-no questions rarely are in the eyes of attorneys. Even so, framing our SCOTUS questions as yes-or-no problems is essential, and doing so with the requisite rigor means accounting for the many outlier outcomes possible on even the most straightforward questions. Such outliers sprang up in the crowd's weakest performances.

For starters, challenge forecasters simply missed on both Mansky and Wayfair, two cases that this Court watcher saw as relatively clear. In the former, the threat of arbitrary state action to quell speech could be seen through any First Amendment lens. As for the latter, the fact that the Court took up a concededly contrived challenge to its own physical-presence requirement for sales tax assessments ought to have been the writing on the wall. But as I've touched on before for this blog, the other three questions each had atypical outcomes.

Our tie in Dalmazzi (i.e., forecasting a 50% chance on a 50/50 question) wasn't exactly a tie, because the Court wound up cancelling the game. Here, with the case dismissed as improvidently granted (basically just saying, "Oh, your case? Never mind!"), some of the merits claims were fielded in a companion case, while the rest were left to continue as thought experiments in legal circles.

Next, we had Microsoft, which was dismissed by the Court not because it changed its mind, but because Congress changed the law. Some might say that SCOTUS may have been bummed out to see the legislative branch actually legislate rather than leaving things to the judiciary, but that's a proposition I'll leave for Cato and The Federalist Society to make. With the law in question, the Stored Communications Act, amended by the adoption of the CLOUD Act less than a month after oral arguments were held, the incredible tension between a legal system premised on territorial jurisdiction and the realities of the information age will have to come to a head another time. But should forecasters have seen this coming? Perhaps this miss by the crowd simply illustrates that the general rule that Supreme Court cases are rarely vulnerable to actionable outside developments still has its exceptions.

For instance, forecasts about election outcomes are always susceptible to developments in the news.  Endorsements, fundraising, attacks, counterattacks, polling, and legions of skeletons in the closet can move the needle not just on a weekly basis, but an hourly basis.  New information is created and disseminated at an incredible pace, a pace which naturally motivates forecasters to pay close attention and to frequently update forecasts upon shifts in the political landscape.  That’s simply not the state of affairs with an appellate case.  Beyond oral arguments, there’s almost never a datum of direct information with which to divine the direction of the Court.  So, methinks that most of the nearly 300 forecasters on this question probably didn’t even think to check and see if Congress was preparing to yank the rug out from under the parties in this case, particularly since the CLOUD Act was bundled into a budget behemoth that found its way to President Trump’s desk.  So take this lesson when forecasting a court case: never forget that the foundations upon which legal questions are built are often only a vote and a signature away from collapse.

And last, but not least, the crowd really missed the mark on Gill v. Whitford. This was especially disappointing since this question, by far, drew more forecasts and forecasters than any other in the challenge. And since it was one of the first cases to be argued and one of the last to be decided, forecasters were not under any time crunch to get this one right. But standing? What seemed to be an inevitable political earthquake with implications difficult to sufficiently quantify was left for another day because all nine justices decided that those claiming injury hadn't shown that they could bring the case in the first place. What lesson is to be drawn here? A simple and (what I would call) refreshing reminder that the political weight being carried by a lawsuit doesn't entitle it to a forum simply because that weight is so great. If this seems a bit idealistic, that's because it is a bit idealistic, and this writer is under no delusions as to the realpolitik from which no branch of government is truly immune. That stated, forecasters who apprehend the realities of a case, including details that might seem mundane to the average guy or gal on the street, are the forecasters whose predictions you'd want to read closely. Whether or not the devil is behind political gerrymandering is another question, but for forecasters, that the devil is always in the details is gospel.

To close, I would like to thank the 1,588 forecasters who participated in the SCOTUS Challenge and to recognize two forecasters who rose to the top of the ranks in last term's challenge. Linh Tran, from Atlanta, posted the lowest (which, of course, means best) Accuracy score at -2.659, forecasting on 12 of the 13 questions. Then there is Richard Green, an incoming 1L at UCLA from San Mateo, CA, who pulled off a Brier score of 0.111 while forecasting on all 13 questions in the challenge. Check out a detailed explanation of our scoring methods here. For some context, the crowd posted a collective Accuracy score of 0.236 and a just-the-right-side-of-even Brier score of 0.446.
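For readers who want a feel for the arithmetic behind those numbers, here is a minimal sketch, in Python, of the standard Brier score for a single binary (yes/no) forecast. This is only an illustration under my own assumptions: Good Judgment's official methodology (explained at the link above), including how forecasts are averaged over a question's lifetime and how the relative Accuracy score is derived, is not reproduced here, and the function name and example probabilities are hypothetical.

```python
def brier_score(prob_yes: float, outcome_yes: bool) -> float:
    """Brier score for a single binary (yes/no) question.

    Sums the squared error over both answer options, so scores run
    from 0.0 (perfect) to 2.0 (maximally wrong); lower is better.
    A 50/50 forecast always scores 0.5, which is why a crowd score
    of 0.446 sits "just the right side of even."
    """
    y = 1.0 if outcome_yes else 0.0
    return (prob_yes - y) ** 2 + ((1.0 - prob_yes) - (1.0 - y)) ** 2


if __name__ == "__main__":
    # Hypothetical forecasts on a question that resolved "yes":
    print(brier_score(0.90, True))  # confident and right -> 0.02
    print(brier_score(0.50, True))  # the 50/50 "tie"     -> 0.50
    print(brier_score(0.10, True))  # confident and wrong -> 1.62
```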

While our plans for the next term are still fluid, the US judiciary will not fall off Good Judgment’s radar come October 1st.  Check back with us at gjopen.com for a slew of opportunities to test your predictive mettle!

