“Evidence on Good Forecasting Practices from the Good Judgment Project”, 2019-02-15:
Figure 0: The “four main determinants of forecasting accuracy.” This graph can be found here, in the GJP’s list of academic literature on the topic. It illustrates approximate relative effect sizes and is discussed further in §2.
Experience and data from the Good Judgment Project (GJP) provide important evidence about how to make accurate predictions. For a concise summary of the evidence and what we can learn from it, see this page. For a review of Superforecasting, the popular book written on the subject, see this blog post.
This post explores the evidence in more detail, drawing from the book, the academic literature, the older Expert Political Judgment book, and an interview with a superforecaster.
…Tetlock describes how superforecasters go about making their predictions.56 Here is an attempt at a summary:
1. Sometimes a question can be answered more rigorously if it is first “Fermi-ized”, i.e. broken down into sub-questions to which more rigorous methods can be applied.
2. Next, take the outside view on the sub-questions (and/or on the main question, if possible). You may then adjust your estimates using other considerations (‘the inside view’), but do so cautiously.
3. Seek out other perspectives, both on the sub-questions and on how to Fermi-ize the main question. You can also generate other perspectives yourself.
4. Repeat steps 1–3 until you hit diminishing returns.
5. Your final prediction should be based on an aggregation of various models, reference classes, other experts, etc.
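The quantitative core of these steps — multiplying sub-question estimates and then aggregating several perspectives on the main question — can be sketched in code. The sketch below is illustrative, not Tetlock’s exact procedure: the numbers are made up, the sub-questions are assumed independent, and averaging log-odds with an optional “extremizing” factor is just one aggregation scheme discussed in the GJP literature.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def aggregate(probs, extremize=1.0):
    """Combine several forecasters' probabilities by averaging their
    log-odds, then optionally pushing the result away from 0.5
    ('extremizing', with extremize > 1)."""
    mean_logodds = sum(logit(p) for p in probs) / len(probs)
    return sigmoid(extremize * mean_logodds)

# Step 1-2: Fermi-ize, then apply outside-view base rates to each
# sub-question. Hypothetical numbers; assumes the sub-questions are
# independent, so P(main) is their product.
sub_estimates = [0.8, 0.5, 0.9]
fermi_estimate = math.prod(sub_estimates)  # approximately 0.36

# Steps 3-5: gather other perspectives on the main question and
# aggregate them into a final prediction.
perspectives = [fermi_estimate, 0.30, 0.45]
final = aggregate(perspectives, extremize=1.5)
```

Multiplying sub-estimates only works when the sub-questions are genuinely independent and all must hold for the main event; in practice a forecaster would also sanity-check the product against a base rate for the main question itself.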