I’ve spoken a few times with The Last Rationalist regarding Superforecasting; they were quite bullish on it, and suggested that it contained basically every lesson worth taking from the Sequences or AI to Zombies. It finally rose to the top of my stack, and I got to see if I agreed with TLR’s opinion.
I don’t know that I’d go quite that far, but it certainly does have a lot of overlap.
Philip Tetlock and Dan Gardner have written a serious page-turner, packed with fascinating insights about the fine art of being right. While they focus on predictions (shockingly, going by the title), much of what they say applies to being right in any domain: collecting information from many sources, not being bound to one ideological viewpoint, weighing differing perspectives, adjusting grossly or finely depending on the data one acquires, and actually updating when new data comes in. These habits will take you far if your goal is epistemic accuracy.
Superforecasting isn’t afraid to slaughter sacred cows, either. Tetlock and Gardner call out pundits, experts, and pontificators who don’t follow these processes, and show how their methods (or lack thereof) fall short when tested against the real world. They dig into failed predictions, too, laying out exactly how and why they went wrong.
A book held up as comparable to the Sequences should, of course, have a fair amount to say about heuristics and biases, and Superforecasting doesn’t disappoint here, either. The availability heuristic, motivated stopping and continuing, and the contrasting questions of “Does this require me to believe?” versus “Does this allow me to believe?” are covered in enough detail to make their relevance to your ability to be correct clear.
Overall, I think Superforecasting is an excellent work, information-rich and well-written, and I’d recommend it to anyone interested in the fine art of being less wrong.