Book Review – How to Actually Change Your Mind

If I had to point to one book of the six that compose AI to Zombies as the single most important, it would be How to Actually Change Your Mind. Without the ability this book strives to teach, true rationality is not possible – you’d simply believe whatever you were told first, and never mind that ‘evidence’ stuff.

Leading off with what may well be the piece of writing I link most often, the book delves into the tools, tactics, and thought patterns that will enable you to truly entangle your beliefs with reality. Eliezer takes no prisoners in explaining what humility is really for, why lotteries are a waste of hope, and how false dilemmas sneakily try to get us to argue against (or for) limited option sets, keeping us from looking at how reality really is.

The next section digs into politics and rationality, with classics like Politics is the Mind-Killer and Reversed Stupidity Is Not Intelligence. The ancestral environment shaped us to have certainty all out of line with our evidence in dealing with political matters, and further, to have our emotions tightly tied to these discussions. This section closes with Human Evil and Muddled Thinking, a mighty argument that unclear thought is a key ingredient of human evil.

Following that, Against Rationalization talks about how we fool ourselves, how we turn fiction into facts, and how we take only the tiniest steps forward when shown that we must move. I think the ideas that most intrigued me in this chapter are in Motivated Stopping and Motivated Continuation. I recognized them in some of my past behavior, and having names for them has helped me over the years to notice when they’re happening and stop them.

I could dig into the next few chapters, Seeing with Fresh Eyes, Death Spirals, and Letting Go, but honestly, I’d rather you go and read it yourself – it’s an excellent work that deserves its place in rationalist canon, and I can’t say enough good things about it (I do have other things to do, after all!). Go forth, acquire How to Actually Change Your Mind and become stronger!

Book Review – Map and Territory

Earlier this year I read Rationality: From AI to Zombies (linked at the bottom of the page, and also available online). At 1800 pages, it’s not an adventure for those afraid of thick books, but I’ve read Worm several times. I ain’t afraid of no page count!

While I could reasonably say that I’d read the Sequences, I did so in a scattershot fashion over the course of years, and reread them by hopping around among pages when I was in Colorado and between calls at work. I wanted to be sure I had in fact read everything, to refresh my memory of it, and to be able to say with confidence that I had done so.

I set myself to the task, and spent quite a few hours working my way through that mighty tome, coming out the far side more rational and stronger (I hope). Last week I read Superforecasting, and when I was talking to a friend of mine about it, they asked whether, given the replication crisis, we shouldn’t be reviewing and updating the Sequences to make sure they weren’t based on any research that didn’t replicate.

This struck me as a pretty good idea, enough so that I would have been willing to put my time into that project. Given this, I reached out to Rob Bensinger, Head of Research Communications at MIRI, since he’d done the original editing to make the Sequences into AI to Zombies, and asked if that was something that was going to happen.

He informed me that not only was AI to Zombies being updated, but the first two books had already been released. Big win, and they went straight to the top of my to-read pile.

The first book, Map and Territory, took me four hours to get through. At 354 pages, it was a much less intimidating piece to take on than the whole six-in-one stack, and the quality has definitely gone up a bit – I’d be hard pressed to point to any single change, but I found it flowed more smoothly than the AI to Zombies version. It does an excellent job of explaining the relevant concepts, and of keeping you (me, at least) turning pages – I could see sitting down and reading it end to end without moving.

From Scope Insensitivity to The Simple Truth, Map and Territory has excellent flow and kept me hooked. Eliezer’s writing practically sparkles in this edition, with all of the polish that’s been added. If you haven’t read the Sequences yet, definitely pick up the new editions and remedy this. You’ll be glad you did!

Book Review – Superforecasting

I’ve spoken a few times with The Last Rationalist regarding Superforecasting; they were quite bullish on it, and suggested that it contained basically every lesson worth taking from the Sequences or AI to Zombies. It finally rose to the top of my stack, and I got to see if I agreed with TLR’s opinion.

I don’t know that I’d go quite that far, but it certainly does have a lot of overlap.

Philip Tetlock and Dan Gardner have written a serious page-turner, packed with fascinating insights about the fine art of being right. While they focus on predictions (shockingly, going by the title), much of what they say applies to being right in any domain: collecting information from many sources, not being bound to one ideological viewpoint, weighing differing perspectives, adjusting grossly or finely depending on the data one acquires, and actually updating when new data comes in. These ideas will take you far if your goal is epistemic accuracy.

Superforecasting doesn’t fear shooting sacred cows, either. Tetlock and Gardner point out several pundits, experts, and pontificators who aren’t following these processes toward accuracy, and show how their methods (or lack thereof) fall flat when trying to predict the real world. They dig into the predictions that come up short, too, and they’re not afraid to point out how and why those predictions fail.

A book held up as comparable to the Sequences should of course have a fair amount to say about heuristics and biases, and Superforecasting doesn’t disappoint here, either. The availability heuristic, motivated stopping and continuing, and the contrasting questions of “Does this require me to believe?” and “Does this allow me to believe?” are all covered in enough detail to make clear their relevance to your ability to be correct.

Overall, I think Superforecasting was an excellent work, information-rich and well-written, and I’d recommend it to anyone who’s interested in the fine art of being less wrong.