I highly, highly recommend The Scout Mindset: Why Some People See Things Clearly and Others Don’t.
I’m going to admit, right off the bat, that this review is going to benefit from my having had to read Don’t Be a Feminist, which is just so awful that really anything is going to look like a masterpiece in comparison. And yet, even apart from any comparison, this is really a book worth reading.
In fact, I’m going to go as far as to say that if you’re a person who has any kind of influence in the thought-leader sense, it is probably required reading. If you’re not already on board with this book’s premise, then honestly, to go forward in any ethical way, you either need to read this and incorporate its main points into your worldview or just stop thought leadering.
If every thought leader, commentator, Twitter user, etc. actually adopted a Scout Mindset, the world would be a remarkably better place.
On to the book.
Mediocre content either doesn’t ask interesting questions or asks them and then answers them stupidly. Great content asks great questions and then answers them, maybe not completely, because great questions really can’t be answered completely, but in such a way that they easily spin off other, related questions in your mind.
Everything Everywhere All at Once is a good example of good content. It asks, without answering, a lot of really interesting questions. It’s to Scout Mindset author Julia Galef’s credit that she brings up so many interesting ideas without really delving into them, because they’re outside the main focus of the book.
The book posits that there are two main mindsets when approaching a question. The first, default mindset is the soldier mindset. It wants to win the argument, shore up the existing belief, buttress the ego. The scout mindset, on the other hand, wants to discover the truth, no matter how inconvenient, dangerous, or humbling it may be. The soldier asks, can I believe it? Must I believe it? The scout asks, is it true?
The other bit of bias I’m going to admit to here is that I’m honestly pretty jealous of Julia Galef. She’s got a pretty plum gig, interviewing really smart, interesting people for her podcast. My main criticism of her podcast is part of what I think makes me feel jealous: she just kind of elides her guests’ misogyny, racism, colonialism, etc.
In a way, this might be the only way one can have a lot of really interesting conversations. No one is subscribing to her podcast for a cogent explanation of latent, structural white supremacy. If she were so inclined to challenge her guests on those axes, they might, possibly, walk away better-informed people. But what’s more likely is that people would unsubscribe and many potential guests would turn her down. The people who would subscribe would not only want more conversations around structural oppression; they’d also more likely be the kind of people who have trouble having conversations with people who don’t already see these topics the same way. So then her podcast would just become groupthink again, but coded left.
I feel like I can’t push this criticism too far, because I can’t very well judge her for not pushing back on her guests if I’m not willing to do it myself. And I also don’t want to choose between excluding guests I disagree with on bigotry and calling them out for every instance of wrongthink.
This is hardly the only instance of my own left-coded groupthink and exclusionary tendencies. I wrote about how I didn’t want to go to a gathering with a slavery apologist in attendance. I’d also avoided content from the Lincoln Network due to its Thiel affiliation. But lately I’ve been trying to overcome this tendency to refuse to engage with people I disagree with, and as a result I’ve been loving the Realignment Podcast, though it does make me mad sometimes.
I noticed this knee-jerk reaction when Galef brought up Chesterton’s Fence, the principle that you shouldn’t tear down a fence until you understand why it was put up. I don’t remember ever hearing this concept invoked by, or about, anyone but conservatives. It’s obviously right-coded to me. Which was like, yuck. And, honestly, it’s a truly unnecessary reference: she had to explain what the concept means, and then she still explained what she was doing anyway.
But, back to the book itself.
In sum, soldier mindset helps us feel better in the moment.
Scout mindset, by contrast, helps us do better over the long term. When we prioritize being more right over feeling more right, our relationships are better, our habits are better, and we make better decisions.
I liked this book right away, Chesterton’s Fence aside, because honestly realizing this was a major turning point in my life. I didn’t have the soldier/scout language, of course. But I do remember that at some point I realized my conversations, and life, would improve if I could put less effort into winning arguments and more into learning. The rewards of convenient answers are far shorter-lived than those of answers that line up with empirical reality. And admitting I’m wrong costs far less over the long term than remaining wrong.
One advantage of the scout mindset she didn’t mention, I believe, is that it facilitates a far more interesting existence.
(I recently read 4 Rules for Identifying Your Life’s Work wherein Brooks recommends choosing work you find interesting because “Hedonia without eudaimonia devolves into empty pleasure; eudaimonia without hedonia can become dry. In the quest for the professional marshmallow, I think we should seek work that is a balance of enjoyable and meaningful. At the nexus of enjoyable and meaningful is interesting. Interest is considered by many neuroscientists to be a positive primary emotion, processed in the limbic system of the brain. Something that truly interests you is intensely pleasurable; it also must have meaning in order to hold your interest. Thus, ‘Is this work deeply interesting to me?’ is a helpful litmus test.”)
The truth is stranger than fiction, they say. It’s also much more fascinating. As I’ve written before, I can’t say existence is good or bad, pleasant or unpleasant, on the whole. What I can say for existence is that it seems a lot more interesting than non-existence. So, in a way, scout mindset makes life more… lifelike.
I definitely wonder who Galef wrote Chapter 8, Motivation Without Self-Deception, for. What percentage of people take big risks while deceiving themselves about their odds of success? Now, by comparison, what percentage of people delay or avoid making any decision at all while they weigh the risks versus rewards well past the point at which any decision would have been better than no decision? Those motivational posters saying “Just go for it” exist not because most people are “just going for it” but precisely because the vast, vast majority of us are dithering, risk-averse cowards who need a ton of prodding to make any decision that feels at all consequential. And I feel confident that the Less Wrong nerds who listen to her podcast and are likely reading this book are no exception to this rule.
What the chapter ignores about decision-making and risk-taking is, imho, its most underrated aspect: TIME. The point at which the cost of further delay exceeds any potential benefit that further research or deliberation might offer comes WAYYYY sooner than the vast, vast majority of people realize the vast, vast majority of the time. And what she’s essentially saying is that it’s worthwhile to do your research into the likelihood of success before taking a risk. SURE. FINE. But that’s not the advice most people need most of the time. They’re already doing their research. They’re already weighing the odds. The advice on decision-making most people need most of the time is this: you’re going to have a much better time if you make more mediocre decisions faster. You’re going to learn more, have a more interesting time, accomplish more, get more accolades, etc. if you make 12 okay decisions in one 8-hour workday than if you make one extremely well-researched and well-considered decision in the same timeframe. Now, there’s a very small percentage of the population who need the opposite advice. And maybe Galef wrote her chapter for them. I don’t know. But that would be an interesting choice.
I have mixed emotions about Chapter 10: How to Be Wrong. First, I think the ability to update your beliefs in light of new evidence is honestly on par with Godliness. Like, I have so much respect for anyone who can do this regularly. It’s really, really hard and super rare. It’s actually harder for smart people to correct ourselves when we’re wrong because we’re so good at coming up with convincing arguments for why we’re right, even when we’re not. Learning how to admit I was wrong without falling into believing I’m stupid was a huge benefit to me. Not that I always do it right away, but at least I know how.
Galef suggests basically removing all stigma from being wrong, going as far as to suggest the Rationalist habit of referring to having been mistaken as “updating.” As in, “I’m not admitting I’m wrong. I’m updating my prior beliefs with new, more factually correct ones.”
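For anyone outside the rat-adjacent world: “updating” is a nod to Bayes’ rule, where your prior belief and new evidence combine into a posterior belief. Here’s a toy sketch of how that arithmetic works, with made-up numbers of my own (nothing from the book):

```python
# Toy Bayesian update (my illustration; the numbers are invented).
# Prior: I'm 70% confident my claim is true. Then I run into evidence
# that's three times more likely in worlds where I'm wrong.
prior = 0.70
p_evidence_if_right = 0.10  # assumed chance of seeing this evidence if I'm right
p_evidence_if_wrong = 0.30  # assumed chance of seeing it if I'm wrong

# Bayes' rule: P(right | evidence) proportional to P(evidence | right) * P(right)
posterior = (prior * p_evidence_if_right) / (
    prior * p_evidence_if_right + (1 - prior) * p_evidence_if_wrong
)
print(f"Updated confidence: {posterior:.0%}")  # ~44% -- update, don't dig in
```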
Now, my babies, farrr be it from me to fault anyone for an overcorrection. Overcorrection is my middle name. But this does remind me of one thing that annoys me about the pre/post/alt rat community (particularly the very online subsection of it). It is this. On the one hand, what I like about the rats is they’re willing to have, as I mentioned above, interesting conversations. OTOH, what’s often required to have these kinds of conversations is an emotional remove from what’s being discussed. And what better way to find yourself at an emotional remove than to discuss topics where the stakes for you, personally, are quite low.
Like, I suspect humanity benefits from smart people dispassionately weighing the ROI of malaria-preventing bed nets versus free PrEP for trans sex workers of color. As long as donor dollars remain finite, let’s spend them well.
But malaria victims and trans sex workers of color are going to have a much harder time staying dispassionate in such a discussion than someone for whom neither malaria nor HIV are particularly likely outcomes.
Rats often seem to believe they’re able to be dispassionate about these topics because they’re smarter, or better thinkers, or trying harder than the people who are in a rage. It seems much more likely that, most of the time, they’re so much less emotionally invested because there’s just so much less on the line for them in any of these discussions, because they’re so damn privileged. Which is fine. Be privileged. But don’t piss on my leg and tell me it’s raining. If you’re so damn smart, why aren’t you more self-aware?
Anyhoods, where I’m going with this is that if nothing you say has any consequence then sure, there’s nothing to regret about an error. You can just update and go on with your day. But if you have any influence, then being wrong presents the opportunity, at least, to cause someone harm. As I’ve written, no one has time to fact-check every piece of information they come across. Galef tells the story of Jerry Taylor, who misled many influential people for decades about climate change. I think Taylor regrets his error and that is as it should be.
I don’t want people to be embarrassed about being wrong, especially if it means they stay wrong longer. I want people to recognize that stating falsehoods can hurt people. I hope this sense of responsibility motivates them to double-check their facts more often, to publicly admit more quickly when they’re wrong, etc.
Anyhoods, my babies. It’s a good book. I liked it. Read it. Live it.
~~~~~
The epistemic consequences of “skin in the game” are hard to consider fairly and effectively, for sure. AFAICT typically when someone has a personal stake in a discussion outcome, the life situation that gives them that stake imparts *both* insight and bias. And the two are very difficult to disentangle. And our soldier mindset leads us to see only the insight when we agree with the person, and only the bias when we disagree.
Thus we get “standpoint epistemology” that can correctly advise us to listen for the insight of the most-affected people, but can also discount their bias and irrationally make their group membership a trump card. And on the other hand we get arguments of the form “you’re just saying that because you’re an X”. And more complicated soldier rationalizations like “you say my belief Y is biased against members of group X, but that can’t be right, because here’s this member of group X I found on YouTube/Substack/etc espousing Y”— and rationalizing counterarguments like “well that person isn’t *really* a valid member of group X” or “they must be self-hating/internalizing anti-X bias”— and on and on.
I don’t have a good solution, but it’s a problem I wish Galef had spent more time considering.