Thinking fast, slow, and not at all: System 3 jumps the shark

Posted by Andrew on 23 May 2021, 5:16 pm

By now, we’re all familiar with the three modes of thought. From Wikipedia:

System 1 is fast, instinctive and emotional.

System 2 is slower, more deliberative, and more logical.

System 3 is when you say things that sound good but make no sense.

System 3 can get activated when you trust what someone tells you rather than figuring it out yourself.

I thought about this after someone pointed me to this post by Rachael Meager, who flagged this erroneous claim in the new book, Noise, by Daniel Kahneman, Olivier Sibony, and Cass Sunstein.

We must, however, remember that while correlation does not imply causation, causation does imply correlation. Where there is a causal link, we should find a correlation. If you find no correlation between age and shoe size among adults, then you can safely conclude that after the end of adolescence, age does not make feet grow larger and that you have to look elsewhere for the causes of differences in shoe size. In short, wherever there is causality, there is correlation.

As Rachael points out, “this is not a case of experts simplifying a claim for a lay audience. This claim is just outright incorrect.”

It’s an interesting formulation when someone says, “We must remember X,” where X is a false statement. What is it exactly that we’re supposed to remember??

Rachael gives an example where there is causation but no correlation: “Imagine driving a car, reaching a hill and pumping the gas as you begin to go up so that your speed is constant. The correlation between pressing on the gas and the speed of the car is zero but they’re obviously causally related, it’s that the agent is optimizing speed!”

Strictly speaking, if your speed is constant, the correlation is not zero, it’s undefined. But, once you allow the speed to vary, you can get the correlation between speed and the position of the accelerator pedal to be positive, negative, or zero, even though in all cases pushing the accelerator makes the car go faster.
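To make this concrete, here is a minimal simulation of the pedal example (my own illustration, not from the post or the book; all the numbers are made up). The driver presses the pedal harder on steeper hills to hold speed near a target, the pedal causally raises speed by 1 mph per unit, and yet the observed correlation between pedal position and speed comes out essentially zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

target = 60.0                       # desired speed (mph)
grade = rng.normal(0, 1, n)         # hill steepness; hills slow the car down
slack = rng.normal(0, 0.1, n)       # imperfection in the driver's compensation
noise = rng.normal(0, 0.5, n)       # everything else affecting speed

# The driver compensates for the hill: steeper hill -> more gas.
pedal = 2.0 * grade + slack

# Causal model for speed: the pedal *does* add 1 mph per unit,
# but the hills subtract almost exactly what the driver adds back.
speed = target + 1.0 * pedal - 2.0 * grade + noise

# Correlation between pedal and speed is close to zero,
# even though the causal effect of the pedal is +1 mph per unit.
print(np.corrcoef(pedal, speed)[0, 1])
```

Holding the grade fixed, pushing the pedal one unit further still adds 1 mph in this model; the compensation is what wipes out the correlation in the observed data.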

You can also get causation without correlation from a non-monotonic relationship or from plain old selection bias. So let me just emphasize that Rachael’s example is fine and there are a zillion others too. Causation and correlation are different things; it’s just not true to say that one implies the other.
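The non-monotonic case is even easier to simulate. In this sketch (again my own illustration), x fully determines y through a U-shaped relationship, but the linear correlation is essentially zero:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 100_000)            # the cause, symmetric around zero
y = x**2 + rng.normal(0, 0.1, 100_000)   # x drives y, but through a U-shape

# Linear correlation misses the relationship entirely: ~0.
print(np.corrcoef(x, y)[0, 1])
```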

“Why did we think they could get that one right?”

The question is, how could the authors of this book have made such a clear mistake?

To answer this question, we can turn to Cass Sunstein, one of the authors, who in an interview about the book says:

When a forecaster is wrong, we think, “Why did they make that mistake?” The better question is: “Why did we think they could get that one right?”

Well put.

The authors of this new book are a psychologist, a law professor, and some dude who describes himself as “a professor, writer and keynote speaker specializing in the quality of strategic thinking and the design of decision processes.” Between them, there’s no reason to think they’d have any particular expertise in correlation, causation, or statistics. You might as well ask me to have an opinion on the non-accelerating inflation rate of unemployment or the theory of operant conditioning. If I were to write a book and include categorical statements about such things, I’d check with the experts first. The relevant skill for Kahneman here was not to be an expert on statistics or econometrics but rather to realize that his coauthors are not experts either. A Washington Post reviewer called the authors an “all-star team,” but you wouldn’t want a baseball all-star team to play basketball (unless it included, I dunno, Jackie Robinson, Michael Jordan, and Danny Ainge), and I don’t know that you’d want a psychology/biz-school/law-school all-star team to be playing statistics. Again, though, maybe this is part of the problem. These guys get too much deference, more than is good for you. In sports, you ultimately have to face the music. In celebrity academia, once you’re high enough in the stratosphere, you can stay afloat forever.

Feynman famously said, “The first principle is that you must not fool yourself.” The second principle, I guess, is not to get fooled by others. Here’s what I’m guessing happened with this book:

1. One of the two other authors, Sunstein or Sibony, fooled himself into thinking he understood correlation and causation. I guess that if you’re a smooth enough writer you can use words to paper over any conceptual difficulties.

2. Kahneman was fooled into thinking his coauthors knew what they were talking about.

So I think the answer to Sunstein’s question, “Why did we think they could get that one right?”, is that, like that famously well-dressed emperor, they were surrounded by yes-men. And remember that Sunstein’s earlier reaction to being questioned was to liken the skeptics to the former East German secret police. Take someone who gets too much positive feedback, and who actively resists negative feedback, and that’s a recipe for overconfidence, which is, ironically, one of the biases that Kahneman discussed in his earlier book.

The chain of trust

We discussed this general issue a few years ago in the context of the unstable mix of skepticism and trust that was characteristic of the Freakonomics franchise. The skepticism came because one of the main themes of Freakonomics was how everything you thought was right, was wrong. Drunk walking is worse than drunk driving, global cooling rather than global warming, etc. The trust came because, after their first book, which was mostly based on author Levitt’s research, the Freakonomics franchise pretty much ran out of original research and was reduced to promoting the work of Levitt’s friends and various randos on the internet.

Something similar seems to have happened with Kahneman. His first book was all about his own research, which in turn was full of skepticism for simple models of human cognition and decision making. But he left it all on the table in that book, so now he’s writing about other people’s work, which requires trusting in his coauthors. I think some of that trust was misplaced.

The question then arises, how is it that luminaries such as Philip Tetlock, Max Bazerman, Robert Cialdini, Rita McGrath, Annie Duke, Angela Duckworth, Adam Grant, Jonathan Haidt, Steven Levitt, and Esther Duflo thought this book was so brilliant, essential, masterful, eye-opening, important, etc. (I took these from the blurbs on the book’s Amazon page.) The simplest answer to this question is that the book really is wonderful, it just has this one little mistake. Noise is indeed an important subject, and three authors who don’t understand correlation and causation can still write an excellent book on the topic. To return to our sports analogy, suppose a dream team of baseball experts were to write a general book about sports in which, as an aside, they say something like, “We must, however, remember that in football if it’s fourth down and you’re too far away to kick a field goal, you should always punt. Only fools go for it.” It could still be a brilliant, essential, masterful, eye-opening, important book that just happens to contain this one little mistake.

A new continent?

In the above-linked interview, Sunstein says a few other things that bother me. One bit is where he writes:

One [of the things] I learned in this [book] collaboration is not to think in terms of [for instance], “Will this stock go up”? “Is this the right investment strategy?” but instead to think: “What’s the probability that this stock will go up?” “What’s the probability that this is a good investment strategy?” So rather than asking, “Is it good to invest in international stocks [versus] domestic stocks?”, it’s better to ask, “What probability do you assign to the proposition that international stocks will outperform domestic stocks in 2022?”

I mean, sure, yeah, think probabilistically. But . . . he learned that just now, in the past five years writing this book? Jeez . . . he was pretty naive five years ago. This seems like a commonplace insight to me. Don’t financial advisers tell you this all the time? We can’t know the future, we can only guess?

I mean, really, what the hell?? I’m reminded of that scene in one of David Lodge’s books where the professors of English are sitting in a circle, playing a game where they take turns listing famous books that, embarrassingly, they’ve never read. And one of them lists Hamlet. A bit too embarrassing, it turns out! Similarly, it’s kind of admirable how open Sunstein is about his former cluelessness, but it makes you wonder whether he was really the most qualified person to write a book about a topic that lots of people know about, but which until five years ago he’d never thought about.

Also, just a minor point but I don’t think it’s quite right to ask questions like “What’s the probability that this stock will go up?” I mean, sure, you can ask the question just to check that your investment advisor is on the ball, but I don’t think investment advisors should be thinking of the stock price going up or down as a binary outcome. The investment advisor should be thinking of things like expectation and tail risk. Anyway, not a big deal but perhaps revealing of Sunstein’s continuing discomfort with the concept of noise.
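As a toy illustration of that distinction (my own made-up numbers, not anything from the interview or the book): two hypothetical investments can have nearly the same probability of going up while differing a lot in expected return and downside tail, which is exactly the information the binary question throws away.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Two hypothetical annual-return distributions (illustrative numbers only).
steady = rng.normal(0.05, 0.10, n)               # modest mean, thin tails
lottery = np.where(rng.random(n) < 0.05,          # 5% chance of a large loss
                   rng.normal(-0.60, 0.10, n),
                   rng.normal(0.06, 0.10, n))

for name, r in [("steady", steady), ("lottery", lottery)]:
    print(name,
          "P(up) =", round((r > 0).mean(), 3),
          "mean =", round(r.mean(), 3),
          "5% tail =", round(np.quantile(r, 0.05), 3))
```

In this simulation both portfolios come out around a 69% chance of a positive year, but the second has roughly half the expected return and a far worse 5% tail, which is what an advisor actually needs to think about.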

What really bothered me, though, was when Sunstein said:

Unlike bias, noise isn’t intuitive, which is why we think we’ve discovered a new continent.

At first I thought this was weird because, who does this guy think he is, Christopher Columbus? Also everybody knows about noise. I can’t expect Sunstein et al. to have heard of W. E. Deming and the quality control revolution, but he’s heard of Fischer Black, right? Right?

What kind of new continent is this?

But then I realized that Sunstein kinda is like Columbus, in that he’s an ignorant guy who sails off to a faraway land, a country that’s already full of people, and then he goes back and declares he’s discovered the place.

I guess in response he might say that Deming, Black, and a few zillion other statisticians and economists might know all about noise, but we haven’t conveyed it well to policymakers or the general public. And that could well be. I could well believe that, despite all our best efforts, we haven’t educated the world on the importance of noise, and so Sunstein et al. are doing the world a great service by expressing these ideas in a readable way. Fischer Black published his paper in the Journal of Finance. A lot more people will read a popular book than will ever read anything in the Journal of Finance. Deming wrote lots and lots, but I guess he’s mostly forgotten by now. So, again, a new book could be worthwhile.

To be fair, I guess we should interpret Sunstein’s “new continent” remark in a direct analogy to Columbus. When Sunstein says he discovered something, he’s saying that he recently became aware of something that was already well known, just not known in his social circle.

Let me just conclude with the final word from Sunstein in his interview:

I would speculate that bull—-receptive people are likely to indulge their imaginations, and they might well be prone to being noisy. If someone is receptive to bull—- that’s in the form of seemingly profound or meaningful sentences that actually don’t mean anything at all, that’s a warning sign.

Well put. The guy’s got a way with words. The only thing I can’t figure out is why he says that “bull—- receptivity is not a positive thing.” It’s been good for him, no?

P.S. The question comes up in this sort of post: Why bother? Why should I care? I’m not completely sure, but one thing that bothers me about the nudgelords is that they’re going around telling everybody else what to do—or, more precisely, advertising their services to world leaders who can use their techniques to nudge us into doing stuff that they, the leaders, want us to do—but they don’t have their own house in order. It creeps me out that these people always seem to think they’re gonna be the nudgers and never the nudgees. Further discussion along these lines here.

I’m sure that my above post is unfair in the sense that these three people spent several years working hard on a book, and I’m basing my entire reaction on some combination of the title, a technical error that someone found, and an interview where one of the authors was maybe a bit too relaxed. These three pieces of information are in no way a summary of the actual book! Not even close. I’m bothered because I fear that these renowned authors may get a bit too much deference in book reviews (as in the above-noted blurbs) and I think we should be careful about that. I’m not sure we should take too seriously the musings on statistical noise of three authors who don’t have a firm grip on correlation and causation, and it seems that at least one of the authors had never thought about the topic until very recently. Hence this post. But the book could have great material. You can be the judge. If anything, this post might help sell a few copies!

P.P.S. The authors of Noise wrote a bit about their book in a recent op-ed. I think that article was mostly reasonable, but I’d prefer to use the term “variation” rather than “noise.” They say that it’s a bad thing that different judges, given the same facts, give different sentences. But different voters, given the same facts, choose different candidates, and we’re used to that! So I resist their implication that variation is a liability.

P.P.P.S. Some people emailed me about this post so let me briefly clarify. My concern about the Noise book is that, from the information I’ve seen so far, the authors don’t seem to know so much about the statistical aspects of what they’re writing about. There’s the above-mentioned error of correlation and causation (and let me emphasize that this is a general error, not dependent on the specifics of the particular example that Rachael happened to bring up), also one of the authors saying that the topic was pretty much entirely new to him. Statistics isn’t the only part of noise, but I’m a statistician so I’ll focus on that issue.

