From Philip Tetlock’s “Superforecasting”

Superforecasters have significantly beaten the CIA at predicting the future, even without access to classified information!

What Traits Do Superforecasters Have?

  • They score very high on active open-mindedness. “Beliefs are hypotheses to be tested, not treasures to be guarded” (p. 126)

  • They think slowly and systematically. They are cautious; to them, nothing is certain.

  • They think in probabilities and are highly numerate.

  • They are granular. (For example, when they make predictions, they care very much about the difference between 62% and 66%.)

  • They try to get as much information and data as possible.

  • They value diverse views and synthesize them into their own.

  • They believe, “it’s more useful to pay attention to those who disagree with you than to pay attention to those who agree.”

  • When they get new data, they constantly update and revise their views.

  • They are not wedded to any idea or agenda. When facts change, they change their mind. They don’t believe changing your mind is a sign of weakness.

  • They took a training course to help them avoid cognitive biases and heuristics.

  • When they talk, they like to use words such as, “however,” “but,” “although,” and “on the other hand.”

  • They are reflective: introspective and self-critical.

  • They are able to step back from the “tip-of-the-nose perspective” and look at other perspectives.

  • They score higher than roughly 70–80% of the population on intelligence tests, but they are not in the top 1% in IQ (135 and up).

  • They are intellectually curious, and enjoy puzzles and mental challenges.

  • They have a strong desire to improve.

  • They forecast for intrinsic reasons, not for money or fame.

  • They are humble.




Quotes from the Book that Describe the Superforecasters’ Traits.

“I’ll describe this in detail, but broadly speaking, superforecasting demands thinking that is open-minded, careful, curious, and—above all—self-critical. It also demands focus. The kind of thinking that produces superior judgment does not come effortlessly. Only the determined can deliver it reasonably consistently, which is why our analyses have consistently found commitment to self-improvement to be the strongest predictor of performance.”

“The superforecasters are a numerate bunch: many know about Bayes’ theorem and could deploy it if they felt it was worth the trouble. But they rarely crunch the numbers so explicitly. What matters far more to the superforecasters than Bayes’ theorem is Bayes’ core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.” (p. 171)

“I once asked Brian Labatte, a superforecaster from Montreal, what he liked to read. Both fiction and nonfiction, he said. How much of each? I asked. “I would say 70%…”—a long pause—“no, 65/35 nonfiction to fiction.” That’s remarkably precise for a casual conversation.” (p. 144)

Most people never attempt to be as precise as Brian, preferring to stick with what they know, which is the two- or three-setting mental model. That is a serious mistake. As the legendary investor Charlie Munger sagely observed, “If you don’t get this elementary, but mildly unnatural, mathematics of elementary probability into your repertoire, then you go through a long life like a one-legged man in an ass-kicking contest.” (p. 146)

“When Bill Flack makes a judgment, he often explains his thinking to his teammates, as David Rogg did, and he asks them to critique it. In part, he does that because he hopes they’ll spot flaws and offer their own perspectives. But writing his judgment down is also a way of distancing himself from it, so he can step back and scrutinize it: “It’s an auto-feedback thing,” he says. “Do I agree with this? Are there holes in this? Should I be looking for something else to fill this in? Would I be convinced by this if I were somebody else?” (p. 123)

“Doug knows that when people read for pleasure they naturally gravitate to the like-minded. So he created a database containing hundreds of information sources—from the New York Times to obscure blogs—that are tagged by their ideological orientation, subject matter, and geographical origin, then wrote a program that selects what he should read next using criteria that emphasize diversity. Thanks to Doug’s simple invention, he is sure to constantly encounter different perspectives. Doug is not merely open-minded. He is actively open-minded.”

“Jean-Pierre Beugoms is a superforecaster who prides himself on his willingness “to change my opinions a lot faster than my other teammates,” but he also noted “it is a challenge, I’ll admit that, especially if it’s a question that I have a certain investment in.” (p. 161)

“Coming up with an outside view, an inside view, and a synthesis of the two isn’t the end. It’s a good beginning. Superforecasters constantly look for other views they can synthesize into their own. There are many different ways to obtain new perspectives. What do other forecasters think? What outside and inside views have they come up with? What are experts saying? You can even train yourself to generate different perspectives.” (p. 123)

“The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence. To paraphrase Thomas Edison, superforecasting appears to be roughly 75% perspiration, 25% inspiration.” (p. 192)

“So why did one group do better than the other? It wasn’t whether they had PhDs or access to classified information. Nor was it what they thought—whether they were liberals or conservatives, optimists or pessimists. The critical factor was how they thought.”

Other Good Quotes From the Book:

“the average forecaster who sticks with the tens—20%, 30%, 40%—is less accurate than the finer-grained forecaster who uses fives—20%, 25%, 30%—and still less accurate than the even finer-grained forecaster who uses ones—20%, 21%, 22%.” (p. 145)

“Forecasts must have clearly defined terms and timelines. They must use numbers. And one more thing is essential: we must have lots of forecasts.”

“Forecasters who use ambiguous language and rely on flawed memories to retrieve old forecasts don’t get clear feedback, which makes it impossible to learn from experience. They are like basketball players doing free throws in the dark.” (p. 184)

“Unfortunately, aggregation doesn’t come to us naturally. The tip-of-your-nose perspective insists that it sees reality objectively and correctly, so there is no need to consult other perspectives. All too often we agree. We don’t consider alternative views—even when it’s clear that we should.”

“Forecasters who practice get better at distinguishing finer degrees of uncertainty, just as artists get better at distinguishing subtler shades of gray.”

“How well aggregation works depends on what you are aggregating. Aggregating the judgments of many people who know nothing produces a lot of nothing. Aggregating the judgments of people who know a little is better, and if there are enough of them, it can produce impressive results, but aggregating the judgments of an equal number of people who know lots about lots of different things is most effective because the collective pool of information becomes much bigger.”

“A fox with the bulging eyes of a dragonfly is an ugly mixed metaphor but it captures a key reason why the foresight of foxes is superior to that of hedgehogs with their green-tinted glasses. Foxes aggregate perspectives.”

“Stepping outside ourselves and really getting a different view of reality is a struggle. But foxes are likelier to give it a try. Whether by virtue of temperament or habit or conscious effort, they tend to engage in the hard work of consulting other perspectives.”

“the CIA gives its analysts a manual written by Richards Heuer, a former analyst, that lays out relevant insights from psychology, including biases that can trip up an analyst’s thinking. It’s fine work. And it makes sense that giving analysts a basic grasp of psychology will help them avoid cognitive traps and thus help produce better judgments.”

“What there is instead is accountability for process: Intelligence analysts are told what they are expected to do when researching, thinking, and judging, and then held accountable to those standards. Did you consider alternative hypotheses? Did you look for contrary evidence? It’s sensible stuff, but the point of making forecasts is not to tick all the boxes on the “how to make forecasts” checklist. It is to foresee what’s coming. To have accountability for process but not accuracy is like ensuring that physicians wash their hands, examine the patient, and consider all the symptoms, but never checking to see whether the treatment works.”

“So if a question with a closing date six months in the future opened, a forecaster could make her initial judgment—say, a 60% chance the event will happen by the six-month deadline—then read something in the news the next day that convinces her to move her forecast to 75%. For scoring purposes, those will later be counted as separate forecasts. If a week passes without her making any changes to the forecast, her forecast stays at 75% for those seven days. She may then spot some new information that convinces her to lower her forecast to 70%, which is where the forecast will stay until she changes it again. The process goes on like this until six months pass and the question closes. At this point, all of her forecasts are rolled into the calculation that produces the final Brier score for this one question.”

“superforecasters’ initial forecasts were at least 50% more accurate than those of regular forecasters. Even if the tournament had asked for only one forecast, and did not permit updating, superforecasters would have won decisively.” (p. 155)

“This is an extreme case of what psychologists call “belief perseverance.” People can be astonishingly intransigent—and capable of rationalizing like crazy to avoid acknowledging new information that upsets their settled beliefs.” (p. 160)

“The Yale professor Dan Kahan has done much research showing that our judgments about risks—Does gun control make us safer or put us in danger?—are driven less by a careful weighing of evidence than by our identities, which is why people’s views on gun control often correlate with their views on climate change, even though the two issues have no logical connection to each other. Psycho-logic trumps logic. And when Kahan asks people who feel strongly that gun control increases risk, or diminishes it, to imagine conclusive evidence that shows they are wrong, and then asks if they would change their position if that evidence were handed to them, they typically say no. That belief block is holding up a lot of others. Take it out and you risk chaos, so many people refuse to even imagine it.” (p. 162)

“People would disagree with someone’s assessment, and want to test it, but they were too afraid of giving offense to just come out and say what they were thinking. So they would “couch it in all these careful words,” circling around, hoping the point would be made without their having to make it. Experience helped. Seeing this “dancing around,” people realized that excessive politeness was hindering the critical examination of views, so they made special efforts to assure others that criticism was welcome. “Everybody has said, ‘I want push-back from you if you see something I don’t,’ ” said Rosenthal. That made a difference. So did offering thanks for constructive criticism. Gradually, the dancing around diminished.” (p. 202)

“Be careful about making assumptions of expertise, ask experts if you can find them, reexamine your assumptions from time to time.” (p. 186)

“They are judgments that are based on available information and that should be updated in light of changing information. If new polls show a candidate has surged into a comfortable lead, you should boost the probability that the candidate will win. If a competitor unexpectedly declares bankruptcy, revise expected sales accordingly.” (p. 153)

“So there are two dangers a forecaster faces after making the initial call. One is not giving enough weight to new information. That’s underreaction. The other danger is overreacting to new information, seeing it as more meaningful than it is, and adjusting a forecast too radically. Both under- and overreaction can diminish accuracy. Both can also, in extreme cases, destroy a perfectly good forecast.” (p. 158)

“A similar mistake can be found by rummaging through remaindered business books: a corporation or executive is on a roll, going from success to success, piling up money and fawning profiles in magazines. What comes next? Inevitably, it’s a book recounting the successes and assuring readers that they can reap similar successes simply by doing whatever the corporation or executive did. These stories may be true—or fairy tales. It’s impossible to know. These books seldom provide solid evidence that the highlighted qualities or actions caused happy outcomes, much less that someone who replicates them will get similarly happy outcomes. And they rarely acknowledge that factors beyond the hero’s control—luck—may have played a role in the happy outcomes.”
