From Kahneman’s “Noise”
The Mediating Assessment Protocol (p. 299)
- At the beginning of the process, structure the decision into mediating assessments. (For recurring judgments, this is done only once.)
- Ensure that whenever possible, mediating assessments use an outside view. (For recurring judgments: use relative judgments, with a case scale if possible.)
- In the analytical phase, keep the assessments as independent of one another as possible.
- In the decision meeting, review each assessment separately.
- On each assessment, ensure that participants make their judgments individually; then use the estimate-talk-estimate method.
- To make the final decision, delay intuition, but don’t ban it.
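The estimate-talk-estimate step in the checklist above can be sketched in code. This is a toy illustration, not anything from the book: the assessment names, member names, and ratings are all invented.

```python
from statistics import mean

def estimate_talk_estimate(first_round, second_round):
    """Aggregate one mediating assessment via estimate-talk-estimate.

    first_round:  {member: rating} made individually, before any discussion
    second_round: {member: rating} revised by each member after the discussion

    The group's rating for the assessment is the mean of the *revised*
    individual estimates, so discussion informs but does not replace
    independent judgment.
    """
    assert first_round.keys() == second_round.keys()
    return mean(second_round.values())

# Made-up example: three board members rate "regulatory_approval" on 1-5.
first  = {"ann": 3, "bob": 5, "eve": 2}   # silent, individual estimates
second = {"ann": 3, "bob": 4, "eve": 2}   # estimates revised after talking
print(estimate_talk_estimate(first, second))  # 3.0
```

The key design point mirrors the checklist: members commit to an estimate individually first, which keeps the assessments independent before the group converges.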
“I see a clear similarity between the evaluation of candidates and the evaluation of options in big decisions: options are like candidates.” (p. 291)
“This is all about setting the agenda for the board meeting in which we will discuss the deal,” she explained. “We should decide in advance on a list of assessments of different aspects of the deal, just as an interviewer starts with a job description that serves as a checklist of traits or attributes a candidate must possess. We will make sure the board discusses these assessments separately, one by one, just as interviewers in structured interviews evaluate the candidate on the separate dimensions in sequence. Then, and only then, will we turn to a discussion of whether to accept or reject the deal. This procedure will be a much more effective way to take advantage of the collective wisdom of the board.” (p. 291)
“Just like a recruiter in an unstructured interview, we are at risk of using all the debate to confirm our first impressions. Using a structured approach will force us to postpone the goal of reaching a decision until we have made all the assessments.” (p. 292)
“The deal team’s mission, as Joan saw it, was not to tell the board what it thought of the deal as a whole—at least, not yet. It was to provide an objective, independent evaluation on each of the mediating assessments.” (p. 293)
“First, he explained, the team’s analysts should try to make their analyses as objective as possible. The evaluations should be based on facts—nothing new about that—but they should also use an outside view whenever possible.” (p. 294)
“To evaluate the probability that the deal would receive regulatory approval, he said, they would need to start by finding out the base rate, the percentage of comparable transactions that are approved.” (p. 294)
“Jeff then explained how to evaluate the technological skills of the target’s product development department—another important assessment Joan had listed. “It is not enough to describe the company’s recent achievements in a fact-based way and to call them ‘good’ or ‘great.’ What I expect is something like, ‘This product development department is in the second quintile of its peer group, as measured by its recent track record of product launches.’ ” Overall, he explained, the goal was to make evaluations as comparative as possible, because relative judgments are better than absolute ones.” (p. 294)
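Jeff's quintile framing is easy to make concrete. A minimal sketch, with a made-up peer group (the launch counts below are invented, not from the book):

```python
# Place a company within its peer group, expressed as a quintile
# (1 = top 20%, 5 = bottom 20%) -- a relative judgment rather than
# an absolute label like "good" or "great".
def quintile(value, peer_values):
    ranked = sorted(peer_values + [value], reverse=True)
    rank = ranked.index(value) + 1          # 1 = best in the group
    group_size = len(ranked)
    # Which fifth of the ranked group the value falls into:
    return -(-rank * 5 // group_size)       # ceil(rank * 5 / group_size)

# Made-up data: product launches by nine peers over the same period.
peers = [12, 9, 8, 7, 6, 5, 4, 3, 2]
print(quintile(8, peers))  # the target's 8 launches -> 2 (second quintile)
```

With these invented numbers, the target lands in the second quintile of its peer group, which is exactly the kind of statement Jeff asks for in place of an absolute rating.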
“Finally, it was time to reach a conclusion about the deal. To facilitate the discussion, Jeff showed the list of assessments on the whiteboard, with, for each assessment, the average of the ratings that the board had assigned to it.
“One board member had a simple suggestion: use a straight average of the ratings. (Perhaps he knew about the superiority of mechanical aggregation over holistic, clinical judgment, as discussed in chapter 9.) Another member, however, immediately objected that, in her view, some of the assessments should be given a much higher weight than others. A third person disagreed, suggesting a different hierarchy of the assessments.
“Joan interrupted the discussion. ‘This is not just about computing a simple combination of the assessment ratings,’ she said. ‘We have delayed intuition, but now is the time to use it. What we need now is your judgment.’” (p. 297)
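The board members' dispute is simple arithmetic: a straight average versus a weighted one. A sketch with invented ratings and weights (none of these numbers appear in the book):

```python
# Mechanical aggregation of assessment ratings (all numbers invented).
ratings = {"regulatory_approval": 4.0, "tech_skills": 3.0, "valuation": 2.0}

# First member's proposal: a straight (equal-weight) average.
straight = sum(ratings.values()) / len(ratings)           # 3.0

# Second member's objection: some assessments should count for more.
weights = {"regulatory_approval": 0.5, "tech_skills": 0.3, "valuation": 0.2}
weighted = sum(weights[k] * ratings[k] for k in ratings)  # ~3.3

print(straight, weighted)
```

Either formula is mechanical aggregation; Joan's point is that the final decision uses this evidence but is ultimately a judgment, not the output of the formula.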