From Max Bazerman’s “Judgment in Managerial Decision Making”

USE DECISION-ANALYSIS TOOLS (P. 218)


Because we do not make optimal decisions intuitively or automatically, when decision quality really matters, it makes sense to rely on procedures that can help direct us toward better decisions. The field of study that specializes in giving this sort of prescriptive decision advice is generally called decision analysis. A number of books have distilled the field’s wisdom to provide useful guides for making decisions (for example, see Goodwin, 1999; Hammond, Keeney, & Raiffa, 1999). These approaches usually require you to quantify both your preferences and the value you place on each of the various decision options. Rational decision-making strategies also require you to be specific about the probabilities associated with uncertain future outcomes.


Decision analysis usually guides decision making using the logic of expected value. To compute an option’s expected value, you multiply its value by its probability. So, for instance, to compute the dollar value of a lottery ticket, you would need to multiply the dollar value of its payout by the probability of receiving that payout. Because the expected value of lottery tickets is almost always less than it costs to buy them, purchasing lottery tickets is usually not a good use of your money.
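
To make the arithmetic concrete, here is a minimal sketch in Python; the jackpot size, win probability, and ticket price are hypothetical numbers chosen only for illustration, not figures from the book.

    # Expected value of a single option: its value weighted by its probability.
    payout = 10_000_000        # hypothetical jackpot, in dollars
    p_win = 1 / 50_000_000     # hypothetical probability of winning
    ticket_price = 2.00

    expected_value = payout * p_win
    print(f"Expected value of the ticket: ${expected_value:.2f}")             # $0.20
    print(f"Expected loss per ticket: ${ticket_price - expected_value:.2f}")  # $1.80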

When a decision has multiple dimensions, such as a choice between two houses, one that is expensive and newly renovated and another whose price is more reasonable but that requires more work, the decision usually requires some sort of multi-attribute utility computation. This computation forces the decision maker to weigh her willingness to spend money against her willingness to perform home improvement work.
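
What such a multi-attribute utility computation might look like is sketched below; the attribute weights and the 0-to-10 scores for the two houses are invented purely for illustration.

    # Weighted-sum (multi-attribute utility) sketch for the two-house choice.
    weights = {"price": 0.6, "move_in_condition": 0.4}  # the buyer's trade-off

    houses = {
        "renovated but expensive": {"price": 3, "move_in_condition": 9},
        "cheaper but needs work":  {"price": 8, "move_in_condition": 4},
    }

    def utility(scores):
        # Weighted sum of 0-10 attribute scores; higher is better on both.
        return sum(weights[attr] * score for attr, score in scores.items())

    for name, scores in houses.items():
        print(f"{name}: {utility(scores):.1f}")

With these particular weights the cheaper house wins (6.4 versus 5.4); shifting weight toward move-in condition reverses the choice, which is exactly the trade-off the computation forces the decision maker to make explicit.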


Often, however, businesses need to make a series of similar decisions over and over. For instance, corporations need to decide which applicants to hire. Executives need to decide which employees to promote and how big each employee’s bonus should be. Bank loan officers need to decide whether to extend credit to loan applicants. Venture capitalists need to decide whether to fund an entrepreneur’s new venture. These complex decisions can be guided by the use of a linear model.

What is a Linear Model?


A linear model is a formula that weights and adds up the relevant predictor variables in order to make a quantitative prediction. As an example, when his older son was five, Don asked the boy’s pediatrician to predict how tall Josh would grow to be. The pediatrician offered a simple linear model in response. She said that a child’s adult height is best predicted with the following computation. First, average the parents’ heights. Second, if the child is a boy, add two inches to the parents’ average. If the child is a girl, subtract two inches from the parents’ average. Innumerable linear models such as this exist to help us make informed predictions. A linear model called PECOTA, for instance, helps baseball teams predict players’ future performances using data such as their prior performances, ages, heights, and weights (Schwarz, 2005). There is even a company that uses a secretive linear model to help movie studios predict how much money their movies will earn (Gladwell, 2006).
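
The pediatrician’s rule can be written directly as a tiny linear model; the parent heights in the example call below are made up.

    # The pediatrician's height rule as a linear model (heights in inches).
    def predicted_adult_height(mother, father, is_boy):
        parents_average = (mother + father) / 2
        adjustment = 2 if is_boy else -2  # add 2" for a boy, subtract 2" for a girl
        return parents_average + adjustment

    print(predicted_adult_height(64, 70, is_boy=True))  # 69.0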

Why Linear Models Can Lead to Superior Decisions


Researchers have found that linear models produce predictions superior to those of experts across an impressive array of domains. In addition, research has found that more complex models produce only marginal improvements over a simple linear framework.

Dawes (1979) argues that linear models are superior because people are much better at selecting and coding information (such as what variables to put in the model) than they are at integrating the information (using the data to make a prediction).

Einhorn (1972) illustrates this point in a study of physicians who coded biopsies of patients with Hodgkin’s disease and then made an overall rating of disease severity. The individual ratings were not able to predict the survival time of the patients, all of whom died of the disease. However, the variables that the physicians selected to code did predict survival time when optimal weights were determined with a multiple regression model. The doctors knew what information to consider, but they did not know how to integrate this information consistently into valid predictions.
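
The mechanics Einhorn describes, letting a regression find the weights for the variables the experts chose to code, can be sketched as follows; the data here are synthetic stand-ins, not the study’s biopsy codings.

    # Fit "optimal weights" to expert-coded variables with least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))                            # 50 cases, 3 coded variables
    true_weights = np.array([2.0, -1.0, 0.5])
    y = X @ true_weights + rng.normal(scale=0.5, size=50)   # outcome, e.g. survival time

    X_with_intercept = np.column_stack([np.ones(len(X)), X])
    weights, *_ = np.linalg.lstsq(X_with_intercept, y, rcond=None)
    print("fitted intercept and weights:", np.round(weights, 2))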


In addition to having difficulty integrating information, we are also inconsistent. Given the same data, we will not always make the same decision. Our judgment is affected by mood, subjective interpretations, environment, deadlines, random fluctuations, and many other unstable characteristics. In contrast, a linear model will always make the same decision from the same inputs. Thus, such models capture the underlying policy that an expert uses while avoiding the expert’s random error.

Furthermore, experts are likely to be affected by certain biases triggered by specific cases. In contrast, linear models include only the actual data that are empirically known to have predictive power, not the salience or representativeness of that or any other available data.

In short, linear models can be programmed to sidestep biases that are known to impair human judgment. Such bias is common in financial decisions, corporate personnel decisions, bank loan decisions, and routine purchasing decisions. In each of these domains, the decision maker must make multiple routine decisions based on the same set of variables—a task well suited to a linear model. Such models allow the organization to identify the factors that are important in the decisions of its experts. Thus, independent of their superior predictive powers, the feedback and training opportunities provided by linear models make them a valuable managerial tool.


Why We Resist Linear Models


While evidence amply supports the power of linear models, such models have not been widely used. Why not? Resistance to them is strong. Some have raised ethical concerns, such as this one described by Dawes:


I overheard a young woman complain that it was “horribly unfair” that she had been rejected by the Psychology Department at the University of California, Santa Barbara, on the basis of mere numbers, without even an interview. “How could they possibly tell what I’m like?” The answer is they can’t. Nor could they with an interview.

Dawes argues that decision makers demonstrate irresponsible conceit in believing that a half-hour interview leads to better predictions than the information contained in a transcript covering three-and-a-half years of work and the carefully devised aptitude assessment of graduate board exams. Now consider the response that Max received when he asked a well-known arbitrator to make a number of decisions as part of a study of arbitrator decision-making processes:

You are on an illusory quest! Other arbitrators may respond to your questionnaire; but in the end you will have nothing but trumpery and a collation of responses which will leave you still asking how arbitrators decide cases. Telling you how I would decide in the scenarios provided would really tell you nothing of any value in respect of what moves arbitrators to decide as they do. As well ask a youth why he is infatuated with that particular girl when her sterling virtues are not that apparent. As well ask my grandmother how and why she picked a particular “mushmelon” from a stall of “mushmelons.” Judgment, taste, experience, and a lot of other things too numerous to mention are factors in the decisions (Bazerman, 1985).

In contrast with this arbitrator’s denial of the possibility of systematically studying decision processes, research in this area does show that linear models are capable of capturing his decision-making model (or his grandmother’s choice of mushmelon).


Another argument commonly made against decision-analysis tools such as linear models is that they rule out the inclusion of intuitions or gut feelings.

In an apocryphal story, Howard Raiffa was on the faculty at Columbia and received an offer from Harvard. According to the story, he visited his dean at Columbia, who was also his friend, and asked for help with his decision. Sarcastically, the dean, borrowing from Raiffa’s writings on decision analysis, told Raiffa to identify the relevant criteria, weight each criterion, rate each school on each criterion, do the arithmetic, see which school had the best overall score, and go there.



Supposedly Raiffa protested, “No, this is a serious decision!” While he enjoys this story, Raiffa says it simply isn’t true. The more important the decision is, he continues to believe, the more important it is to think systematically about it.

Finally, people sometimes argue that the use of linear models will require difficult changes within organizations. What will bank loan officers or college admissions officers do when computers make the decisions? Such concerns express the fear that linear models make people unnecessary. In fact, people play a crucial role in these models. People decide which variables to put into the model and how to weight them. People also monitor the model’s performance and determine when it needs to be updated. Nevertheless, resistance to change is natural, and resistance to the use of linear models is clearly no exception. Overcoming a bias against expert-based, computer-formulated judgments is yet another step you can take toward improving your decision-making abilities.

We will now look more closely at two domains in which evidence shows that linear models can lead to better organizational outcomes: graduate-school admissions decisions and hiring decisions.

Improving Admissions Decisions


The value of using linear models in hiring, admissions, and selection decisions is highlighted by research on the interpretation of grades (Moore, Swift, Sharek, & Gino, 2010). There are substantial differences in the grading practices of colleges, even between institutions of similar quality and selectivity. It turns out that students from colleges with more lenient grading are more likely to get into graduate school, even after controlling for the quality of the institution and the quality of its students. In one study, due to a variant of the representativeness heuristic called the correspondence bias (Gilbert & Malone, 1995), graduate schools mistook the high GPAs of alumni from lenient-grading institutions as evidence of high performance. The correspondence bias describes the tendency to take others at face value by assuming that their behavior (or their GPAs) corresponds to their innate traits. The researchers found that this bias persisted even when those making the admissions decisions had full information about different institutions’ grading practices. It seems that people have trouble sufficiently discounting high grades that are due to lenient grading.

By contrast, it would be easy to set up a linear model to avoid this error. Indeed, Dawes (1971) did just that in his work on graduate-school admissions decisions. Dawes used a common method for developing his linear model: he first modeled the admission decisions of a four-person committee. In other words, he systematically analyzed how the committee made its admissions decisions, relying on three factors: (1) Graduate Record Examination scores, (2) undergraduate GPA, and (3) the quality of the undergraduate school. Dawes then used the variable weightings he obtained from modeling the experts in a linear model to predict the average rating of 384 other applicants. He found that the model could be used to rule out 55 percent of the applicant pool without ever rejecting an applicant that the selection committee had in fact accepted. In addition, the linear model was better than the committee itself in predicting future ratings of the accepted and matriculated applicants by faculty! In 1971, Dawes estimated that the use of a linear model as a screening device by the nation’s graduate schools could result in an annual savings of about $18 million in professional time. Adjusted for today’s dollars and the current number of graduate-school applications, that number would easily exceed $500 million. And this figure neglects many larger domains, including undergraduate admissions and corporate recruiting.
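
A hedged sketch of how such a screening model might be set up appears below; the weights, scales, and cutoff are hypothetical, not the coefficients Dawes derived from the committee’s ratings.

    # Hypothetical Dawes-style screening model over the three factors.
    def admissions_score(gre_percentile, gpa, school_quality):
        # Each input is rescaled to 0-1 before weighting.
        return (0.4 * (gre_percentile / 100)
                + 0.4 * (gpa / 4.0)
                + 0.2 * (school_quality / 6))  # school quality rated 1-6

    CUTOFF = 0.5  # below the cutoff: screen out; otherwise pass to the committee

    applicants = [("Applicant A", 85, 3.7, 5), ("Applicant B", 30, 2.5, 2)]
    for name, gre, gpa, school in applicants:
        score = admissions_score(gre, gpa, school)
        print(f"{name}: {score:.2f} ->", "advance" if score >= CUTOFF else "screen out")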

Improving Hiring Decisions


Hiring decisions are among the most important decisions an organization can make. Virtually every corporation in the world relies on unstructured, face-to-face employment interviews as the most important tool for selecting employees who have passed through an initial screening process. The effectiveness of employment interviews for predicting future job performance has been the subject of extensive study by industrial psychologists. This research shows that job interviews do not work well. Specifically, employment interviews predict only about 14 percent of the variability in employee performance (Schmidt & Hunter, 1998). In part, this figure is so low because predicting job performance is difficult and few tools do it well. Yet some assessment tools do predict performance substantially better than the unstructured interview, and at a substantially lower cost.


So why do people continue to believe so strongly in employment interviews?


Managers’ robust faith in the value of interviews is the result of a “perfect storm” of cognitive biases:

Availability: Interviewers may think they know what constitutes superior employee performance, but their information is highly imperfect. Few companies bother to collect useful data on the attributes that employees need to succeed within specific positions or within the broader organization. As a result, managers must rely on their intuitions to determine whether or not a job candidate has the qualities needed for success.


Affect heuristic: People make very quick evaluations of whether they like others or not based on superficial features, such as physical attractiveness, mannerisms, or similarity to oneself (Ambady, Krabbenhoft, & Hogan, 2006; Ambady & Rosenthal, 1993). Managers rarely revise these first impressions in the course of an employment interview (Dougherty, Turban, & Callender, 1994). Managers sometimes claim that interviews allow them to assess a potential candidate’s “fit” with the firm, but this assessment is usually not based on systematic measurement of a candidate’s qualities and is little more than the interviewer’s intuitive, affective response.

Representativeness: Intuition also leads managers to believe that if a person can speak coherently about her goals, the organization, or the job, then she will perform well at the job. For most jobs, however, interview performance is weakly related to actual job performance. Extroverted, sociable, tall, attractive, and ingratiating people often make more positive interview impressions than others. However, these traits are often less critical to job performance than other, less immediately observable traits, such as conscientiousness and intelligence.


Confirmation heuristic: After interviewing a number of people for a position and hiring one of them, managers only learn about the performance of the person selected. Without knowing whether that person is performing better than the rejected applicants would have, managers lack the data they would need to assess whether their selection mechanisms are effective (Einhorn & Hogarth, 1978).


What is a better alternative to face-to-face, unstructured employment interviews?

A number of other selection tools are available, most of which are less expensive to implement than interviews, including simple intelligence tests. But if organizations insist on conducting interviews, they ought to use structured ones in which all job candidates are reviewed by the same set of interviewers and in which each interviewer asks the same questions of each candidate (Schmidt & Hunter, 1998). In addition, interviewers’ quantitative assessments ought to be just one component fed into a linear model, along with intelligence measures, years of relevant work experience, and so on.
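
As a rough sketch of that recommendation, the structured-interview rating becomes one weighted input among several rather than the whole decision; the weights, normalizations, and example candidate below are hypothetical.

    # Hypothetical hiring model: the interview is one input, not the verdict.
    def hiring_score(structured_interview, cognitive_test, years_experience):
        # All inputs are normalized to a 0-1 scale before weighting.
        experience = min(years_experience / 10, 1.0)
        return 0.3 * structured_interview + 0.5 * cognitive_test + 0.2 * experience

    print(round(hiring_score(structured_interview=0.8,
                             cognitive_test=0.7,
                             years_experience=4), 2))  # 0.67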
