From Michael Mauboussin’s book, “The Success Equation”

“Much of what we experience in life results from a combination of skill and luck. A basketball player’s shot before the final buzzer bounces out of the basket and his team loses the national championship. A pharmaceutical company develops a drug for hypertension that ends up as a blockbuster seller for erectile dysfunction. An investor earns a windfall when he buys the stock of a company shortly before it gets acquired at a premium. Different levels of skill and of good and bad luck are the realities that shape our lives. And yet we aren’t very good at distinguishing the two. Part of the reason is that few of us are well versed in statistics. But psychology exerts the most profound influence on our failure to identify what is due to skill and what is just luck. The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us. Let me start with some examples that are clearly controlled by either luck or skill.” (p. 2)

“The key to statistical prediction is to figure out how much weight you should assign to the base rate and specific case. If the expected accuracy of the prediction is low, you should place most of the weight on the base rate. If the expected accuracy is high, you can rely more on the specific case.”

“In this example, the doctor gave the patient no reason to believe that the procedure had better than a 50/50 chance of working for him. So the patient should place almost no weight on the specific evidence that it worked for one patient, and should rely instead on the base rate in making his decision. Here’s how the weighting of the base rate and the specific case relate to skill and luck. When skill plays the prime role in determining what happens, you can rely on specific evidence. If you’re playing checkers against Marion Tinsley, you can easily predict the winner on the basis of your knowledge of Tinsley’s deadly skill. In activities where luck is more important, the base rate should guide your prediction. If you see someone win a million dollars, that doesn’t change the odds of winning the lottery. Just because someone wins at roulette, it doesn’t help you to guess where the ball will end up on the next spin. Unfortunately, we don’t usually think this way. When we make predictions, we often fail to recognize the existence of luck, and as a consequence we dwell too much on the specific evidence, especially recent evidence. This also makes it tougher to judge performance. Once something has happened, our natural inclination is to come up with a cause to explain the effect. The problem is that we commonly twist, distort, or ignore the role that luck plays in our successes and failures. Thinking explicitly about how luck influences our lives can help offset that cognitive bias.”

“Untangling skill and luck is an inherently tricky exercise, and there are plenty of limitations, including the quality of the data, the sizes of samples, and the fluidity of the activities under study. The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all. That will give you an enormous advantage over them. Some statisticians, especially in the world of sports, come across as know-it-alls who are out of touch with the human side of things. This characterization is unfair. Statisticians who are serious about their craft are acutely aware of the limitations of analysis. Knowing what you can know and knowing what you can’t know are both essential ingredients of deciding well. Not everything that matters can be measured, and not everything that can be measured matters.” (p.9)

“Consider a basketball player who makes 70 percent of her free-throw shots over a long season. You wouldn’t expect that player to make seven out of every ten shots she takes. Rather, some nights she might make 90 percent of her free throws and other nights only 50 percent. Even if she trains constantly at improving her free throws, she’ll experience the variation that arises from the workings of the neuromuscular system, which relies on a completely different system of memory from the one that allows us to recall facts. An athlete can reduce that variation in performance through practice, but removing it altogether is virtually impossible.” (p 15)

“The other point is that the very effort that leads to luck is a skill. Say that you need to complete ten interviews with prospective employers to receive one job offer. Individuals who seek only five interviews may not get an offer, but those who go through all ten interviews will have an offer in hand by the end of the process. Getting an offer isn’t luck, it’s a matter of effort. Patience, persistence, and resilience are all elements of skill.” (p. 17)

“The dictionary defines skill as the “ability to use one’s knowledge effectively and readily in execution or performance.” It’s hard to discuss skill in a particular activity without recognizing the role of luck. Some activities allow little luck, such as running races and playing the violin or chess. In these cases, you acquire skill through deliberate practice of physical or cognitive tasks. Other activities incorporate a large dose of luck. Examples include poker and investing. In these cases, skill is best defined as a process of making decisions. So here’s the distinction between activities in which luck plays a small role and activities in which luck plays a large role: when luck has little influence, a good process will always have a good outcome. When a measure of luck is involved, a good process will have a good outcome but only over time. When skill exerts the greater influence, cause and effect are intimately connected. When luck exerts the greater influence, cause and effect are only loosely linked in the short run. There’s a quick and easy way to test whether an activity involves skill: ask whether you can lose on purpose. In games of skill, it’s clear that you can lose intentionally, but when playing roulette or the lottery you can’t lose on purpose.” (p. 19)

“In activities where luck plays a larger role, skill boils down to a process of making decisions. Unlike a piano virtuoso, who will perform at a high level every night, an investor or a businessperson who makes a good decision may suffer unwelcome consequences in the short term because of bad luck. Skill shines through only if there are a sufficient number of decisions to weed out bad luck.” (p. 21)

“Ma and his team were acutely aware of the influence that luck could have and therefore stayed focused on their decision-making process. Indeed, Ma recounts an instance when he lost $100,000 in just two rounds over the course of ten minutes, even though he played his cards just right: “The quality of the decision can be evaluated by the logic and information I used in arriving at my decision. Over time, if one makes good, quality decisions, one will generally receive better outcomes, but it takes a large sample set to prove this.” In other words, he has to place a lot of bets in order to win, because this game involves a lot of skill but it also involves a lot of luck.” (p. 21)

“The record of people forecasting the behavior of a complex system, whether it’s prices in the stock market, changes in population, or the evolution of a technology, is amazingly bad. Impressive titles and years of experience don’t help, because the association between cause and effect is too murky. The conditions are changing constantly, and what happened before may not provide insight into what will happen next. Professor Gregory Northcraft, a psychologist at the University of Illinois, sums it up: “There are a lot of areas where people who have experience think they’re experts, but the difference is that experts have predictive models, and people who have experience have models that aren’t necessarily predictive.” Distinguishing between experience and expertise is critical because we all want to understand the future and are inclined to turn to seasoned professionals with good credentials to tell us what is going to happen. The value of their predictions depends largely on the mix between skill and luck in whatever activity they’re discussing.” (p. 22)

“To visualize the mix of skill and luck we can draw a continuum. On the far right are activities that rely purely on skill and are not influenced by luck. Physical activities such as running or swimming races would be on this side, as would cognitive activities such as chess or checkers. On the far left are activities that depend on luck and involve no skill. These include the game of roulette or the lottery. Most of the interesting stuff in life happens between these extremes. To provide a sense of where some popular activities belong on this continuum, I have ranked professional sports leagues on the average results of their last five seasons (see figure 1-1)” (p. 23)

“A small number of results tell you very little about what’s going on when luck dominates, because the bell curve will look fatter for the small sample than it will for the overall population. Wainer deems this the most dangerous equation because ignorance of its lessons has misled people in a wide range of fields for a long time and has had serious consequences.” (p. 24)

“Here’s the main point: if you have an activity where the results are nearly all skill, you don’t need a large sample to draw reasonable conclusions. A world-class sprinter will beat an amateur every time, and it doesn’t take a long time to figure that out. But as you move left on the continuum between skill and luck, you need an ever-larger sample to understand the contributions of skill (the causal factors) and luck. In a game of poker, a lucky amateur may beat a pro in a few hands but the pro’s edge would become clear as they played more hands. If finding skill is like finding gold, the skill side of the continuum is like walking into Fort Knox: the gold is right there for you to see. The luck side of the continuum is similar to the tedious work of panning for gold in the American River in California; you have to do a lot of sifting if you want to find the nuggets of gold.” (p. 25)
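A quick way to see this sample-size point is to simulate it. The sketch below is mine, not the book's; the 55 percent per-hand edge for the pro is an assumed number chosen only for illustration.

```python
import random

def amateur_beats_pro(hands, edge=0.55, trials=20_000):
    """Fraction of simulated matches in which the amateur ends up ahead of the pro.

    `edge` is a hypothetical probability that the pro wins any single hand;
    it is an assumption for illustration, not a figure from the book.
    """
    upsets = 0
    for _ in range(trials):
        pro_wins = sum(random.random() < edge for _ in range(hands))
        if pro_wins < hands - pro_wins:  # the amateur won more hands
            upsets += 1
    return upsets / trials

for n in (10, 100, 1000):
    print(f"{n:5d} hands: amateur ahead in {amateur_beats_pro(n):.1%} of matches")
# Roughly a quarter of the 10-hand matches go to the amateur,
# but almost none of the 1,000-hand matches do.
```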

“Most business executives try to improve the performance of their companies. One way to do that is to observe successful companies and do what they do. So it comes as no surprise that there are a large number of books based on studies of success. Each work has a similar formula: find companies that have been successful, identify what they did to achieve that success, and share those attributes with other companies seeking similar success. The approach is intuitively appealing, which explains why the authors of these studies have sold millions of books. Unfortunately, this approach comes with an inherent problem. Some of the companies were lucky, which means that there are no reliable lessons to learn from their successes. Michael Raynor and Mumtaz Ahmed at Deloitte Consulting teamed up with Andrew Henderson at the University of Texas to sort out how skill and luck contribute to the way that companies perform. First, the researchers studied over twenty thousand companies from 1965–2005 to understand the patterns of performance, including what you would expect to see as the result of luck. They concluded that there were more companies that sustained superior performance than luck alone could explain.

Next, they examined the 288 companies that were featured in thirteen popular books on high performance and tested them to see how many were truly great. Of the companies they were able to categorize, they found that fewer than 25 percent could confidently be called superior performers. Raynor, Ahmed, and Henderson write, “Our results show that it is easy to be fooled by randomness, and we suspect that a number of the firms that are identified as sustained superior performers based on 5-year or 10-year windows may be random walkers rather than the possessors of exceptional resources.” (p. 27)

“The authors of those how-to studies found success and interpreted it to create lessons that they could peddle to a credulous audience. Yet only a small percentage of the companies they identified were truly excellent. Most were simply the beneficiaries of luck. At the end of the day, the advice for management is based on little more than patterns stitched together out of chance occurrences. You have to untangle skill and luck to know what lessons you can take from history. Where skill is the dominant force, history is a useful teacher. For example, by well-established methods, you can train yourself to play music, speak a language, or compete in athletic games such as tennis and golf. Where luck is the dominant force, however, history is a poor teacher.” (p. 27)

“At the heart of making this distinction lies the issue of feedback. On the skill side of the continuum, feedback is clear and accurate, because there is a close relationship between cause and effect. Feedback on the luck side is often misleading because cause and effect are poorly correlated in the short run. Good decisions can lead to failure, and bad decisions can lead to success. Further, many of the activities that involve lots of luck have changing characteristics. The stock market is a great example. What worked in the past may not work in the future.

An understanding of where an activity is on the luck-skill continuum also allows you to estimate the likely rate of reversion to the mean. Any activity that combines skill and luck will eventually revert to the mean. This means that you should expect a result that is above or below average to be followed by one that is closer to the average.” (p. 27)

“Recall Charlie, the student who knew eighty out of one hundred facts but was tested on only twenty of them. If he scored a 90 on the first test because the teacher happened to select mostly questions he could answer, you would expect the score on the second test to be closer to 80, as his good luck would be unlikely to last.” (p. 27)
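The Charlie example is easy to check with a small simulation of my own. The 80-of-100 facts and the 20-question test come from the passage; everything else is an illustrative assumption. Condition on a lucky first score of 90 or better and look at the retest.

```python
import random

KNOWN = set(range(80))        # Charlie knows facts 0-79 out of facts 0-99
FACTS = list(range(100))

def test_score():
    """Percent score on a 20-question test drawn at random from the 100 facts."""
    questions = random.sample(FACTS, 20)
    return 100 * sum(q in KNOWN for q in questions) / 20

# Keep only the cases where the first test came out at 90 or better (good luck),
# then look at the average of the retest.
retests = []
while len(retests) < 10_000:
    if test_score() >= 90:
        retests.append(test_score())

print(sum(retests) / len(retests))  # close to 80, Charlie's true rate of knowledge
```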

“The important point is that the expected rate of reversion to the mean is a function of the relative contributions of skill and luck to a particular event. If you’re a highly skilled NBA player making free-throw shots, your shooting percentage will stand above the average most of the time. Sometimes your performance will move back toward the average, but not by very much. If the outcome is mostly due to luck, reversion to the mean will be pronounced and quick. If you’re playing roulette and win five times, you’re better off leaving the table, because you can be sure you’re going to lose as the number of plays increases. These concepts are important and are often overlooked in business, sports, and investing, not to mention in the casino.

Take another example from sports. Tennis is largely a game of skill. Top professional male players hit in excess of six hundred shots during a best-of-five set match, providing plenty of opportunity for skill to shine through (large sample). As a consequence, the ranking of the best tennis players tends to persist from year to year.” (p. 28)

“Baseball is another story. Even though its professional players are extremely skillful, baseball is a sport that involves a lot of luck. A pitcher can throw well but fail to get supporting runs from his teammates and thereby lose a game. A batter can put a ball into play and a slight difference in trajectory will determine whether it’s a hit or an out. Over a long, 162-game season, the best teams in baseball rarely win more than 60 percent of their games, as reversion to the mean powerfully drives the outcomes back toward the average. In sharp contrast to tennis, baseball has a lot of randomness. The New York Yankees were the only team to finish among the top four (based on wins) in 2009, 2010, and 2011, and they made it by a slim margin in 2010. Because there are nine defensive players on the field at any given time, and each player’s performance fluctuates, one player’s skill can easily be canceled out by another’s mistake, driving the whole system back toward the average. So no matter how skillful the individual players, a system like this tends to look and behave much more like a game of chance than tennis does.” (p. 28)

“Here is how all of this relates to skill and luck: even if we acknowledge ahead of time that an event will combine skill and luck in some measure, once we know how things turned out, we have a tendency to forget about luck. We string together the events into a satisfying narrative, including a clear sense of cause and effect, and we start to believe that what happened was preordained by the existence of our own skill. There may be an evolutionary reason for this. In prehistoric times, it was probably better for survival to take the view that we have some control over events than to attribute everything to luck and give up trying.” (p. 38)

What about business books?

“The most common method for teaching a manager how to thrive in business is to find successful businesses, identify the common practices of those businesses, and recommend that the manager imitate them. Perhaps the best-known book about this method is Jim Collins’s Good to Great. Collins and his team analyzed thousands of companies and isolated eleven whose performance went from good to great. They then identified the concepts that they believed had caused those companies to improve—these include leadership, people, a fact-based approach, focus, discipline, and the use of technology—and suggested that other companies adopt the same concepts to achieve the same sort of results. This formula is intuitive, includes some great narrative, and has sold millions of books for Collins.  No one questions that Collins has good intentions. He really is trying to figure out how to help executives. And if causality were clear, this approach would work. The trouble is that the performance of a company always depends on both skill and luck, which means that a given strategy will succeed only part of the time. So attributing success to any strategy may be wrong simply because you’re sampling only the winners. The more important question is: How many of the companies that tried that strategy actually succeeded? Jerker Denrell, a professor of strategy at Oxford, calls this the undersampling of failure. He argues that one of the main ways that companies learn is by observing the performance and characteristics of successful organizations. The problem is that firms with poor performance are unlikely to survive, so they are inconspicuously absent from the group that any one person observes. Say two companies pursue the same strategy, and one succeeds because of luck while the other fails. Since we draw our sample from the outcome, not the strategy, we observe the successful company and assume that the strategy was good. In other words, we assume that the favorable outcome was the result of a skillful strategy and overlook the influence of luck. We connect cause and effect where there is no connection. We don’t observe the unsuccessful company because it no longer exists. If we had observed it, we would have seen the same strategy failing rather than succeeding and realized that copying the strategy blindly might not work.” (p. 38)

“One of the main reasons we are poor at untangling skill and luck is that we have a natural tendency to assume that success and failure are caused by skill on the one hand and a lack of skill on the other. But in activities where luck plays a role, such thinking is deeply misguided and leads to faulty conclusions.” (p. 41)

“While scientists generally believe themselves to be objective, research in psychology shows that bias is most often subconscious and nearly unavoidable. So even if a scientist believes he is behaving ethically, bias can exert a strong influence. Furthermore, a bit of research that grabs headlines can be very good for advancing an academic’s career.” (p. 43)

“Many organizations, including businesses and sports teams, try to improve their performance by hiring a star from another organization. They often pay a high price to do so. The premise is that the star has skill that is readily transferable to the new organization. But the people who do this type of hiring rarely consider the degree to which the star’s success was the result of either good luck or the structure and support of the organization where he or she worked before. Attributing success to an individual makes for good narrative, but it fails to take into account how much of the skill is unique to the star and is therefore portable.” (p. 44)

“In 2006, Trading Markets, a company that helps people trade stocks, asked ten Playboy Playmates to select five stocks each. The idea was to see if they could beat the market. The winner was Deanna Brooks, Playmate of the Month in May 1998. The stocks she picked rose 43.4 percent, trouncing the S&P 500, which gained 13.6 percent, and beating more than 90 percent of the money managers who actively try to outperform a given index. Brooks wasn’t the only one who fared well. Four of the other nine Playmates had better returns than the S&P 500 while less than a third of the active money managers did.” (p. 47)

Sample size is really important.

“Visualizing the continuum between luck and skill can help us to see where an activity lies between the two extremes, with pure luck on one side and pure skill on the other. In most cases, characterizing what’s going on at the extremes is not too hard.”

“As an example, you can’t predict the outcome of a specific fair coin toss or payoff from a slot machine. They are entirely dependent on chance. On the other hand, the fastest swimmer will almost always win the race. The outcome is determined by skill, with luck playing only a vanishingly small role (for example, the fastest swimmer could contract food poisoning in the middle of a race and lose). But the extremes on the continuum capture only a small percentage of what really goes on in the world. Most of the action is in the middle, and having a sense of where an activity lies will provide you with an important context for making decisions.” (p. 47)

“As you move from right to left on the continuum, luck exerts a larger influence. It doesn’t mean that skill doesn’t exist in those activities. It does. It means that we need a large number of observations to make sure that skill can overcome the influence of luck. So Deanna Brooks (Playboy’s Playmate of the month) would have to pick a lot more stocks and outperform the pros for a lot longer before we’d be ready to say that she is skillful at picking stocks. (The more likely outcome is that her performance would revert to the mean and look a lot more like the average of all investments.) (p. 48)

“In some endeavors, such as selling books and movies, luck plays a large role, and yet best-selling books and blockbuster movies don’t revert to the mean over time. We’ll return to that subject later to discuss why that happens. But for now we’ll stick to areas where luck does even out the results over time.” (p. 48)

“When skill dominates, a small sample is sufficient to understand what’s going on. When Roger Federer was in his prime, you could watch him play a few games of tennis and know that he was going to win against all but one or two of the top players. You didn’t have to watch a thousand games. In activities that are strongly influenced by luck, a small sample is useless and even dangerous. You’ll need a large sample to draw any reasonable conclusion about what’s going to happen next. This link between luck and the size of the sample makes complete sense, and there is a simple model that demonstrates this important lesson. Figure 3-1 shows a matrix with the continuum on the bottom and the size of the sample on the side. In order to make a sound judgment, you must choose the size of your sample with care.” (p. 48)



“We’re naturally inclined to believe that a small sample is representative of a larger sample. In other words, we expect to see what we’ve already seen. This fallacy can run in two directions. In one direction, we observe a small sample and believe, falsely, that we know what all the possibilities look like. This is the classic problem of induction, drawing general conclusions from specific observations. We saw, for instance, that small schools produced students with the highest test scores. But that didn’t mean that the size of the school had any influence on those scores. In fact, small schools also had students with the lowest scores.” (p. 48)

“In many situations we have only our observations and simply don’t know what’s possible. To put it in statistical terms, we don’t know what the whole distribution looks like. The greater the influence luck has on an activity, the greater our risk of using induction to draw false conclusions. To put this another way, think of an investor who trades successfully for a hundred days using a particular strategy. He will be tempted to believe that he has a fail-safe way to make money. But when the conditions of the market change, his profits will turn to losses. A small number of observations fails to reveal all of the characteristics of the market.” (p. 49)

“We can err in the opposite direction as well, unconsciously assuming that there is some sort of cosmic justice, or a scorekeeper in the sky who will make things even out in the end. This is known as the gambler’s fallacy. Say you’re watching a coin being tossed. Heads comes up three times in a row. What do you think the next toss will show? Most people will say tails. It feels as if tails is overdue. But it’s not. There is a 50-50 chance of both heads and tails on every toss, and one flip has no influence on any other. But if you toss the coin a million times, you will, in fact, see about half a million heads and half a million tails. Conversely, in the universe of the possible, you might see heads come up a hundred times in a row if you toss the coin long enough.”
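A minimal sketch, not from the book, that checks both claims in the passage: after three heads in a row the next toss is still 50/50, and a million tosses still split roughly evenly.

```python
import random

random.seed(1)
tosses = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Claim 1: after three heads in a row, the next toss is still 50/50.
after_three_heads = [tosses[i] for i in range(3, len(tosses)) if all(tosses[i - 3:i])]
print(sum(after_three_heads) / len(after_three_heads))  # close to 0.5

# Claim 2: over a million tosses, heads and tails each come up close to half a million.
print(sum(tosses))  # close to 500,000
```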

“When you’re attempting to select the correct size of a sample to analyze, it’s natural to assume that the more you allow time to pass, the larger your sample will be. But the relationship between the two is much more complicated than that. In some instances, a short amount of time is sufficient to gather a relatively large sample, while in other cases a lot of time can pass and the sample will remain small. You should consider time as independent from the size of the sample.” (p. 51)

“This idea also serves as the basis for what I call the paradox of skill. As skill improves, performance becomes more consistent, and therefore luck becomes more important. Stephen Jay Gould, a renowned paleontologist at Harvard, developed this argument to explain why no baseball player in the major leagues has had a batting average of .400 or more for a full season since Ted Williams hit .406 in 1941 while playing for the Boston Red Sox.” (p. 53)

“You can readily see how the paradox of skill applies to other competitive activities. A company can improve its absolute performance, for example, but will remain at a competitive parity if its rivals do the same. Or if stocks are priced efficiently in the market, luck will determine whether an investor correctly anticipates the next price move up or down. When everyone in business, sports, and investing copies the best practices of others, luck plays a greater role in how well they do.” (p. 56)

“In other words, if everyone gets better at something, luck plays a more important role in determining who wins.” (p. 58)
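Here is a small simulation of the paradox of skill under made-up numbers: hold the spread of luck fixed, shrink the spread of skill, and watch how often the more skilled of two competitors actually wins. The standard deviations are illustrative assumptions, not figures from the book.

```python
import random

def better_player_wins(skill_sd, luck_sd=1.0, trials=100_000):
    """How often the more skilled of two competitors posts the higher score.

    Each score is skill + luck, both drawn from normal distributions; the
    standard deviations are illustrative numbers, not estimates from the book.
    """
    agree = 0
    for _ in range(trials):
        skill_a, skill_b = random.gauss(0, skill_sd), random.gauss(0, skill_sd)
        score_a = skill_a + random.gauss(0, luck_sd)
        score_b = skill_b + random.gauss(0, luck_sd)
        if (score_a > score_b) == (skill_a > skill_b):
            agree += 1
    return agree / trials

for sd in (2.0, 1.0, 0.25):  # the spread of skill narrowing as everyone improves
    print(sd, round(better_player_wins(sd), 3))
# As the variation in skill shrinks, results are decided more and more by luck.
```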

“The basic argument is easy to summarize: great success combines skill with a lot of luck. You can’t get there by relying on either skill or luck alone. You need both.” (p. 58)

“Naturally, this principle applies well beyond baseball. In other sports, as well as the worlds of business and investing, long winning streaks always meld skill and luck. Luck does generate streaks by itself, and it’s easy to confuse streaks due solely to luck with streaks that combine skill and luck. But when there are differences in the level of skill in a field, the long winning streaks go to the most skillful players.” (p. 60)

“The position of the activity on the continuum defines how rapidly your score goes toward an average value, that is, the rate of reversion to the mean. Say, for example, that an activity relies entirely on skill and involves no luck. That means the number you draw for skill will always be added to zero, which represents luck. So each score will simply be your skill. Since the value doesn’t change, there is no reversion to the mean. Marion Tinsley, the greatest player of checkers, could win all day long, and luck played no part in it. He was simply better than everyone else.

Now assume that the jar representing skill is filled with zeros, and that your score is determined solely by luck; that is, the outcomes will be dictated solely by luck and the expected value of every incremental draw for skill will be the same: zero. So every subsequent outcome has an expected value that represents complete reversion to the mean. In activities that are all skill, there is no reversion to the mean. So if you can place an activity on the luck-skill continuum, you have a sound starting point for anticipating the rate of reversion to the mean.” (p. 61)
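Mauboussin's two-jar model is easy to mimic in a few lines. The jar contents below are made up for illustration; the mechanics (one stable skill draw, a fresh luck draw for every outcome) follow the passage.

```python
import random

skill_jar = [-3, -2, -1, 0, 1, 2, 3]   # a player's stable skill, drawn once
luck_jar = [-4, -2, 0, 2, 4]           # a fresh draw of luck for every outcome

player_skill = random.choice(skill_jar)

def outcome(luck_weight=1.0):
    """One observed result: skill plus a fresh draw of luck.

    Set luck_weight to 0 for an all-skill activity (no reversion to the mean), or
    imagine skill fixed at 0 for an all-luck activity (complete reversion).
    """
    return player_skill + luck_weight * random.choice(luck_jar)

results = [outcome() for _ in range(10_000)]
print(player_skill, sum(results) / len(results))  # the long-run average settles at the skill draw
```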

In real life, we don’t know for sure how skill and luck contribute to the results when we make decisions. We can only observe what happens. But we can be more formal in specifying the rate of reversion to the mean by introducing the James-Stein estimator with a focus on what is called the shrinking factor. This construct is easiest to understand by using a concrete example. Say you have a baseball player, Joe, who hits .350 for a part of the season, when the average of all players is .265. You don’t really believe that Joe will average .350 forever because even if he’s an above average hitter, he’s likely been the beneficiary of good luck recently. You’d like to know what his average will look like over a longer period of time. The best way to estimate that is to reduce his average so that it is closer to .265. The James-Stein estimator includes a factor that tells you how much you need to shrink the .350 toward the grand average so that his numbers more closely resemble his true ability in the long run. Let’s go straight to the equation to see how it works:

Estimated true average = grand average + shrinking factor × (observed average – grand average)

The estimated true average would represent Joe’s true ability. The grand average is the average of all of the players (.265), and the observed average is Joe’s average during his period of success (.350). In a classic article on this topic, two statisticians named Bradley Efron and Carl Morris estimated the shrinking factor for batting averages to be approximately .2. (They used data on batting averages from the 1970 season with a relatively small sample, so consider this as illustrative and not definitive.)

Here is how Joe’s average looks using the James-Stein estimator:

Estimated true average = .265 + .2 × (.350 – .265)

According to this calculation, Joe is most likely going to be batting .282 for most of the season.
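The same calculation as a small helper function; the 0.2 shrinking factor is the Efron and Morris estimate quoted above and, as the text notes, is illustrative rather than definitive.

```python
def james_stein_estimate(observed_average, grand_average, shrinking_factor):
    """Estimated true average = grand average + shrinking factor * (observed - grand)."""
    return grand_average + shrinking_factor * (observed_average - grand_average)

# Joe: observed .350 against a grand average of .265, with the roughly 0.2 factor
# that Efron and Morris estimated for batting averages.
print(round(james_stein_estimate(0.350, 0.265, 0.2), 3))  # 0.282
```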

“For activities that are all skill, the shrinking factor is 1.0, which means that the best estimate of the next outcome is the prior outcome. When Marion Tinsley was playing checkers, the best guess about who would win the next game was Marion Tinsley. If you assume that skill is stable in the short term and that luck is not a factor, this is the exact outcome that you would expect. For activities that are all luck, the shrinking factor is 0, which means that the expected value of the next outcome is the mean of the distribution of luck. In most American casinos, the mean of the distribution of luck in the game of roulette is a loss of 5.26 percent, the house edge, and no amount of skill can change that. You may win a lot for a while or lose a lot for a while, but if you play long enough, you will lose 5.26 percent of your money. If skill and luck play an equal role, then the shrinking factor is 0.5, halfway between the two. So we can assign a shrinking factor to a given activity according to where that activity lies on the continuum. The closer the activity is to all skill, the closer the factor is to 1. The larger the role that luck plays, the closer the factor is to zero.” (p. 63)

“The James-Stein estimator can be useful in predicting the outcome of any activity that combines skill and luck. To use one example, the return on invested capital for companies reverts to the mean over time. In this case, the rate of reversion to the mean reflects a combination of a company’s competitive position and its industry. Generally speaking, companies that deal in technology (and companies whose products have short life cycles) tend to revert more rapidly to the mean than established companies with stable demand for their well-known consumer products. So Seagate Technology, a maker of hard drives for computers, will experience more rapid reversion to the mean than Procter & Gamble, the maker of the best-selling detergent, Tide, because Seagate has to constantly innovate, and even its winning products have a short shelf life. Put another way, companies that deal in technology have a shrinking factor that is closer to zero.

Similarly, investing is a very competitive activity, and luck weighs heavily on the outcomes in the short term. So if you are using a money manager’s past returns to anticipate her future results, a low shrinkage factor is appropriate. Past performance is no guarantee of future results because there is too much luck involved in investing. Understanding the rate of reversion to the mean is essential for good forecasting. The continuum of luck and skill, as our experience with the two jars has shown, provides a practical way to think about that rate and ultimately to measure it.” (p. 64)

“Rowing is at one extreme of the continuum between luck and skill, which is why wishing the competitors good luck makes little sense. Slot machines are at the other extreme, so the idea of a system based on skill that will allow a gambler to beat slots over time is far-fetched. But most activities combine luck and skill. The extent to which the two factors contribute to outcomes is the essential issue.

To address that issue, we have to be able to place activities somewhere along the continuum that will represent the true mix of luck and skill. We’ll have to consider what unit of analysis to use, what size the sample should be, and how time influences the activity in question. We can analyze activities at different levels, and the levels may represent different mixes of luck and skill.” (p. 67)

“When an activity is mostly skill, we need not worry much about the size of the sample unless the level of skill is changing quickly. For activities with a good dose of luck, skill is very difficult to detect with small samples. As the sample increases in size, the influence of skill becomes clearer. So you can actually place the same activity at different points along the continuum based on the size of the sample alone. Larger samples do a better job of revealing the true contributions of skill and luck.”

“First, ask if you can easily assign a cause to the effect you see. In some activities, the relationship of cause and effect is clear. You can repeat the behavior and get the same result. These are activities that are generally stable and linear. Stable means that the basic structure of the activity doesn’t change over time, and linear means that a particular action leads to the same reaction every time. If you can easily identify the cause of a given effect, you’re most likely on the skill side of the continuum. If it’s hard to tell, you’re on the luck side.

Here’s an example: As an amateur tennis player, you decide that if you simply look at the ball all the time when you’re trying to return it, you’ll be more successful. You keep track and find that, indeed, you’re returning a lot more balls when you keep your eye on the ball. Conclusion: It’s not luck. You’re really improving your skill.

Another example: You wear your Lucky Hat every time you go to the casino to play roulette. You win between $50 and $100 the first three times you do it. Never again will you gamble without your Lucky Hat. The trend holds for another few visits to the casino—wear the hat and win. But then one blustery Saturday, your Lucky Hat blows off your head and lands in the river, never to be seen again. That night you win $1,000 at roulette. The next weekend you’re so stoked about not wearing your Lucky Hat that you bet heavily and lose $2,000. You get the idea. Hard to tell cause and effect here. You’re way on the luck side of the continuum.

Take a more complex example. Consider two elements of a manufacturing business. The first is the actual manufacturing process. World-class manufacturers develop very clear processes that are highly repeatable and have very low error rates. There is a rich literature that applies statistical methods to manufacturing, with a goal of reducing costs. One well-known case in point is the Six Sigma method, designed to reduce variation in production. A company that achieves Six Sigma ability will have fewer than 3.4 defects for every million units of goods or services. General Electric and Honeywell, among others, have saved billions of dollars by implementing the method. Manufacturing is an activity that falls near the all-skill side of the continuum. A proper process using statistical control yields a favorable outcome a very high percentage of the time.

The second element of a manufacturing business is simply deciding which products to manufacture. We call this strategy, and even a well-conceived strategy can fail catastrophically, as we saw with the Sony MiniDisc, because success is not a linear process. It depends on a large number of factors, including competitors, technological developments, regulatory changes, general economic conditions, and the preferences of fickle customers, to name just a few. Although better strategies will lead to more successes over time, a good process provides no guarantee of a good outcome. So even within the same company, some activities will rely mostly on skill and others will depend a great deal on luck.” (p. 70)

“As a side note, as individuals advance in their careers, their duties often slide toward the luck side of the continuum. The tools that made an executive very successful as the head of manufacturing may be of little use when he is promoted to CEO, a position in which it’s much more difficult to find causes for specific effects. The nature of feedback changes, too, which is also challenging. In activities strongly influenced by skill, feedback is generally clear. When luck stands between cause and effect, giving and receiving quality feedback becomes much more difficult.” (p. 72)

“What is the rate of reversion to the mean? To answer this question you need some way to measure performance. You can, for example, tally up a sports team’s wins and losses. You can record a company’s profit or an investment manager’s success at beating a benchmark such as the S&P 500. In each of these cases, you can calculate the results and get a good sense of how quickly they are moving toward the average. Slow reversion is consistent with activities dominated by skill, while rapid reversion comes from luck being the more dominant influence.” (p. 72)

“The third and final question is: Where can we predict well? In other words, where are experts useful? Answering this question requires examining and assessing the track record of expert predictions. When the predictions of experts tend to be uniform and accurate, skill is the driving factor. When experts have wide disagreement and predict poorly, lots of luck is generally involved. Areas that have high predictability include engineering, some areas of medicine, and games such as chess and checkers. For instance, tournament chess players earn a rating based on how much they win or tie and what the opposing player’s rating was at the time of the game. If you’re rated two hundred points higher than your opponent, you’ll be expected to win 75 percent of the time. If you win, your rating will go up a small amount. If you lose, your rating will drop by much more. Therefore, your rating is a reliable predictor of how well you’ll perform, even though your skill is constantly changing. Experts are notoriously poor at predicting the outcomes of political, social, and economic systems. Researchers documented that fact decades ago. But what’s surprising is not their abysmal record of prediction but rather that society continues to believe them. The reason the experts are so hopelessly lost is that political, social, and economic systems are complex adaptive systems. The results you see, such as booms and busts in the stock market, emerge from the interaction of lots of individual agents. Complex adaptive systems effectively obscure cause and effect. You can’t make predictions in any but the broadest and vaguest terms.” (p. 72)
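The chess claim can be sanity-checked with the standard Elo expected-score formula (the formula is standard chess-rating practice; the book does not spell it out): a 200-point rating gap implies an expected score of roughly 0.76, in line with the 75 percent quoted.

```python
def elo_expected_score(rating_gap):
    """Expected score for the higher-rated player under the standard Elo formula."""
    return 1.0 / (1.0 + 10 ** (-rating_gap / 400))

print(round(elo_expected_score(200), 2))  # about 0.76, in line with the 75 percent quoted
```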

“If luck contributes 48 percent to the game of football, then the models used to predict scores should be accurate about 75 percent of the time, according to Brian Burke. This is consistent with the performance of various computer models and oddsmakers. Sports provide particularly convenient examples for this kind of analysis because the games have binary outcomes, a win or a loss. We can easily envision the distributions at the continuum’s extremities. In most other contexts, such as business and investing, we don’t know what the extremes look like. But this method does reinforce the essential idea that you always need to consider a null model—the simplest model that might explain the outcomes—when assessing results. In many cases, the basic question is whether luck is sufficient to explain results.” (p. 78)
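Here is the null-model idea in practice, as a sketch with assumed numbers: before crediting skill, ask how likely a given record would be if every game were a coin flip.

```python
from math import comb

def prob_at_least(wins, games, p=0.5):
    """Chance of at least `wins` wins in `games` games if every game were a coin flip."""
    return sum(comb(games, k) * p**k * (1 - p) ** (games - k) for k in range(wins, games + 1))

# Hypothetical record: 12-4 over a 16-game season. Could luck alone explain it?
print(round(prob_at_least(12, 16), 3))  # about 0.038 under the pure-luck null model
```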

“It is also true that the number of opportunities a team has to score will greatly determine the influence of luck on the outcome of the game. Basketball players take possession of the ball at eight or nine times the rate of football players. The more chances they have to score, the more influence skill has.” (p. 80)

“Investing is another endeavor where we could benefit from teasing apart skill and luck. We can define skill as the ability to take actions that will predictably generate a risk-adjusted return in excess of an appropriate benchmark, such as the S&P 500, over time. It is impossible for investment managers to generate returns in excess of the benchmark in the aggregate. The reason is that the market’s return must simply be the sum total of the results of the managers (or close to it). Since managers charge fees for their services, the return to investors is less than that of the market.


Researchers studying the investment industry have answered each of the three questions from the first method of placing activities on the continuum between luck and skill. Prices in markets reflect the interaction of lots of investors, and we know that identifying a cause for any given effect in these kinds of systems is notoriously difficult. Booms and crashes have been consistent features of markets for centuries, and there is no simple way to anticipate the behavior of markets in the short-term.” (p. 87)

“Reversion to the mean is a powerful force in investing, too. John Bogle, a luminary of the investment industry, illustrates this by ranking mutual funds in four groups based on results in the 1990s and seeing how those groups performed in the 2000s. The group that was most successful, the top fourth of the mutual funds, handily outpaced the average fund in the 1990s. But it suffered a 7.8 percent drop in relative performance since then. The bottom fourth in the 1990s showed a sharp 7.8 percent relative gain in the 2000s. This powerful and symmetrical reversion to the mean suggests that investing involves a large dose of luck.”

“Importantly, reversion to the mean in the investment business extends its influence well beyond the realm of mutual funds. It exerts its power over companies with small capitalization as well as large capitalization, over value and growth investing, over bonds as well as stocks; and it spans geographic boundaries. There are few corners of the investment business where reversion to the mean does not hold sway.” (p. 88)

“The paradox of skill is an effective way to explain why markets are so hard to beat consistently. In 1975, Charles Ellis, the founder of the consulting firm Greenwich Associates, wrote an essay called “The Loser’s Game.” In it, he noted: “Gifted, determined, ambitious professionals have come into investment management in such large numbers during the past 30 years that it may no longer be feasible for any of them to profit from the errors of all the others sufficiently often and by sufficient magnitude to beat the market averages.” Over those decades, investing went from being dominated by individuals to being dominated by institutions. As the population of skilled investors increased, the variation in skill narrowed, and luck became more important.” (p. 88)

“There is a big difference between saying that the short-term results of investment managers are mostly luck and saying they are all luck. Research shows that most active managers generate returns above their benchmark on a gross basis, but that those excess returns are offset by fees, leaving investors with net returns below those of the benchmark. Considering the evidence on balance, it is reasonable to conclude that there is evidence of skill in investing. However, only a small percentage of investors possess enough skill to offset fees. As a result, investing, especially over relatively short periods of time, is more a matter of luck than of skill.” (p. 90)

“Figure 4-9 provides an estimate of where a handful of activities lie on the continuum. While we can never place an activity with pinpoint precision, the qualitative and quantitative methods in this chapter provide useful guidelines. It is essential to emphasize that it is not where activities lie per se that is important but rather what that position means for helping us to make decisions. A common mistake is to use a process for making a decision that is appropriate for activities that are nearly all skill and then apply it to an activity that is mostly luck.” (p. 90)

“Trouble arises when individuals rely too heavily on their experience in making automatic decisions. When we age, we tend to avoid exerting too much cognitive effort and deliberating extensively over a decision that needs to be made. We gradually come to rely more on rules of thumb. This means that we make poorer choices in environments that are complex and unstable. Business and investing are examples of realms where intuition often fails. Researchers who studied people making investments found that decisions about those investments grew less wise as people aged. In other words, skill declines with age.” (p. 97)

“Consider the case of two graduate students of equal ability applying for a faculty position. Say that by chance one is hired by an Ivy League university and the other gets a job at a less prestigious college. The professor at the Ivy League school may find herself with better graduate students, less teaching responsibility, superior faculty peers, and more money for research than her peer. Those advantages would lead to more academic papers, citations, and professional accolades. This accumulated edge would suggest that, at the time of retirement, one professor was more capable than the other. But the Matthew effect explains how two people can start in nearly the same place and end up worlds apart. In these kinds of systems, initial conditions matter. And as time goes on, they matter more and more.” (p. 118)

“In 1981 the late Sherwin Rosen, who was an economist at the University of Chicago, wrote a very influential paper called “The Economics of Superstars.” He observed that a few superstars—“performers of first rank”—earn incomes that are vastly larger than performers with only modestly less ability. While fans may prefer the superstars to lesser performers, he argued that the difference in skill is too modest to explain the sizeable gap in pay. He suggested that technology is the primary factor that causes the phenomenon. Imagine two singers of similar ability, with one being only slightly better than the other. In the era before recording technology was developed, the singers would have earned a comparable sum from their concerts, with the superior singer perhaps earning a modest premium consistent with the difference in skill. But once recording technology was introduced, consumers would no longer have to settle for the lesser of the two and would buy the record of the better singer almost every time. So her earnings would soar relative to her rival. Despite the similarity of talent, this becomes a winner-take-all market. In their book The Winner-Take-All Society, Robert Frank and Philip Cook suggest that increased competition for talent is another factor that creates outsized pay for top performers. Frank offers the example of a board of directors that must select between two candidates to become the next CEO of a company that earns $10 billion a year in profits. If one candidate can make better decisions than the other, Frank argues, the company’s profits may be 3 percent higher than they would be otherwise, creating an additional $300 million in income. So even a modest difference in the abilities of the CEO candidates is worth a difference in pay that seems enormous to the rest of us. Moreover, companies today are more willing to hire a CEO from outside the company than they were a few years ago. This mobility has made CEOs even more valuable, just as free agency has increased salaries for baseball players.” (p. 123)

“We have seen that path dependence and social interaction lead to inequality. Technology and competition also contribute to this phenomenon. But there is a crucial assumption underlying all of these models of superstardom: that we know exactly who is most skillful. That assumption, as we will see, is false. Social influence leads not just to inequality, but to a fundamental lack of predictability as well. More skill gives people an edge in attaining success, but like the red marbles that started out in the majority, that edge offers no assurance that they will end up on top.” (p. 125)

“Consider the analysis that showed that pay for CEOs is consistent with market capitalization. The same researchers tried to find differences in the skill of CEOs in the largest companies. They couldn’t find much, if any. For example, their model suggests that replacing the CEO of the 250th-largest company in the United States with the CEO of the largest company, at the pay of the smaller company, would increase the market value of the smaller company by 0.016 percent. The decimal point isn’t misplaced: that’s basically zero.” (p. 125)

“The other subjects were divided into eight groups, with 10 percent of the population in each. They were free to rate and download songs, but they could also see how many people had downloaded each song before them. One version of the experiment enhanced the social effect by showing the download counts in descending order. In effect, the eight social worlds were parallel universes. They all started with the same initial condition but were left to go in any direction that social influence took them. Salganik, Dodds, and Watts found that quality did matter. Songs that were ranked as inferior by the independent group tended to perform poorly before the other groups as well. Likewise, songs that the independent group ranked high were among the most popular in the other groups. But in the groups where social influence was at work, there was also substantial inequality. The market shares of the best songs were much higher in the social worlds than in the independent world. This is all consistent with the research on inequality done by Sherwin Rosen, Robert Frank, and Philip Cook. In what was perhaps the most significant finding, MusicLab showed that while quality is roughly correlated with commercial success, there is little predictability with hits. Really bad songs did poorly, but an average to above-average ranking in the independent condition made a song a contender to be a smashing success. For example, “Lockdown,” a song by a group called 52metro, ranked twenty-sixth in the independent condition, right in the middle of the pack. But it was the number-one hit in one of the social influence worlds and number forty in another. As one of the researchers, Matthew Salganik, notes, “It’s as if luck is more important for good songs than for bad songs.” The beauty of MusicLab was that it allowed the researchers to separate these important effects. Comparing the independent world to the social world revealed exactly how social interaction created the environment in which inequality could emerge. And by running several social worlds at once, the experiment showed how hard it is to predict which songs would succeed. While its design was simple, MusicLab demonstrates exactly why we are so limited in our ability to pick hits.” (p. 129)

“At this point, you probably accept the intellectual case that it is possible for skill, or quality, to play only a minor role in commercial success. But, if you are like me, you have a hard time accepting that there isn’t something just a little special about The Da Vinci Code, Titanic, or the Mona Lisa. The very fact that they are so wildly popular seems to be all the evidence you need to conclude that they have some special qualities that make them stand out above all the rest. But all three were surprises. Our minds are expert at wiping out surprises and creating order, and order dictates that these products are special. We are very good at fooling ourselves about our own success, a phenomenon that psychologists call the self-serving attribution bias. It is common for us to attribute success to our own terrific skill, even in endeavors that are determined mostly by luck. Part of the explanation is that we see ourselves as capable agents. We can do things. We can make things happen. So we assume that our skill caused the success we experience. On the other hand, we readily attribute failure to external causes, including bad luck.” (p. 130)

“The success of a company is rarely owing to the efforts of one person. It is typically the result of the work of a large number of people, as well as the environment in which they operate. Still, we tend to assign credit to individual people for collective success.” (p. 131)

“This analysis highlights one of the key themes of Moneyball, the best-selling book by Michael Lewis that describes how the Oakland A’s built a winning baseball team on the cheap by finding players whose skills were underpriced. The common way to assess players, Lewis wrote, was to look at their five tools: the ability to run, throw, field, hit, and hit with power. When managers “talked about scoring runs, they tended to focus on team batting average.” The A’s realized that the percentage of times a player got on base was much better at predicting how many runs a player would score and that “a player’s ability to get on base—especially when he got on base in unspectacular ways—tended to be dramatically underpriced in relation to other abilities.” (p. 140)

“A glance at figure 7-5 shows why the A’s approach could work. The coefficient of correlation for on-base percentage, .44, is higher than that for batting average at .37. The higher level of persistence tells us that the number of times a player gets on base will tell us more about his skill than his batting average will. A look at the value of this statistic in predicting what will happen tells an even clearer story. The number of times players get on base has a .92 correlation with the number of runs the team will score, making batting average look relatively poor by comparison.” (p. 141)
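For what it's worth, the "coefficient of correlation" here is year-to-year persistence, which is straightforward to compute. The sketch below uses placeholder numbers, not real batting data, purely to show the calculation.

```python
import statistics  # statistics.correlation requires Python 3.10 or later

def persistence(this_season, next_season):
    """Year-to-year correlation of a statistic: higher persistence points to more skill."""
    return statistics.correlation(this_season, next_season)

# Placeholder numbers purely to show the computation; these are not real batting data.
obp_year1 = [0.330, 0.365, 0.310, 0.400, 0.345]
obp_year2 = [0.340, 0.355, 0.320, 0.380, 0.350]
print(round(persistence(obp_year1, obp_year2), 2))
```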

“The business of investing is filled with institutional practitioners who are smart and motivated. As we have already seen, the industry is highly competitive, and reversion to the mean is a strong force. That it is difficult for fund managers to deliver returns in excess of the market, adjusted for risk, is a testament to the efficiency of the market. The idea behind efficiency is that the prices of assets reveal all known information. This is not strictly true, and markets are notorious for going to extremes. But only a small percentage of investors have shown an ability to systematically beat the market over time. It’s not that investors lack skill, it’s the paradox of skill: as investors have become more sophisticated and the dissemination of information has gotten cheaper and quicker over time, the variation in skill has narrowed and luck has become more important.” (p. 146)

“The approach you take to developing skill depends on where an activity lies on the luck-skill continuum. For activities that take place in environments that are stable and in which luck plays a small role, deliberate practice improves skill. Under those conditions, people can develop true expertise. For example, if you practice playing the violin, the music you produce will sound consistently better over time. If you want to learn to type well, you can follow a formula, set aside time each day to practice, and actually see your mistakes disappear on the page. You get instant and reliable feedback, and the more you practice, the faster you get and the fewer mistakes you make. When activities are more influenced by luck, you won’t get that kind of feedback, at least in the short term. What you do is not connected strongly to the result. So the best approach is to focus on the process you’re using. If you practice playing poker, as you develop your skill, what you win will still fluctuate, because the game is partly determined by luck. But as you gain expertise, you are more likely to win over time. Most jobs have elements that combine tasks that are familiar with those that are unfamiliar. In those situations, checklists can help a lot in improving your skill. In most cases, checklists don’t contain anything that you don’t already know. They just ensure that you actually perform all the tasks you’re supposed to perform. Furthermore, whenever there are distractions, checklists help direct and manage your attention. It’s surprising how many fields there are in which checklists could help but are not used.” (p. 156)

“Checklists: A Structured Way to Manage Attention
Most jobs combine tasks that are procedural with tasks or situations that are novel. Medicine is a good example. A doctor may use a set of guidelines to prepare a patient for surgery, and then proceed without knowing what complications he or she may face during the operation. The problem is that most doctors allocate the bulk of their attention to the surgery itself and pay less attention to the procedural part of their job. As a result, they sometimes improperly handle the part of the treatment controlled by a set of rules and thereby put the patient’s health at risk. The problem is not that the doctors don’t know how to do the procedural tasks; it’s that their attention is elsewhere.” (p. 163)

“Most of us don’t spend time dwelling on our errors. But if we did, we could create checklists that would eliminate those errors. To adopt a checklist is to embrace humility and admit our own fallibility. None of us can flawlessly cope with a complex world. A grocery list is a checklist that we use in a very low-risk environment. Why not use one when the stakes are high as well?” (p. 167)

“If an activity involves luck, then how well you do in the short run doesn’t tell you much about your skill, because you can do everything right and still fail, or you can do everything wrong and succeed. For activities near the luck side of the continuum, a good process is the surest path to success in the long run.” (p. 176)

“So I took matters into my own hands and set up a tournament. I used a service from Amazon.com called Mechanical Turk. The site allows you to offer micropayments to people willing to complete a “human intelligence task,” often a question that needs an answer. I asked my editor for her favorite seven titles and added Think Twice, paired them randomly, and offered turkers $0.10 to “select the best title for a book.” The titles that won each round moved on to the next round, just as in a sports tournament. In the end, Think Twice prevailed (otherwise, I wouldn’t be telling this story), followed by Perfectly Preventable Errors, Ways of Our Errors, and Counter Your Intuition. Hundreds of people from around the world participated in the tournament by voting, and the whole project cost only a couple of hundred dollars. The point of the story is that we can do a better job of figuring out cause and effect than we do. The basic idea is very simple. Let’s say you want to know whether an advertising campaign is effective. You run the ad for a selected group, called the experimental group. You don’t run the ad for a statistically similar group, called the control group. You then compare the purchases the members of the two groups make. If the experimental group bought a sufficiently different amount of the product you advertised than the control group did, you have a reason to believe that the advertising caused the difference. You can run these experiments on a small scale so that failure is not too costly, and then increase the size of the bets only when an advertisement has proved that it can sell your product.” (p. 187)
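
The experimental-versus-control comparison described above is, in effect, a simple two-group test. The sketch below uses made-up purchase data and a two-sample t-test, which are my assumptions rather than a procedure from the book; it only shows how one might check whether the difference between the groups is bigger than chance alone would plausibly produce.

    # Minimal sketch of the ad experiment, with simulated (made-up) purchase counts.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    experimental = rng.poisson(lam=1.3, size=5000)  # customers who saw the ad (hypothetical)
    control = rng.poisson(lam=1.2, size=5000)       # statistically similar customers who did not

    # Two-sample t-test: is the gap in average purchases larger than luck would explain?
    t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)

    print(f"mean purchases (ad):    {experimental.mean():.2f}")
    print(f"mean purchases (no ad): {control.mean():.2f}")
    print(f"p-value: {p_value:.3f}")  # a small p-value suggests the ad made a difference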

“By definition, luck is something that no one can control. But there are ways that you can manage it more effectively. The main lesson from Colonel Blotto is that in competitive interactions, the strong should seek to simplify to emphasize their advantage in skill and the weak should try to add randomness to dilute the stronger player’s advantage. This approach has proved useful in sports, business, and war, and yet many people fail to use it because of tradition, a lack of awareness, or because they are afraid of damaging their careers by doing something different and then failing.” (p. 195)
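
Colonel Blotto itself is easy to simulate, and a small Monte Carlo makes the "weak should randomize" point concrete. The troop counts, number of battlefields, and strategies below are arbitrary assumptions chosen for illustration, not parameters from the book.

    # Sketch of the Colonel Blotto intuition: a weaker player (fewer soldiers)
    # does better by randomizing than by mirroring the stronger player's plan.
    import numpy as np

    rng = np.random.default_rng(1)
    FIELDS, WEAK, TRIALS = 3, 80, 100_000
    strong_plan = np.array([34, 33, 33])       # strong player (100 soldiers) splits evenly

    def weak_wins(weak_plan):
        """Weak player wins the game by taking a majority of the battlefields."""
        return (weak_plan > strong_plan).sum() > FIELDS // 2

    # Strategy 1: weak player mirrors the even split -- loses every battlefield.
    print("even split wins:", weak_wins(np.array([27, 27, 26])))   # False

    # Strategy 2: weak player randomizes the allocation each game.
    wins = 0
    for _ in range(TRIALS):
        cuts = np.sort(rng.integers(0, WEAK + 1, FIELDS - 1))
        random_plan = np.diff(np.concatenate(([0], cuts, [WEAK])))
        wins += weak_wins(random_plan)
    print(f"random split win rate: {wins / TRIALS:.2%}")           # well above zero

With a mirrored split the weaker side loses every battlefield, but by scrambling its allocation it wins a meaningful fraction of games; that is the sense in which added randomness dilutes the stronger player's advantage.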

“This illusion often confuses doctors. In clinical practice, they commonly measure weight, cholesterol concentration, and blood pressure to see if you have a disease or the risk factors associated with it. Doctors are likely to treat you if they find an extreme value for any of those variables—high blood pressure, for instance. They’ll give you a drug to try to bring your blood pressure closer to the mean. Here again, we know that the entire population of people with high blood pressure during their first visit will see on average a moderation in blood pressure on their second visit, whether individual people are treated or not. Because of errors in measurement and biological variation, the correlation between two blood pressure tests for the same person is not perfect. So there will be reversion to the mean no matter what the treatment. Naturally, the tendency is to assume that the treatment worked and caused the reduction in blood pressure, and in some cases this might be true in part. But the illusion of feedback will persuasively suggest that the treatment was the cause and lower blood pressure was the effect.” (p. 202)
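
The blood-pressure example can be made concrete with a short simulation in which no one is treated at all. The population mean, measurement noise, and treatment threshold below are arbitrary assumptions; the point is only to show the mechanism described above.

    # Sketch: reversion to the mean with no treatment effect whatsoever.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    true_bp = rng.normal(130, 10, n)         # each patient's underlying blood pressure
    visit1 = true_bp + rng.normal(0, 8, n)   # first measurement (with error/variation)
    visit2 = true_bp + rng.normal(0, 8, n)   # second measurement, nothing done in between

    high = visit1 > 150                      # patients flagged as "high" on the first visit
    print(f"flagged patients, visit 1 mean: {visit1[high].mean():.1f}")
    print(f"flagged patients, visit 2 mean: {visit2[high].mean():.1f}")

The flagged group's second reading is lower on average even though nothing was done, because part of the extreme first reading was measurement error and biological variation that does not persist.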

“We are now ready to bring together a few of the ideas we’ve been considering so that we can see how to make practical use of reversion to the mean. The first idea comes from the paper, “On the Psychology of Prediction,” published in 1973 by Daniel Kahneman and Amos Tversky. They said that there are three types of information that are relevant for a statistical prediction: the prior information, or base rate; the specific evidence about the individual case; and the expected accuracy of the prediction. The trick is determining how to weight the information. To answer that question, we can examine the idea of persistence, which we measure through a coefficient of correlation. High correlations are generally consistent with skill and allow for more accurate predictions. Low correlations are indicative of more luck and make specific predictions less accurate. Recall that the most useful statistics are persistent and predictive. That means the next outcome looks like (is highly correlated with) the previous one. And it often means that you can control what happens next (high correlation between what you want and what you get through your effort). Combining the ideas of weighting the information (How much does it count?) and persistence (Will you see the same results again?) gives you specific guidance in judging what the next result is likely to be. In cases where the correlations are low, reversion to the mean is very powerful. Indeed, the best estimate of the next outcome is in many cases the base rate, which is simply the average of the distribution. Yet this is not how most people make their decisions. Rather than assuming strong reversion, they generally act as if good news predicts good news and bad news predicts bad.” (p. 206)
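
One way to make the weighting concrete is a simple shrinkage rule: weight the specific evidence by the correlation and the base rate by one minus the correlation. The function below is a minimal sketch of that idea; the league average and the player's observed average are hypothetical numbers of my own, with the .37 persistence figure borrowed from the batting-average discussion earlier.

    # Sketch of the weighting rule: the lower the correlation (more luck),
    # the more the estimate reverts toward the base rate.
    def estimate_next(observed, base_rate, correlation):
        """Weight the specific case by r and the base rate by (1 - r)."""
        return correlation * observed + (1 - correlation) * base_rate

    # Hypothetical hitter who bats .350 in a league averaging .270, with a
    # year-to-year correlation of about .37 for batting average:
    print(f"{estimate_next(observed=0.350, base_rate=0.270, correlation=0.37):.3f}")  # ~0.300

Most of the weight goes to the base rate because the correlation is low, so the forecast sits much closer to the league average than to the hot streak.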

“The next piece of specific guidance is simpler. When correlations are high between the action you take and the result you get, you need not revert very much, and you can rely more on the specific evidence about the individual case. The best estimate of the next outcome is the current outcome. This works when skill is the primary influence at work. Tennis matches and foot races are examples. And, as we saw with batters in baseball, some outcomes within an activity may be easier to predict than others. For baseball players, strikeout rate (strikeouts divided by plate appearances) reflects the interaction solely between the pitcher and hitter and is mostly determined by skill. As a result, it has a high correlation between cause and effect. Many more factors affect batting average (hits divided by at-bats), including weather, fielding, and minute differences in how the bat hits the ball. Batting average has a much lower correlation with skill than strikeout rate, and is therefore less predictable.” (p. 207)

“The location of an activity on the continuum provides guidance on how much reversion to the mean is necessary in making your predictions. High correlations imply limited reversion to the mean; the best estimate for the next outcome is something close to the previous one. Low correlations require substantial reversion to the mean, and the most logical guess for the next outcome is the average. Psychologists have demonstrated that we typically fail to regress to the mean as much as we should. Finally, the luck-skill continuum will give you some sense of when you are most likely to get fooled by randomness. The basic challenge is that our minds naturally assign causes to all that we see, whether an event is the result of skill or luck. A positive outcome on the skill side of the continuum should clearly be chalked up to good skill, but our minds are lazy enough to attribute a good result on the luck side to skill as well. (This generally doesn’t apply to pure luck activities, including the lottery, although even there you hear explanations using causal attribution.) A good example is an investor who succeeds in the short run in spite of a poor investment process. The success itself will look like skill to the investor and plenty of others. Whenever randomness explains a result as well as or better than skill does, we are at risk of being fooled.” (p. 216)
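
A quick simulation shows how easily a poor process can look like skill over short horizons. The 1 percent annual drag and 20 percent volatility below are assumptions chosen only for illustration.

    # Sketch: investors whose process has a slightly negative edge still end up
    # ahead of the market surprisingly often over short horizons.
    import numpy as np

    rng = np.random.default_rng(7)
    n_investors, edge, vol = 10_000, -0.01, 0.20   # annual excess return: mean -1%, sd 20%

    for years in (1, 5, 20):
        excess = rng.normal(edge, vol, size=(n_investors, years)).sum(axis=1)
        beat = (excess > 0).mean()
        print(f"{years:2d} years: {beat:.0%} of poor-process investors are ahead of the market")

Over a single year nearly half of them are ahead purely by luck, and even after many years a large minority still are; short-run results are a weak signal of process quality.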

“One way to avoid hindsight bias is to engage in counterfactual thinking, a careful consideration of what could have happened but didn’t. If we accept that x played a role in causing y, then we have to consider how events would have unfolded had x not happened. History is largely a narrative of cause and effect. After the fact, those connections appear as hardened fact, so we have to make a distinct effort to consider how things could have turned out differently. As Philip Tetlock and his colleagues argue, it is healthy to maintain some equilibrium “between factual and counterfactual methods of framing our questions about what had to be and what could have been.”” (p. 225)

“Finally, organizations that are serious about improving their performance must honestly and precisely measure how well their actions turn out.  Measurement provides the basis for the feedback that allows for the continual improvement of skill.

Good coaches are valuable because of their deep knowledge of the skill and an outsider’s ability to see and critique a student’s performance…It can seem almost demeaning to ask for advice. Even so, we can all benefit from coaching, no matter how good we think we are.

Being truly open to feedback is difficult because it implies change, something we would prefer to avoid. One simple and inexpensive technique for getting feedback is to keep a journal that tracks your decisions. Then, when the results of that decision are clear, write them down and compare them with what you thought would happen. The journal won’t lie. You’ll see when you’re wrong. Change your behavior accordingly.”

“Perhaps the most important idea is that the rate of reversion to the mean is related to the coefficient of correlation. If the correlation between two variables is 1.0, there is no reversion to the mean. If the correlation is 0, the best guess about what the next outcome will be is simply the average. In other words, when there’s no correlation between what you do and what happens, you’ll see total reversion to the mean. That’s why there’s always a small expected loss when you play roulette, whether you’ve just lost or won chips. Simply having a sense of correlations for various events can help guide us in making predictions.”
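
The roulette point can be made concrete with a quick expected-value calculation (using a single-zero European wheel as the example, which the passage does not specify): a one-dollar bet on a single number pays 35 to 1 and wins with probability 1/37, so the expected result is (1/37)(+35) + (36/37)(−1) = −1/37, a loss of about 2.7 cents per dollar bet. Because each spin is independent, that expectation is the same whether the previous spin won or lost.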

“On-base percentage correlates strongly with how many runs the team scores. That percentage is also a reasonably persistent statistic, since it is a good measure of the skill of hitters. In contrast, batting average has a weaker correlation with how many runs a team scores. In other words, it’s less predictive. And since it reflects more luck, it is also less persistent. We can conclude that a team’s on-base percentage is the better statistic to use in predicting how a team’s offense is going to do. You don’t have to do any calculations to grasp this idea.”

“[It] seems like common sense, right? Yet it’s shocking how often companies use statistics that have nothing to do with their own strategy or even the broader goal of making money. Furthermore, companies use those statistics to determine how much money their executives make. The old saying is, “What gets measured, gets managed.” If we measure the wrong things, we will not achieve our goals.”
