Our brains can make errors in projects because our decisions may be biased by the following:

Cognitive Biases in Planning and Project Management

  • Ambiguity (Uncertainty) Aversion

    Ambiguity aversion, or uncertainty aversion, is the tendency to prefer the known over the unknown. It is especially relevant in risk identification, where a preference for known risks over unknown risks may draw attention away from considering the unknowns in a project.

    Ambiguity is a type of uncertainty. It is common to distinguish between three degrees of uncertainty: ignorance, which implies no knowledge at all; risk, in which uncertainty is expressed as an exact numerical probability; and ambiguity, which marks a condition in between the former two (Keren & Gerritsen, 1999).

    Ambiguity aversion is a preference for known probabilities (risk) over unknown probabilities (uncertainty), or in other words, the desire to avoid unclear circumstances, even when this will not increase expected utility. Low ambiguity outcomes cover a range of situations: (a) a constant act that results in the same outcome in every situation, (b) a constant act that results in an outcome with the same probability (risk) in every situation, or (c) a constant act that results in an outcome in every situation that is associated with familiar sources of uncertainty (Blavatskyy, 2012). The latter is termed source preference, referring to the fact that not only the degree of uncertainty matters, but the source as well (Abdellaoui, Baillon, Placido, & Wakker, 2011; Tversky & Fox, 1995).

    An illustration of source preference is the classical two-color paradox by Ellsberg (1961): one urn (or source) contains fifty black balls and fifty red balls (the known urn), while another urn contains a hundred red and black balls in an unknown proportion (the unknown urn). Ellsberg found participants willing to exchange bets both within the known and the unknown urn (red for black or vice versa). However, participants were not willing to exchange a bet from the known urn to the unknown urn.

    The willingness to exchange within, but not between, urns suggests that people distinguish between different sources of uncertainty. In testing the two-color paradox with the general population (N = 1,935), Dimmock, Kouwenberg, & Wakker (2012) found the majority of people to be ambiguity averse for events with moderate and high likelihood. However, for events with low likelihood, the majority of people were ambiguity seeking. This is inconsistent with the assumption of universal ambiguity aversion, but it has previously been found in laboratory settings as well (Wakker, 2010).
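
    To make this concrete, here is a minimal sketch (in Python, our own illustration rather than anything from the cited studies) showing that, if every composition of the unknown urn is treated as equally likely, a bet on either color from either urn wins roughly 50% of the time, so the preference for the known urn cannot be explained by expected value alone.

    import random

    def draw_known_urn():
        # Known urn: exactly 50 red and 50 black balls.
        return random.choice(["red"] * 50 + ["black"] * 50)

    def draw_unknown_urn():
        # Unknown urn: 100 balls in an unknown red/black split.
        # Assumption: every composition from 0 to 100 red balls is equally likely.
        n_red = random.randint(0, 100)
        return random.choice(["red"] * n_red + ["black"] * (100 - n_red))

    def win_rate(draw, bet_color, trials=100_000):
        # Fraction of simulated draws in which the chosen color comes up.
        return sum(draw() == bet_color for _ in range(trials)) / trials

    print("known urn, bet red:    ", win_rate(draw_known_urn, "red"))
    print("known urn, bet black:  ", win_rate(draw_known_urn, "black"))
    print("unknown urn, bet red:  ", win_rate(draw_unknown_urn, "red"))
    print("unknown urn, bet black:", win_rate(draw_unknown_urn, "black"))

    All four win rates come out close to 0.50, yet most people still prefer betting on the known urn.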

    Ambiguity aversion has been adopted to explain, among others, limited participation in the stock market (Cao, Wang, & Zhang, 2005; Easley & O’Hara, 2009), increased tax compliance when the uncertainty of the probability of being audited rises (Snow & Warren, 2005), and the preference for established brands over new ones (Muthukrishnan, Wathieu, & Xu, 2009).

  • Anchoring Effect

    The anchoring and adjustment effect is a cognitive bias in which people begin with a suggested reference value that later becomes an anchor on which subsequent estimates are based. In other words, if you hear the number 10, for example, and are then asked to estimate an activity duration, your brain will subconsciously reference that anchor of 10, which may cause you to estimate the activity closer to 10 days. Different starting points generate different estimates, each biased toward the original value (Tversky & Kahneman, 1974). The anchoring effect was first studied in the 1970s (Tversky & Kahneman, 1974). Numerous researchers have studied this cognitive bias, from experiments with product purchase prices (Dodonova, 2009) to guesses about the number of physicians in a given geographical area (Wilson, Houston, Etling, & Brekke, 1996).

    One behavioral experiment asked people to write down the last three digits of their phone number and multiply them by one thousand (for example, 678 becomes 678,000). The results indicated that people's subsequent estimates of house prices were influenced by the phone-number anchor.

    Anchoring can occur in a myriad of ways. For example, each subsequent number used in estimation will be compared against the initial value or number referenced, and this can bias judgment toward clustering around the initial value. Experimental results indicate the anchoring effect may occur even when there is no logical reason to consider the number. Neither offering participants an incentive to be accurate nor warning them in advance about the anchoring bias eliminated the effect (Epley & Gilovich, 2006; Wilson et al., 1996).

    The effects of anchoring were reviewed in one study of 40 years of literature on the bias (Furnham & Boo, 2011). The review confirmed anchoring as a robust effect, although extreme anchor values are less influential than moderate ones (Wegener, Petty, Detweiler-Bedell, & Jarvis, 2001).

    In project management, research has shown that the way activity estimation questions are asked may change the outcome and accuracy of the estimate. Introducing an anchor value changes the estimate, depending on how the question is framed (Jørgensen, 2004). Anchoring was a crucial component in the experiment, where framing the question in different ways around a set of anchors was found to change the accuracy of the overall estimate.

    Further support for anchoring was shown in research on question framing for estimating effort in project activities. Results indicated providing an initial time frame for activity estimation resulted in the creation of an anchor around that time frame, and thus an underestimation of effort. Removing the anchor by instead asking how much effort was required to accomplish an activity resulted in less underestimation of effort (Jørgensen & Halkjelsvik, 2010).

    An underestimation of effort may be a significant problem for projects, as projects rely on estimates of activity durations, resources, and cost. Before projects are planned, business cases are developed for cost-benefit analysis, with rough estimates of the schedule and cost of project completion to determine return on investment. This initial estimate may become an anchor from which future project planning efforts are projected. The initial anchored value may put the accuracy of the final output at risk by skewing it toward the original anchor, as individuals are prone to stay with initial values and make inadequate adjustments away from them. A change in the initial value, then, results in a correspondingly changed final value (Son & Rojas, 2011). To reduce the effects of anchoring in duration or resource estimates, the estimator or predictor should not be given a suggested estimate in advance. For example, telling the estimator to stay within a 50-day duration, or suggesting that the activity is expected to take only 50 days, will anchor the estimator's mind to the 50-day number, causing an inaccurate and potentially optimistic plan.
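
    As a rough illustration (a toy model of our own, not a formula from the studies cited above), anchoring is often described as insufficient adjustment away from a suggested value. The sketch below blends an estimator's unanchored judgment with the anchor using an assumed weight, simply to show how a suggested 50-day figure pulls an 80-day judgment downward.

    def anchored_estimate(unanchored_days, anchor_days=None, anchor_weight=0.4):
        # Toy model (assumption, not a validated formula): the final estimate is a
        # weighted blend of the estimator's own judgment and the suggested anchor,
        # reflecting insufficient adjustment away from the anchor.
        if anchor_days is None:
            return unanchored_days
        return (1 - anchor_weight) * unanchored_days + anchor_weight * anchor_days

    print(anchored_estimate(80))                  # 80 days: no anchor offered
    print(anchored_estimate(80, anchor_days=50))  # 68.0 days: pulled toward the 50-day suggestion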

    The anchoring effect has been studied in neuroscience (Li et al., 2017), with evidence that the effect could even be changed with Transcranial Direct Current Stimulation (see example here: https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01079/full).

  • Attribution Error

    People suffer from self-serving attribution bias, whereby they overestimate the importance of their own judgment when making adjustments to statistical forecasts (Hilary & Hsu, 2011; Libby & Rennekamp, 2012). But what is attribution error exactly? It is a very recognizable bias, of which we all have been guilty at one time or another. Attribution error is the tendency for people to attribute another’s actions to factors internal to the person (e.g., character, motivation), while attributing one’s own actions to factors external to the self and out of one’s own control (e.g., getting sick and missing a deadline, being distracted because of troubles at home).

    A recognizable example is the project employee who arrives late to the project meeting, or when you yourself arrive late to the project meeting. We tend to attribute the other person's tardiness to their own fault: they are lazy, they didn't get up in time, or they didn't keep track of their agenda. If you are late, however, you attribute it to the traffic jam, the telephone call that held you up, or a family emergency. Fundamental attribution error can lead to friction between colleagues, as it distorts the image you have of each other. While you know everything in your own life that can play a role in events such as being late for a meeting, you do not know the entire collection of life events of your colleague that may have led to their tardiness. You never see the whole picture. In this sense, it is a heuristic: we take a mental shortcut based on the few things we know of the other person in order to form a complete picture.

    Overcoming attribution error is difficult, as it is so ingrained in our thought process. However, there are a few actions you can take. One, of course, is getting to know the other person better, thereby gaining empathy and knowledge about their lives. Sharing a coffee in the break room is always a great place to start. Second, and perhaps a bit more formal, is noting down five positive characteristics of the person you are starting to view in a negative light. This may reset your attitude towards them. And third, as with all biases, it is important to simply be aware of what you are doing and how you are making your attributions. This self-awareness can help in mitigating your attribution error response.

    Scans of the brain have identified certain regions (Moran et al., 2014) associated with attribution error (see example here: https://dash.harvard.edu/bitstream/handle/1/13457155/jocn_a_00513.pdf?sequence=1&isAllowed=y)

  • Confirmation Bias

    Confirmation bias (Wason, 1960) describes the tendency of people to seek or evaluate information in a way that fits with their existing beliefs, thinking, and preconceptions. Confirmation bias has also been shown to be related to unmotivated processes, including primacy effects and anchoring, i.e., a reliance on information encountered early in a process (Nickerson, 1998). This bias has often been considered one of the most dangerous biases, as it tends to direct people away from rational and logical conclusions and can sometimes be intentional in nature.

    As people evaluate risk and other uncomfortable facts in the project, watch for the occurrence of confirmation bias, as it is one of the most prevalent and may be a way for people to avoid the mental discomfort associated with cognitive dissonance.

    Confirmation bias is often mitigated by considering the opposite. In other words, when faced with confirming what you already believe, consider the alternative to what you believe. What is the opposite viewpoint, and could that be more accurate?

    Click the link here to see an example of confirmation bias on the brain:

    http://affectivebrain.com/wp-content/uploads/2019/12/s41593-019-0549-2.pdf

  • Hot-Cold Empathy Gap

    Humans have a difficult time predicting how they will behave in the future. A hot-cold empathy gap happens when people underestimate the impact of visceral states (e.g. being angry, in pain, or hungry) on their behavior or preferences (Loewenstein, 2005).

    In projects, the hot-cold empathy gap may be seen when making predictions regarding project work. For example, people may not anticipate how they will feel when they fall behind schedule in the future. The hot-cold empathy gap can be responsible for plans or forecasts that are significantly pessimistic or optimistic, because the person making the prediction does not accurately estimate the impact of their future feelings when the prediction turns out to be off.

  • Gambler's Fallacy

    Gambler's fallacy relates to the independence of subsequent observations. Let us explain with an example: a die is rolled and comes up 6 three times in a row. We see this as highly improbable and are sure that a fourth roll will not be a 6. However, every event, or every throw of the die, is independent of the previous throw. Thus, rolling 6-6-6-6 is as likely as rolling 1-5-4-2.
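
    A quick check makes the arithmetic explicit: any specific sequence of four independent rolls has probability (1/6)^4, and the chance that a single roll comes up 6 stays at 1 in 6 regardless of what came before. A minimal sketch:

    from fractions import Fraction
    import random

    # Any specific sequence of four independent rolls has the same probability.
    p_specific_sequence = Fraction(1, 6) ** 4
    print(p_specific_sequence)       # 1/1296, for 6-6-6-6 and 1-5-4-2 alike

    # A single roll comes up 6 about 1 time in 6, no matter what came before:
    # the die has no memory.
    trials = 600_000
    sixes = sum(random.randint(1, 6) == 6 for _ in range(trials))
    print(sixes / trials)            # roughly 0.167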

    We find that hard to believe, though, as we often think that events are interrelated. Sometimes we consciously know that they are not (as you now know about rolling dice), but our intuition insists that they are (Rogers, 1998). People may, for instance, avoid choosing lottery numbers that won last time, yet the previous draw's numbers are in no way related to the next draw. They would be related if winning numbers were taken out of rotation, but the 'pot' of numbers stays the same every round.

    The neural correlates of perceived control in the gambler's fallacy (Shao et al., 2016) have been identified in neuroscience (see example here: https://onlinelibrary.wiley.com/doi/pdf/10.1002/hbm.23098)

  • Halo Effect and Horn Effect

    The halo effect refers to the fact that when we have certain positive impressions about a person or company in one area, this positivity spills over into other areas. When this happens with negative impressions, we talk about the horn effect. If people are shown a picture of a well-groomed man with expensive-looking clothes, they may infer from this that the person is intelligent. However, this information is based on nothing but appearance and is not supported by any reliable information.

    Attractiveness in particular is a cause of the halo effect and has been linked to perceived life success and personality (Wade & Dimaria, 2003). In daily work life, the halo (or horn) effect may influence performance appraisals, for example. An enthusiastic employee may receive a positive appraisal even though their work is not up to par (Schneider, Gruman, & Coutts, 2012). As you may suspect, the halo effect is heavily subject to a person's beliefs and perceptions, be they positive or negative (e.g., stereotypes).

  • Herd Behavior

    Herd behavior is a phenomenon from the study of social psychology. It states that people in a group may act differently than they would on their own, in order to conform to social rules. This conformity to social rules leads to social acceptance. Individuals also believe that the group, as a collection of people, has a larger chance of being right than an individual alone, leading them to accept a collective decision or viewpoint.

    A famous investigation of this phenomenon was done by Solomon Asch (1951). He invited participants to a 'simple visual experiment,' where they were asked to judge which of three lines on a board was the longest. The difference from the shorter lines was obvious, so there should have been no confusion. The participant, however, was not alone. He was in a group of people who were, unbeknownst to the participant, all confederates of Asch. When asked which line was the longest, all confederates chose an obviously shorter line.

    Asch found that about one-third of the participants tended to follow the group’s faulty judgment and repeat their choice of the short line. Over several trials of the experiment, three-quarters of the participants conformed to the majority rule at least once. Herd behavior and group conformity play an important role in decision-making. Think, for instance, of stock market bubbles in the domain of finance (Banerjee, 1992). Herd behavior is influenceable: it can be increased by fear (e.g. Economou et al., 2018), uncertainty (e.g. Lin, 2018), or a shared identity of decision-makers.

  • Hindsight Bias

    I knew it all along! We have all uttered this phrase at times. This is, in fact, called hindsight bias, where the probability of an event occurring seems higher after the event has occurred than before. Think of the financial market crash several years ago. Many books have been published stating it was unavoidable and that the authors saw it coming. You would expect this to be published as a warning before the event, not after. The bias is connected to the availability and representativeness heuristics.

    Another effect of hindsight bias is that it can change our memory. Our recollection of events can be influenced when we are given new information and incorporate it into our existing recall (Mazzoni & Vanucci, 2007). Hindsight bias can be a significant problem in areas where accurate recollection is important, such as court cases. In project management, the premortem technique can be used to turn hindsight bias to our advantage: we are asked to imagine being in the future, after the project has failed. What could have caused the failure? Were there risks that could have been avoided? This kind of prospective hindsight has been shown to improve risk detection.

  • Information Avoidance (Deliberate Ignorance)

    Information avoidance (Golman et al., 2017) refers to situations where people choose not to obtain knowledge that is available. In behavioral finance, for example, studies have shown that investors are less likely to check their portfolio when the stock market is down than when it is up, which has been studied under the term the ostrich effect (Karlsson et al., 2009).

    Have you ever turned your head away from something you don't want to see? What about having a thought of potential project failure, arising from a known risk, enter your mind, and quickly trying to clear that thought from your head in an effort to deny its existence?

    Information avoidance has been studied in many domains and disciplines. It has also been researched under different names, such as:

    • Deliberate Ignorance (Kutsch & Hall, 2010)

    • Willful Ignorance (Ramasesh & Browning, 2014)

    • The Ostrich Effect (Karlsson, Loewenstein, & Seppi, 2009)

    • Strategic Ignorance (Van der Weele, 2012)

    Information avoidance or deliberate ignorance isn’t as obvious as one might think. Sometimes it happens so quickly in our minds we do not actively acknowledge its existence. Once again, think back to that time when you’ve wanted to shield yourself from that thing you just really didn’t want to know about. Since 80% of us are more optimistic than we are pessimistic, this is natural. We want to forecast our future in a positive light. Therefore we naturally shield ourselves from things that might challenge that belief. We keep ourselves in this state of positive forecasting every day, if not every minute. So, you might say we’ve become so used to it we don’t even actively think about it. Now, if it is fairly natural to shield ourselves from things that might feel bad, why would we be any different on a project? In most cases, we are not.

    There are really two ways to look at ignorance, according to Kutsch and Hall (2010): plain and simple error (unintentional), and irrelevance (more intentional). Kutsch and Hall (2010) break the irrelevance category down into three subdomains:

    • Untopicability

    • Taboo

    • Undecidability

    We will start by defining untopicability. This is information that is considered off-topic, which is the most obvious kind of irrelevance. It amounts to limiting information on risks and other things that may be pertinent but are considered outside the range of importance in the given scenario. Think of it this way: you're in a planning meeting and bring up an external risk to the project. The risk is perhaps out of the project's control, so it is declared to be something that doesn't need focus.

    Next, we have the taboo category. Kutsch and Hall (2010) define this as a “moral and/or cautionary restriction placed on action based on what is deemed inappropriate.” This is a big one. I’m sure you’ve been in meetings before where it just became really socially uncomfortable to bring up something that might challenge our unrealistic view of project issues. In this case, you’ve entered the taboo category, where exposure to potential project risk may cause anxiety, so no one discusses it.

    The final category is undecidability. This one concerns the search for a true or false answer. If there is a lack of data for predicting a risk, it is easy for stakeholders to take the 'out' of not knowing whether the risk should be considered true. In this case, the team deems the risk not pertinent, and it gets removed from the list.

    There are often instances where either you have a choice, your manager has a choice, or everyone in the room has a choice to bring up the uncomfortable risk. And that choice may determine whether or not your project fails. But think about it this way. What if you bring up that uncomfortable potential risk? You may now have the option to mitigate it, and by mitigating it increase the probability of your project succeeding.

  • Ingroup Bias

    Ingroup bias, a social psychological construct, is the preference for one's own group over outgroups (Hewstone, Rubin, & Willis, 2002; Machunsky, Meiser, & Mummendey, 2009; Mackie & Smith, 1998; Taylor & Doria, 1981). Two primary theoretical viewpoints attempt to explain ingroup bias: realistic conflict theory and social identity theory. Realistic conflict theory assumes that demand for scarce resources drives competition and intergroup conflict, resulting in ingroup bias (Jackson, 1993). Social identity theory assumes a person's need to identify with a social group as an underlying cause of ingroup bias (Tajfel & Turner, 1986). Ingroup bias can be characterized by behaviors such as discrimination, prejudice, and stereotyping, as members of the ingroup disfavor members of other groups (Hewstone et al., 2002).

    Team diversity plays a role in the effect of ingroup/outgroup dynamics. A study of team projects showed a tendency for teams to favor ingroup members who are similar over outgroup team members who are dissimilar, with higher trust placed in ingroup members. Higher team functioning may occur with more homogeneous teams. However, as ingroups become more diverse within themselves and the frame of reference dilutes, misunderstanding increases (Nygard, Bender, Walia, Kong, Gagneja, & LeNoue, 2011).

    A review of research done in 2018 (Molenberghs et al., 2018) found various neuroscience studies that show group dynamics associated with ingroup bias (see example here: https://www.frontiersin.org/articles/10.3389/fpsyg.2018.01868/full)

  • Less-is-Better Effect

    When options are evaluated separately rather than together, decision-makers focus less on important attributes and are influenced more by attributes that are easy to evaluate. The less-is-better effect is a preference reversal between joint and separate evaluation: an objectively lesser option may be preferred when options are evaluated separately, but not when they are compared side by side (Hsee, 1998).

    This bias can have a significant impact on breaking down activities into smaller components. People may exhibit it when choosing between sets of complex information and simpler sets. As a result, the bias may sometimes prevent a full evaluation of scope and stop Work Breakdown Structure (WBS) elements from being broken down effectively.

    This cognitive bias could easily be considered one of the most impactful to project planning. People desire simplicity. However, simple does not necessarily mean better, more accurate, or truer. Though our brain may have a "good feeling" about something being less or simple, our System 1 may just be fooling us.

    The less-is-better effect can plague the project in many ways, including preventing the team from:

    • Considering more risk

    • Breaking down the scope

    • Unpacking activities into greater detail

    • Evaluating resource needs

  • Mental Accounting

    First, a quote from The Big Bang Theory: (Series 04, Episode 22 – The Wildebeest Implementation)

    Raj: Here, go buy yourself a scone.

    Sheldon: All right.

    Sheldon: I’d like to buy a scone.

    Server: Oh, I’m sorry, we’re out. We have muffins.

    Sheldon: They sound delicious, but this money is earmarked for scones.

    Mental accounting, a concept from behavioral economics, states that people treat money differently depending on its source or its intended use (Thaler, 1999). For instance, if you get a bonus at work, you may feel more inclined to spend it on frivolous things than you would your regular paycheck. Or, like Sheldon Cooper, once you have a destination in mind for the money you have, you may be reluctant to spend it on something else. Money is, in fact, interchangeable or fungible, and has no labels. However, due to mental accounting, we often treat money as being labeled. We think of the value of money in relative rather than absolute terms. We attach value to the deal and what we get out of it (transaction utility; Thaler, 1985).

    Investors, for instance, often see gains as a separate 'pot of gold' that can be used for more high-risk investments, thereby losing sight of the complete picture of the portfolio (Thaler & Johnson, 1990). Banks use this phenomenon to offer multiple bank accounts with different goals (Zhang & Sussman, 2018). In project management, the financial budget is partitioned into different goals across different phases. Once it is set in place, project managers may find it difficult to transfer money from one goal to another.

  • Myopic Loss Aversion

    Another financially important bias is myopic loss aversion. This occurs when we focus too much on short-term losses, at the expense of more long-term financial benefits (Thaler et al., 1997). It is a matter of framing (Kahneman & Lovallo, 1993), which we discuss further in future training on reframing loss. In project management, losses can occur in earlier phases yet lead to financial gains in later periods. Due to myopic loss aversion, however, we may focus too much on the initial losses and lose sight of the bigger picture. The feeling of loss can also arise in many other circumstances, such as loss of status or reputation, missed milestones, and so on.

    See here for examples of loss aversion in the brain: https://www.jneurosci.org/content/jneuro/33/36/14307.full.pdf

  • Naïve Allocation

    Naïve allocation refers to people's preference to spread limited resources evenly across the available options. A project manager may be tempted to spread the budget evenly across the phases of the project, while the startup phase may warrant more budget than the end phase. A related bias is diversification bias: people's preference to spread consumption choices across a variety of goods.

    Both biases can be used to 'nudge' people in a certain direction. For instance, consumers can be steered toward choosing healthier food if the menu is subdivided into different categories for the healthy items (“fruits,” “vegetables”) but not for the unhealthy ones (“candies and cookies”). This subdivision leads consumers to choose more healthy options because the healthy items are displayed as a wider range of choices (Johnson et al., 2012).

    A predictor may be tempted to spread out the resources and cost across too many activities, reducing the focus and impact. The similar Diversification Bias can have the same effect. In an attempt to mitigate discomfort with low resource availability, the predictor may opt for spreading out the resources. The planning facilitator should be aware of this, as it may be an indication that the resource quantities are not realistic from the start, causing the predictor to succumb to naïve allocation. If naïve allocation appears to be a problem, the facilitator should review the initial resource quantities again with the predictor.
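
    To illustrate the difference, here is a small sketch with invented numbers: the phases, the total budget, and the relative needs are all assumptions, chosen only to contrast an even (naïve) split with a split weighted by estimated need.

    # Hypothetical figures for illustration only.
    budget = 1_000_000                                                     # assumed total budget
    phase_needs = {"startup": 0.40, "execution": 0.45, "closeout": 0.15}   # assumed relative needs

    even_split = {phase: budget / len(phase_needs) for phase in phase_needs}
    weighted_split = {phase: budget * need for phase, need in phase_needs.items()}

    for phase in phase_needs:
        print(f"{phase:9s}  even: {even_split[phase]:>9,.0f}   weighted: {weighted_split[phase]:>9,.0f}")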

  • Optimism Bias

    Optimism bias (Costa-Font, Mossialos, & Rudisill, 2009), also known as unrealistic optimism (Weinstein, 1980), is the tendency to believe in the reduced risk of facing an undesirable event compared to others. People expect the future to be positive, with minimal evidence to support their expectations. Scans of the brain, with functional magnetic resonance imaging (fMRI), indicate decreased optimism when remembering past events, and increased optimism when thinking about the future. Past events may be more constrained, while future events are open to interpretation, allowing people to mentally detach themselves from possible adverse events (Sharot et al., 2007).

    Optimism bias is prevalent in projects, with 20-45 percent of projects not meeting original cost and schedule baselines (Flyvbjerg, 2006). Optimism is problematic in that it may cause planners to delay other projects, resulting in the use of unanticipated resources (Min & Arkes, 2012). Optimism bias in project planning and control has also been examined in the context of organizational dynamics, where the organization plans many projects before the plans are transferred to the project team for execution and control. Furthermore, when a group of individuals is generally optimistic, group discussion makes them even more optimistic, causing even more overly aggressive planning (Du, Zhao & Zhang).

    Click link here to see optimism bias on the brain: http://affectivebrain.com/wp-content/uploads/2014/09/Neural-mechanisms-mediating-optimism-bias.pdf

  • Overconfidence Effect

    Closely related to optimism bias is the overconfidence effect. This occurs when people’s self-confidence is greater than their performance warrants (Pallier et al., 2002). How can we measure this? The usual way is to have people fill out a general knowledge test and have them indicate their confidence level. The actual score on the test can then be compared to the indicated confidence level – the latter is usually higher than the actual performance.
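
    A minimal sketch of how that comparison could be scored (the quiz data below are invented; the measure is simply mean stated confidence minus actual accuracy):

    # Invented responses to a five-question general knowledge quiz.
    confidence = [0.90, 0.80, 0.95, 0.70, 0.85]    # stated confidence per question
    correct    = [True, False, True, False, True]  # whether each answer was right

    mean_confidence = sum(confidence) / len(confidence)   # 0.84
    accuracy = sum(correct) / len(correct)                # 0.60
    overconfidence = mean_confidence - accuracy           # +0.24: confidence exceeds performance

    print(f"mean confidence {mean_confidence:.2f}, accuracy {accuracy:.2f}, "
          f"overconfidence {overconfidence:+.2f}")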

    Overconfidence in project management can lead to the underestimation of risks and the overestimation of success. Moreover, it can increase the planning fallacy, further discussed in the Chapter on Unpacking and Premortem.

    Click link here to see an fMRI study of confidence in the brain: https://academic.oup.com/scan/article/11/12/1942/2544442

  • Pain of Paying

    In the context of financially related biases, there is the fact that people do not like to spend money. They experience a 'pain of paying' (Zellermayer, 1996). This is because we are averse to 'losing' our money (see also Loss Aversion). While this pain is important for self-regulating our spending behavior (Prelec & Loewenstein, 1998), it may also lead to a frugal attitude when one is not warranted.

    Imagine working as a project manager on a big construction project. It may be tempting to go for the cheaper materials, because the more expensive option causes the pain of paying. Often, however, we trade off money against quality. There are individual differences in how people spend money: some of us are very frugal, while others spend without a second thought. Even the method of payment can make a difference: the pain of paying is lower when using a credit card as opposed to cash, because the loss of money is less visible.

  • Present Bias

    Present bias refers to our tendency to give stronger weight to payoffs in the present than to those in the future (O’Donoghue & Rabin, 1999). When people in an experiment were asked whether they would like to receive $10 now or $50 in a year, the majority opted for the low but instant payoff. As is the case with loss aversion and other financially related biases, we are myopic in our choices and preferences. In other words, we prefer instant gratification, and we are impatient when it comes to money.

    You will see the impact of present bias all throughout this manual, as the tendency for a preference for reward, positive feelings, and avoidance of mental discomfort is strongest in the present. Present bias, also known as Hyperbolic Discounting, can have a strong impact on the consideration of risk, because humans often seek satisfaction in the present moment with less consideration of future consequences. This causes an especially skewed view of the future, and risk consideration is most impacted because risk is something that belongs to future events.
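
    One common way to model this is hyperbolic discounting, where the subjective value of a delayed amount is V = A / (1 + kD). The sketch below uses an assumed discount parameter k, chosen only to illustrate a strongly present-biased chooser, to show how $10 now can feel worth more than $50 in a year.

    def hyperbolic_value(amount, delay_days, k=0.02):
        # V = A / (1 + k * D); k = 0.02 per day is an assumed value used
        # only to illustrate a strongly present-biased decision-maker.
        return amount / (1 + k * delay_days)

    now = hyperbolic_value(10, delay_days=0)      # 10.00: $10 today keeps its full value
    later = hyperbolic_value(50, delay_days=365)  # ~6.02: $50 in a year feels much smaller
    print(f"$10 now -> {now:.2f}, $50 in a year -> {later:.2f}")
    # With this k, the immediate $10 wins, mirroring the choice described above.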

  • Planning Fallacy

    Kahneman & Tversky (1979) found that people have a tendency to underestimate durations of tasks. This finding is critical to project management, because projects and temporary organizations are made up of a series of tasks (Lundin & Söderholm, 1995), and rely on the completion of those tasks in order to deliver an outcome within a specific period of time. The planning fallacy can often take the form of optimism bias that influences unrealistic project planning (Peetz, Buehler, & Wilson, 2010).

    It should be noted that many elements can contribute to the planning fallacy, such as optimism bias, the overconfidence effect, deliberate ignorance (also known as the ostrich effect), and the anchoring effect, to name a few. While we could say much more about the planning fallacy here, you will find that much of the NeuralPlan training is about solving it. There are many contributors to the planning fallacy, from the cognitive moderators to almost all of the cognitive biases. The important thing to remember is that an optimistic plan output does not necessarily mean optimism bias.

  • Regret Aversion

    The earliest foundation for regret theory is the minimax principle described by Savage (1954), which prescribes that one should select the option that minimizes one’s maximum regret. Later on, Bell (1982) and Loomes and Sugden (1982) incorporated regret into a theory of choice. Regret is related to counterfactual thoughts about “what could have been” (Van Dijk & Zeelenberg, 2005) and can be defined as “a more or less painful cognitive and emotional state of feeling sorry for misfortunes, limitations, losses, transgressions, shortcomings or mistakes” (Landman, 1993, p. 36).

    People make decisions that shield themselves from the possibility of regret (Van Dijk & Zeelenberg, 2007). To experience regret, the current condition is compared with what would have been if one had decided differently. If the choice is better than the other outcomes, people will rejoice; when a different choice would have led to a better outcome, people will experience regret (Kahneman & Miller, 1986). Indeed, the comparison is key in regret theory (D. E. Bell, 1982; Loomes & Sugden, 1982). This has been confirmed by neuroimaging studies: the brain region that lights up after having made a poor choice lights up as well before making the actual choice (Coricelli et al., 2005).

    It has been argued that individuals are motivated to avoid regret because it calls into question whether they have made competent decisions (Josephs, Larrick, Steele, & Nisbett, 1992; R.P. Larrick, 1993). In general, people wish to avoid any negative feelings associated with regret and therefore choose the option associated with minimizing regret. Consequently, regret is most likely to influence risky decisions if feedback of their choice is present (Josephs et al., 1992; R. P. Larrick & Boles, 1995; Ritov, 1996; Zeelenberg, Beattie, van der Pligt, & de Vries, 1996). Indeed, the key to the anticipation of regret is the presence of feedback regarding the alternatives that were not chosen (D. E. Bell, 1983; Josephs et al., 1992; R.P. Larrick, 1993; R. P. Larrick & Boles, 1995).

    If, for instance, a person is asked to choose between a sure gain of $90 and a coin toss paying $200 for heads and $0 for tails, there will be no knowledge of what could have been if the person chooses the sure gain, and thus no possibility for regret aversion to influence the decision (unless the coin is tossed anyway). If, however, the coin toss is chosen and the person gets tails ($0), the person knows that choosing the sure gain of $90 would have been better. Accordingly, if the decision-maker wishes to avoid regret, the best alternative is to choose the sure gain and, thus, the risk-averse option (R. P. Larrick & Boles, 1995). In other words, whether or not a person makes risk-averse decisions depends on whether one expects to receive feedback on the foregone alternatives.
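
    The arithmetic behind this example can be sketched as follows. This is our own simplified reading, not Bell's or Loomes and Sugden's full model: anticipated regret is taken as the expected shortfall relative to the foregone alternative, counted only when that alternative's outcome will actually be revealed.

    sure_gain = 90
    heads, tails, p = 200, 0, 0.5

    # Pure expected value favors the gamble: 0.5 * 200 + 0.5 * 0 = 100 > 90.
    expected_gamble = p * heads + (1 - p) * tails

    # Choose the gamble: the coin is tossed, so the outcome is always revealed.
    # Regret arises on tails, when the foregone sure $90 beats the $0 obtained.
    regret_if_gamble = p * max(sure_gain - heads, 0) + (1 - p) * max(sure_gain - tails, 0)  # 45

    # Choose the sure gain and the coin is never tossed: nothing is revealed,
    # so anticipated regret is zero, which is why regret-averse choosers take the $90.
    regret_if_sure_no_toss = 0.0

    print(expected_gamble, regret_if_gamble, regret_if_sure_no_toss)  # 100.0 45.0 0.0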

    When people avoid feedback on foregone options (by choosing a sure gain), they minimize their chance of regret in the short term, yet they also miss a chance on learning from their decisions in the long term. This is called myopic regret avoidance (Reb & Connolly, 2009). Myopic regret avoidance is associated with outcome regret avoidance, i.e., avoiding feedback with regard to immediate outcomes. Regret aversion can lead to paying a “regret premium,” or the utility that one is willing to give up in order to avoid future regrets (D. E. Bell, 1983). Indeed, people have been found to forego a direct gain if this prevents them from experiencing regret later on (e.g., Van de Ven & Zeelenberg, 2011).

    Regret aversion has been shown to influence a wide range of decisions (Zeelenberg & Pieters, 2007). For instance, it has been found to influence cooperation in negotiation situations (R. P. Larrick & Boles, 1995), lottery participation (Zeelenberg & Pieters, 2004a), insurance buying (Hetts, Boninger, Armor, Gleicher, & Nathanson, 2000), the reluctance to exchange lottery tickets (Bar-Hillel & Neter, 1996; Van de Ven & Zeelenberg, 2011), immunization decisions (Wroe, Turner, & Salkovskis, 2004), and a wide range of laboratory gambles (e.g., Zeelenberg, et al., 1996).

    Neuroimaging studies have been done (Coricelli et al., 2005) that identify areas of the brain associated with regret aversion (see example here: https://pure.mpg.de/rest/items/item_2615172/component/file_2622686/content)

  • Scarcity

    When a resource, an object, or time is not readily available (e.g., due to limited quantity), we tend to perceive it as more valuable (Cialdini, 2008). Scarcity is often used in marketing to get people to buy. Appeals that indicate limited quantity are thought to be more effective than limited-time appeals because they create a sense of competition among consumers (Aggarwal et al., 2011).

    An experiment (Lee & Seidle, 2012) using wristwatch advertisements exposed participants to one of two product descriptions: “Exclusive limited edition. Hurry, limited stocks” or “New edition. Many items in stock”. The participants then had to indicate how much they would be willing to pay for the watch. On average, consumers were willing to pay an additional 50% if the watch was advertised as scarce.

    Scarcity can be used as a strategy by practitioners to nudge people who put off decisions (myopic procrastinators) to act (Johnson et al., 2012). Scarcity may have very large impacts on project prediction, especially in the domain of time scarcity. As the project gets closer and closer to its deadline, not only does time pressure go up, but so does the anxiety associated with time being a depleting resource.

  • Status Quo Bias

    We like things the way they are and are reluctant to change. This is called status quo bias: we prefer things to stay the same by not undertaking any action (closely related to inertia). It can also mean that we have made a decision and refuse to change it (Samuelson & Zeckhauser, 1988), despite the importance of the decision and the potential of a changed decision to lead to a better outcome. Status quo bias is closely related to loss aversion (see Prospect Theory), precommitment (see choice architecture), the sunk cost fallacy (discussed below), cognitive dissonance, regret avoidance (see above), and feelings of control.

    While changing ideas may lead to a better outcome, it’s cognitively effortful and it is often considered ‘safer’ to just 'stick to your guns' and remain with the status quo. This is especially true given that we suffer from bounded rationality in our reasoning, scarcity, difficulty in information processing, etc. The effect of status quo bias can be enhanced in cases of choice overload (Dean et al., 2017) or high uncertainty and deliberation costs (Nebel, 2015).

  • Sunk Cost Fallacy

    We have all experienced the sunk cost fallacy at one point or another. It is highly probable that you experienced it with your first car. It is often second-hand, old, and barely running. Every few months additional repair costs come up. But at what point do you stop investing in your car and consider it a loss?

    We tend to keep investing in something in which we have already involved ourselves financially. If we stop, we feel as if these costs become losses, and we are generally loss averse. It can also be the result of status quo bias or an ongoing commitment. The sunk cost fallacy can refer to invested time, money, and even effort (Arkes & Blumer, 1985).

    The sunk cost fallacy can also have an impact on project decisions regarding invested resources. The bias can be especially impactful to failing projects, where stakeholders refuse to pull out of a doomed project because of the investment they have already put into it. Sunk costs should never be the basis for project decisions.