By Saransh Sharma

SELF NUDGING: BIAS BLINDSPOT, NUDGING & LIBERTARIAN PATERNALISM


Biases occur when individuals deviate from rational actions and decisions in predictable patterns (Croskerry et al., 2013). While random deviations from rationality would cancel each other out, biases push judgments in consistent, predictable directions across individuals. They therefore cannot be dismissed as mere noise or glitches; they are better understood as permanent features of human behavior.

Seen another way, a bias is simply a tendency, implicit or explicit, to reach or sustain a particular kind of conclusion or outcome. This interpretation accommodates the idea that biases can skew a decision in ways that make its outcomes sub-optimal or otherwise wrong, but it also leaves open the prospect that biases play a role in sound reasoning processes and morally sound judgments or attitudes (Kenyon & Beaulac, 2014).

Biases may be either innate or learned. Innate biases are hardwired into our brain networks and cognitive processes, or they are regulated by our emotions. Innate biases are much like optical illusions - even once you recognize the illusion, you don't stop seeing it. Learned biases are acquired either through conditioning, wherein certain patterns are embedded in our cognition or behavior through deliberate repetition, or implicitly from our environment without our conscious awareness (Croskerry et al., 2013). Stereotyping is an example of a learned bias, and such biases can be unlearned, although that is easier said than done, as we will see later.

Now, the question arises: why do these biases exist, and why did evolution favor certain deviations from rationality?

Firstly, these biases and heuristics act as simple, effective shortcuts or rules of thumb that saved our ancestors precious time and cognitive resources in dealing with situations and problems that were either recurrent or critical for survival during our evolutionary history. These shortcuts may be constrained and sometimes imprecise, but they were nevertheless well adapted to our ancestors' unique needs and survival challenges.

Next, there's the argument that certain kinds of errors were less costly over our evolutionary history, i.e., they did not significantly harm our ancestors' survival - being overly wary of parasites, for example. Such errors faced weaker selection pressure, producing biases in the direction of the less costly error.

And lastly, some researchers suggest that biases show up because of flawed research strategies, wherein people are given problems in unnatural formats or are evaluated against questionable modern, idealized standards of behavior and rationality. On this view, these tendencies are well-tailored solutions to important and persistent natural problems - solutions that may sometimes perform at their worst in the peculiar modern world (Haselton et al., 2009).

Now, having said that, cognitive biases do sometimes lead to distorted perceptions, inaccurate judgments, illogical interpretations, or what researchers broadly refer to as irrationality. These adverse impacts are amplified by the fact that many institutions and systems are designed assuming rational decision-making agents. For example, the healthcare system is designed assuming that patients will act rationally, i.e., that they will consult healthcare providers promptly, provide complete and accurate information about their symptoms and history, and adhere to medical advice. But as we saw during the COVID-19 pandemic, people do not always act rationally, even when their health is at stake.

Also, biases and irrational behavior are often exploited to manipulate people. Politicians use them to influence voting behavior, brands use them to drive impulsive purchases, and social media platforms use them to get us hooked.

And finally, biases may lead to social problems such as prejudice, discrimination, conflict, and selfish acts that lead to the tragedy of the commons.

Next, let's look at some strategies for managing our biases so that their negative impacts can be minimized.

When we think of managing biases, it is natural to assume that we cannot always control or change the situations in which biased judgments occur, and that a more feasible approach is therefore to provide people with some combination of training, knowledge, and experience to help them overcome their limitations and errors. This approach involves education about biases as well as thinking strategies, rules of thumb, habits, and skills that facilitate adopting general reasoning and decision-making principles. Let's look at some of these techniques.

First, people can be taught appropriate rules and principles for rational decision making, such as a) generating an appropriate number of alternatives before deciding, b) tempering optimism by considering the opposite or articulating reasons why an initial assessment might be wrong, c) consulting others to accumulate diverse viewpoints, and d) assessing uncertainty by identifying known and unknown variables.

Next, using simple numerical models based on ranking, scoring, and weighting can reduce the opportunity for bias to creep into a decision and degrade its quality (Soll et al., 2014).
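
To make this concrete, here is a minimal sketch of a weighted scoring model in Python. The criteria, weights, and scores are illustrative assumptions, not taken from Soll et al.; the point is that committing to explicit weights before comparing options leaves less room for bias to tilt the ranking.

```python
# A minimal sketch of a weighted scoring model (illustrative values).
# Each alternative is scored 0-10 on each criterion; criterion weights
# are fixed in advance and sum to 1.

CRITERIA_WEIGHTS = {"cost": 0.4, "quality": 0.4, "convenience": 0.2}

def weighted_score(scores):
    """Combine per-criterion scores into a single weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

alternatives = {
    "Option A": {"cost": 7, "quality": 8, "convenience": 5},
    "Option B": {"cost": 9, "quality": 6, "convenience": 8},
}

# Rank by the model's total rather than by gut feel: the weights were
# committed to before seeing how the options compare.
for name in sorted(alternatives, key=lambda n: weighted_score(alternatives[n]), reverse=True):
    print(f"{name}: {weighted_score(alternatives[name]):.1f}")
```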

Another promising debiasing technique being explored by researchers is decision-making simulations and games that provide practice and personalized feedback for managing biases. Studies have found that the effects of such training are domain-general: bias reduction occurs across problems in different contexts (Morewedge et al., 2015).

But there are some major shortcomings of the education and training approach.

Firstly, in many aspects of behavior, we observe gaps between our intentions and our actions. This gap can be understood in terms of the difference between liking and wanting. Liking is a preference or an attitude, while wanting is a visceral, motivational drive that triggers directed action. Environmental cues can lead to sudden peaks of 'wanting' without changing 'liking', and these surges of motivation lead to impulsive decisions characterized by gaps between 'wanting' and 'liking' (Lades, 2012). For example, we may 'like' to quit smoking, but 'want' to smoke when we see or crave a cigarette. Similarly, we may like to be unbiased, but our motivations and drives at the moment of a biased decision may not allow that.

Next, there is a bias called the bias blind spot that prevents us from seeing our own biases. Studies have found that people consider their own judgments less susceptible to bias than the judgments of others (Ehrlinger et al., 2005). And if we can't identify our biases, we can't hope to manage them.

The first cause of the bias blind spot is people's willingness to take their introspections about the sources of their judgments and decisions at face value - that is, to treat the lack of introspective awareness of having been biased as evidence that one is innocent of such bias. Because many biases work below our conscious awareness and leave no trace of their operation, an introspective search for evidence of bias often turns up empty. When we fail to detect such evidence, we conclude that no bias has occurred and that our decision-making process was objective and reasonable. One cannot conduct the same internal search to determine the causes of others' judgments, so we rely on behavioral information when evaluating others, but on introspection when evaluating ourselves (Pronin et al., 2004; West et al., 2012).

The second cause is naive realism, which is the conviction that one sees the world as it is, and not as it is filtered through one's expectations, needs, motives, or cognitive apparatus.

What follows from this conviction are the expectations that "objective and reasonable others" will share one's perceptions and judgments, and the inference that those who do not share one's perceptions and judgments are therefore either uninformed or biased (Pronin, 2007).

Now, the bias blind spot can be problematic in multiple ways. It amplifies interpersonal and intergroup conflicts, as each individual believes that they are being objective while the other is being biased (McPherson Frantz, 2006). It also makes us less receptive to the advice of others (Scopelliti et al., 2015). But most importantly, it means that knowledge of particular biases in human judgment, and the ability to recognize the impact of those biases on others, neither prevents us from succumbing to these biases nor makes us aware of them when they occur (Pronin et al., 2002).

What's more, studies have found that intelligence and cognitive ability do not diminish the bias blind spot. While intelligent individuals may be better able to manage their biases, they also expect themselves to be less biased and more objective than others, and therefore they may not easily recognize situations where bias mitigation is required (Stanovich & West, 2008; West et al., 2012).

So we can see that the bias blind spot and the liking-wanting gap significantly undermine the education-and-training approach to debiasing. This necessitates other ways of mitigating bias. Let's now take a closer look at the approach that seeks to alter the environment to better match people's natural decision-making tendencies, instead of trying to re-educate people to change those tendencies.

There are two very different ways to modify the environment. One general approach is to change something about the situation that spurs people to process information more appropriately. A second approach adapts the environment to people's biases (Soll et al., 2014). Both approaches can be categorized under the broader concept of nudging. Nudging is defined as the careful design of consumers' choice environments in order to steer their behavior in desired directions without compromising their freedom of choice. Nudges are the strategies or interventions put in place for this purpose. They typically work by either harnessing people's cognitive biases or minimizing them (Torma et al., 2017; Schubert, 2015).

Now, the first type of nudge works by appropriately activating our analytical system, which can detect and override biases in situations where they may occur. The key is intervening at the moment the bias occurs or is about to occur.

The analytical system may be engaged by building planned interruptions into choice environments to encourage added reflection, or by requiring decision makers to make an active choice between multiple options rather than simply avoiding a choice and accepting a default. For example, social media platforms ask you to read an article before sharing it, email clients ask whether you have forgotten an attachment, and browsers ask whether you're sure you want to close multiple tabs.
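
To illustrate, here is a minimal sketch in Python of the forgotten-attachment interruption. It is a toy example, not any real mail client's implementation; the hint phrases and function names are assumptions made for illustration.

```python
# A minimal sketch of a "planned interruption" nudge, modeled on email
# clients that warn before sending a message that mentions an attachment
# but has none. Hint phrases and names are illustrative assumptions.

ATTACHMENT_HINTS = ("attached", "attachment", "enclosed")

def should_interrupt(body, attachments):
    """Return True if the message likely needs an attachment it lacks."""
    mentions = any(hint in body.lower() for hint in ATTACHMENT_HINTS)
    return mentions and not attachments

def send_with_nudge(body, attachments):
    if should_interrupt(body, attachments):
        # The interruption engages the analytical system: the sender must
        # actively confirm instead of sending on autopilot.
        answer = input("No file is attached, but the text mentions one. Send anyway? [y/N] ")
        if answer.strip().lower() != "y":
            print("Send cancelled.")
            return
    print("Message sent.")

send_with_nudge("Please find the report attached.", attachments=[])
```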

Another way of engaging the analytical system is framing information so that it is easily understood and linked with the decision maker's goals and interests. For example, instead of leaving consumers to compare the many specifications of a device like a smartphone, information could be provided in terms of the purposes the device is suited for, such as durability, viewing, recording, or browsing (Soll et al., 2014). Highlighting important aspects, giving summaries, and disclosing less obvious information can also improve decision quality (Weßel et al., 2019).

The second type of nudge works by using people's biases to steer their behavior in the desired direction, instead of trying to reduce the bias. These nudges may take multiple forms: incentives, which realign people's motivation; choice architecture, wherein alternatives are presented so that the best option appears more favorable; defaults, which leverage people's decision inertia and implicitly suggest a recommended course of action; social proof, which uses others' decisions to steer people toward a recommended course of action; and prompts such as warnings, reminders, and feedback (Soll et al., 2014; Weßel et al., 2019). These are only a few of the many nudges being deployed across the globe to drive better decisions and desirable behaviors.
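
As one illustration, here is a minimal sketch of a default-based nudge: the recommended option is pre-selected, so inertia (just pressing Enter) yields the suggested behavior while every other option remains freely available. The scenario and option names are invented for illustration.

```python
# A minimal sketch of a default nudge (illustrative options and scenario).
# Pressing Enter accepts the pre-selected recommendation; choosing any
# other option costs only one extra keystroke, so freedom of choice is kept.

def choose_plan(options, default):
    labels = " / ".join(f"[{o}]" if o == default else o for o in options)
    answer = input(f"Choose a plan ({labels}; Enter = default): ").strip()
    return answer if answer in options else default

plan = choose_plan(["green energy", "standard", "budget"], default="green energy")
print(f"Enrolled in: {plan}")
```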

Based on studies and real-world data, it is clear that nudges are more effective than debiasing education and training (Lin et al., 2017). Nonetheless, nudging comes with its own set of limitations and challenges.

Firstly, the question arises: who nudges whom? Since the origin of the concept, the picture has been one of expert nudgers relegating the nudged to a passive role. Those subject to the nudge are understood to be neither self-aware enough to employ the behavioral tool themselves without the intervention, nor sophisticated enough to see through the behavioral strategy and escape it (Tontrup & Sprigman, 2019). Being nudged can be seen as partially outsourcing our agency to our context, which is manipulated by governments and businesses, purportedly in our own interest (Schubert, 2015).

Now, advocates of nudging counter these criticisms by arguing that it is based on the ethically sound principle of libertarian paternalism, wherein people's behavior is altered in a predictable way without forbidding any options or significantly changing their economic incentives. They suggest that to count as a nudge, an intervention must be easy and cheap to avoid (Schubert, 2015).

But other researchers point out three major shortcomings of libertarian paternalism. First, it is not clear how nudgers can detect the choices that individuals judge to be in their own best interests. Second, by discouraging active choice, nudges may reduce individuals' capacity to make critically reflected, autonomous decisions. Finally, over time, nudges may put individuals at risk of 'learning' to be helpless and losing their ability to pursue their own happiness (Binder & Lades, 2013).

Overall, nudges that activate our analytical system are more ethical and more conducive to well-being than nudges that harness our non-conscious biases. Nonetheless, all nudges create some dependency on the nudger and have long-term impacts on autonomy and agency.

Therefore, the final question we must ask is: is there an approach to managing biases that hits the sweet spot of being both effective and ethical? This brings us to our final concept - self-nudging.

Self-nudging is characterized by the nudged becoming the nudgers. It is self-determined, and it rests on the assumption that individuals can learn to understand the environmental factors that challenge their decision quality and self-control, and can apply the same evidence-based principles that nudging uses in the public sphere to their own immediate environments to improve their decision outcomes (Torma et al., 2017; Reijula & Hertwig, 2020). Proponents of self-nudging suggest that knowledge about nudges and how they work can be actively shared with the general public, enabling people to address what they perceive as failures of self-control and decision-making.

Now let's look at some ways in which you can nudge yourself towards better decisions for your well-being:

(1) Use reminders and prompts, like leaving your running shoes by the front door so that when you see them, it's easier to slip them on and go for a run. Or use an app to remind you to stick to healthy habits during the day or to invest money regularly.

(2) Choose a different framing. For example, instead of deciding between the elevator and the staircase based on convenience, reframe the decision through the lens of health promotion.

(3) Reduce the accessibility of things that can harm us by making them less convenient, or, conversely, make it easier to do the things we want to do. For example, change the default settings of your devices and disable notifications from social media apps. Similarly, push the cookies to the back of the cupboard and place the healthy nuts in front of them.

(4) Use social pressure to increase accountability. For example, make a public declaration about your savings goals, and enlist your family, friends, or social media network to help you make good on it.

(5) Use defaults, e.g., setting up automatic monthly contributions to your savings account (Reijula & Hertwig, 2020).

(6) Use commitment devices, which restrict your own choice set and freedoms in situations where you expect self-control issues in the future, e.g., taking only a fixed amount of cash when heading out for a night of partying, or buying an annual gym membership instead of paying per visit (Bryan et al., 2010; Rogers et al., 2014).

REFERENCES

Croskerry, P., Singhal, G., & Mamede, S. (2013). Cognitive debiasing 1: Origins of bias and theory of debiasing. BMJ Quality & Safety, 22(Suppl 2), ii58-ii64.

Stanovich, K. E. (1999). Who is rational?: Studies of individual differences in reasoning. Psychology Press.

Kenyon, T., & Beaulac, G. (2014). Critical thinking education and debiasing. Informal Logic, 34(4), 341-363.

Haselton, M. G., Bryant, G. A., Wilke, A., Frederick, D. A., Galperin, A., Frankenhuis, W. E., & Moore, T. (2009). Adaptive rationality: An evolutionary perspective on cognitive bias. Social Cognition, 27(5), 733-763.

Soll, J. B., Milkman, K. L., & Payne, J. W. (2014). A user's guide to debiasing.

Morewedge, C. K., Yoon, H., Scopelliti, I., Symborski, C. W., Korris, J. H., & Kassam, K. (2015). Debiasing decisions: Improved decision making with a single training intervention. Policy Insights from the Behavioral and Brain Sciences, 2(1), 129-140.

Lades, L. K. (2012). Impulsive consumption and reflexive thought: Nudging ethical consumer behavior. Papers on Economics and Evolution, No. 1203. Max Planck Institute of Economics, Jena.

Ehrlinger, J., Gilovich, T., & Ross, L. (2005). Peering into the bias blind spot: People’s assessments of bias in themselves and others. Personality and Social Psychology Bulletin, 31(5), 680-692.

Pronin, E., Berger, J. A., & Molouki, S. (2007). Alone in a crowd of sheep: Asymmetric perceptions of conformity and their roots in an introspection illusion. Journal of Personality and Social Psychology, 92(4), 585-595.

Pronin, E., Gilovich, T., & Ross, L. (2004). Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review, 111(3), 781.

Pronin, E., Kruger, J., Savitsky, K., & Ross, L. (2001). You don't know me, but I know you: The illusion of asymmetric insight. Journal of Personality and Social Psychology, 81(4), 639.

West, R. F., Meserve, R. J., & Stanovich, K. E. (2012). Cognitive sophistication does not attenuate the bias blind spot. Journal of Personality and Social Psychology, 103(3), 506.

Pronin, E. (2007). Perception and misperception of bias in human judgment. Trends in Cognitive Sciences, 11(1), 37-43.

Pronin, E., Lin, D. Y., & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28(3), 369-381.

Nasie, M., Bar-Tal, D., Pliskin, R., Nahhas, E., & Halperin, E. (2014). Overcoming the barrier of narrative adherence in conflicts through awareness of the psychological bias of naïve realism. Personality and Social Psychology Bulletin, 40(11), 1543-1556.

McPherson Frantz, C. (2006). I AM Being Fair: The Bias Blind Spot as a Stumbling Block to Seeing Both Sides. Basic and Applied Social Psychology, 28(2), 157–167.

Scopelliti, I., Morewedge, C. K., McCormick, E., Min, H. L., Lebrecht, S., & Kassam, K. S. (2015). Bias blind spot: Structure, measurement, and consequences. Management Science, 61(10), 2468-2486.

Stanovich, K. E., & West, R. F. (2008). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94(4), 672.

Torma, G., Aschemann-Witzel, J., & Thøgersen, J. (2017). I nudge myself: Exploring “self-nudging” strategies to drive sustainable consumption behaviour. International Journal of Consumer Studies, 42(1), 141–154.

Weßel, G., Altendorf, E., Schwalm, M., Canpolat, Y., Burghardt, C., & Flemisch, F. (2019). Self-determined nudging: a system concept for human–machine interaction. Cognition, Technology & Work.

Schubert, C. (2015). On the ethics of public nudging: Autonomy and agency. MAGKS Joint Discussion Paper Series in Economics, No. 33-2015. Philipps-University Marburg, School of Business and Economics.

Reijula, S., & Hertwig, R. (2020). Self-nudging and the citizen choice architect. Behavioural Public Policy, 1-31.

Tontrup, S., & Sprigman, C. J. (2019). Experiments on Self-Nudging, Autonomy, and the Prospect of Behavioral-Self-Management. NYU Law and Economics Research Paper, (19-41).

Binder, M., & Lades, L. K. (2013). Autonomy-enhancing paternalism. Papers on Economics and Evolution, No. 1304. Max Planck Institute of Economics, Jena.

Lin, Y., Osman, M., & Ashcroft, R. (2017). Nudge: concept, effectiveness, and ethics. Basic and Applied Social Psychology, 39(6), 293-306.

Max-Planck-Gesellschaft. (2020, May 11). Using self-nudging to make better choices. ScienceDaily.

Bryan, G., Karlan, D. S., & Nelson, S. (2010). Commitment devices. Annual Review of Economics, 2, 671-698.

Rogers, T., Milkman, K. L., & Volpp, K. G. (2014). Commitment devices: Using initiatives to change behavior. JAMA, 311(20), 2065-2066.
