Metacognition is commonly understood as “thinking about thinking,” but it involves not just thinking but the active direction of cognitive processes. Martinez (2006) proposes a “more precise definition: metacognition is the monitoring and control of thought” (p. 696). While cognitive processes are those “invoked to make cognitive progress,” metacognitive strategies monitor that progress (Flavell, 1979, p. 909). John Flavell, the father of the study of metacognition, illustrates this distinction as the difference between simply re-reading a chapter in preparation for an exam and assessing your understanding by asking yourself questions about it (1979, pp. 908-909). Metacognition is active and self-reflective; it involves setting goals, selecting strategies, monitoring progress, evaluating outcomes, and dynamically adjusting your approach as necessary. Increased metacognitive knowledge generally improves performance across tasks (Schraw & Moshman, 1995, p. 354), and it is considered the hallmark of effective learners (e.g., Martinez, 2006). It has also been found to be more predictive of success than aptitude in applications like problem-solving (Swanson, 1990). For all its benefits, metacognition is also a taxing, resource-heavy process that is carried out in parallel with the main task. Likewise, decision-making is a demanding and complex process that humans are not always very good at, and one that involves many of the same skills of analyzing, monitoring, and evaluating. By understanding the mental operations and patterns that underlie metacognition and decision-making, designers can strive to produce products that support rather than hinder these vital but difficult processes, as may be illustrated by the online car-buying website Carvana.
According to Flavell’s initial model, metacognition involves four components: metacognitive knowledge, metacognitive experiences, goals or tasks, and actions or strategies (1979, p. 906). Further, metacognitive knowledge comprises three categories: person (“the nature of yourself and other people as cognitive processors”), task (“the information available to you during a cognitive enterprise” as well as “task demands or goals”), and strategy (both cognitive and metacognitive) knowledge (Flavell, 1979, p. 907). Metacognitive experiences refer to the feelings and reflections an individual is aware of during an information processing task. These experiences can lead to establishing or revising current goals, updating your metacognitive knowledge base through assimilation and accommodation, and activating either cognitive or metacognitive strategies (Flavell, 1979, pp. 908-909). Metacognitive strategies are the techniques involved in cognitive regulation, including planning, monitoring, and evaluating processes (Lai, 2011).
Nelson & Narens (1990) introduced a two-level cognitive process model comprising interconnected meta- and object-levels. The meta-level “contains a dynamic model of the object-level,” and the two act on one another through “control” (meta on object) and “monitoring” (object to meta) during the acquisition, retention, and retrieval of information (Nelson & Narens, 1990). In terms of the learning process, categories of to-be-monitored items outlined by Nelson & Narens include predicted “ease-of-learning” (EOL), “judgments-of-learning” (JOL) or predictions of future test performance, and “feeling-of-knowing” (FOK) or “whether a given currently nonrecallable item is known” and may be retrieved in the future (1990, p. 101).
Schraw & Moshman offer a streamlined conception of metacognition that differentiates between metacognitive knowledge and active metacognitive control processes (1995, p. 352). According to Schraw & Moshman, knowledge may be of three types: declarative knowledge of one’s self as a learner (Flavell’s person), procedural or “how to” knowledge of skills, and conditional knowledge of when and why to apply various cognitive actions (1995, pp. 352-353). Metacognitive control processes, or regulation, involve a wide range of activities, but these can generally be classed into three essential skills: planning, monitoring, and evaluation (Schraw & Moshman, 1995, p. 354; Lai, 2011, p. 7). Planning involves actions like making predictions and allocating time or attention; monitoring is the ongoing awareness and assessment of task performance; and evaluation consists of appraising both the final outcome and the effectiveness of the process used to arrive at it. Metacognitive knowledge is not necessarily explicit: many people are not able to describe their cognitive processes (Schraw & Moshman, 1995, p. 354), and certain processes are often highly automated among adults (p. 356), though there is some debate over whether metacognition should refer strictly to conscious processes (e.g., Efklides, 2008).
Models of Decision-Making
Traditional models of decision-making were based on expected utility and predicated on the assumption of rationality. The basic model of rational decision-making is a compensatory one of holistic evaluation and trade-offs. This process involves identifying all of the attributes that impact the desirability of an alternative and assigning them relative importance, “computing an overall value for each option based on the impact of attribute and relative weight, and selecting the option with the best value” (Straub, 2013). This is a labor-intensive and cognitively demanding process. In practice, humans do not always behave like purely rational “economic agents” (Econs), and in fact have proven to be highly suggestible and often irrational (Thaler, Sunstein & Balz, 2013, p. 429). Expected utility is therefore insufficient to explain human decision-making, prompting newer descriptive models that more closely reflect observed behaviors (Tversky & Kahneman, 1992, p. 298), encompassing the use of non-compensatory strategies and heuristics as shortcuts.
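The compensatory model described above amounts to a weighted sum over attributes. A minimal sketch follows; the attribute names, weights, and scores are invented purely for illustration:

```python
# Hypothetical sketch of the compensatory ("weighted additive") strategy:
# score each alternative on every attribute, weight scores by attribute
# importance, and select the option with the highest overall value.

def weighted_additive(alternatives, weights):
    """Return the alternative whose weighted attribute scores sum highest."""
    def overall_value(attrs):
        return sum(weights[a] * score for a, score in attrs.items())
    return max(alternatives, key=lambda name: overall_value(alternatives[name]))

weights = {"price": 0.5, "reliability": 0.3, "style": 0.2}
cars = {
    "sedan A": {"price": 7, "reliability": 9, "style": 5},   # value 7.2
    "coupe B": {"price": 5, "reliability": 6, "style": 9},   # value 6.1
    "hatch C": {"price": 9, "reliability": 7, "style": 4},   # value 7.4
}
best = weighted_additive(cars, weights)  # → "hatch C"
```

Note that every attribute of every alternative must be scored before any choice can be made, which is precisely what makes the strategy so demanding.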
When the decision involves a large number of alternatives that makes individual evaluation impractical, a strategy called elimination by aspects (EBA; Tversky, 1972) may be used. In this model, each alternative can be seen as a set of characteristics, or aspects. Particular aspects are selected sequentially, likely but not necessarily according to their importance or weight, and all alternatives that do not include that aspect are discarded. According to Tversky, this process continues until a single alternative remains, though others (Thaler, Sunstein & Balz, 2013, p. 436) have suggested that when the pool of alternatives becomes small enough, a switch to compensatory evaluation may be made. This strategy reduces time and effort but may discard a potentially optimal alternative if it fails to meet an early criterion, even if other aspects make up for that deficit.
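The sequential elimination described here can be sketched as a short loop; the cars and aspects below are invented for the example, and aspect order stands in for Tversky’s (probabilistic, weight-proportional) aspect selection:

```python
# A sketch of elimination by aspects (EBA): aspects are considered in order
# of importance, alternatives lacking the current aspect are discarded, and
# the process stops once a single alternative remains.

def eliminate_by_aspects(alternatives, ordered_aspects):
    remaining = dict(alternatives)
    for aspect in ordered_aspects:
        survivors = {name: aspects for name, aspects in remaining.items()
                     if aspect in aspects}
        if survivors:          # an aspect that nothing satisfies is skipped
            remaining = survivors
        if len(remaining) == 1:
            break
    return list(remaining)

cars = {
    "sedan A": {"under $15k", "automatic", "sunroof"},
    "coupe B": {"under $15k", "manual"},
    "hatch C": {"under $15k", "automatic"},
}
# Price first, then transmission, then sunroof.
result = eliminate_by_aspects(cars, ["under $15k", "automatic", "sunroof"])
# → ["sedan A"]
```

The non-compensatory risk is visible in the sketch: an alternative dropped in an early round never gets credit for strengths on later aspects.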
How likely someone is to use exhaustive rational evaluation of alternatives versus various shortcuts depends on individual differences. Simon (1972) first introduced the concept of “satisficing,” a willingness to end the search process when a “good-enough” alternative is found. This can be contrasted with “maximizers,” who are more likely to seek out all possible alternatives, to hold high standards aimed at selecting the “best,” and to experience decision difficulty (Nenkov et al., 2008). Although maximizers do tend to make objectively better choices, they are typically less satisfied with their decisions than satisficers (Schwartz et al., 2002; Iyengar, Wells & Schwartz, 2006), although recent work by Kim & Miller (2017) links dissatisfaction specifically to decision difficulty and suggests that it be considered a separate profile.
Another descriptive model is Tversky & Kahneman’s Prospect Theory (1979; 1992), which provides insight into how uncertainty affects decisions. The theory incorporates the tendencies of people to focus on gains or losses instead of objective value and to stray from a strict statistical evaluation, in which they overweight small probabilities and underweight high probabilities (Tversky & Kahneman, 1992, p. 316). This understanding of human choices as “orderly, although not always rational in the traditional sense” (1992, p. 317) led them to work codifying a range of heuristics that describe human tendencies in decision-making. This work is grounded in an understanding of a two-level system of cognitive processing, where “System 1” works automatically, intuitively, and subconsciously, while “System 2” is what is commonly understood as the rational self, which Kahneman describes as “thinking fast and slow” (2011). System 1 supplies rapid judgments that are either accepted or, less frequently, questioned and reassessed more laboriously by System 2 (Kahneman, 2003, 2011). Typically, as a function of System 1, “heuristics are highly economical and usually effective, but they lead to systematic and predictable errors,” referred to as biases (Tversky & Kahneman, 1974, p. 1131).
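These two departures from expected utility can be made concrete with prospect theory’s value and probability-weighting functions. The sketch below uses the median parameter estimates Tversky & Kahneman (1992) report for their functional forms; the function names are illustrative:

```python
# Cumulative prospect theory's value and weighting functions, using the
# median parameter estimates from Tversky & Kahneman (1992):
# alpha = beta = 0.88, lambda = 2.25, gamma = 0.61 (gains).

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Concave for gains, convex and steeper for losses (loss aversion).
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    # Inverse-S curve: overweights small probabilities,
    # underweights moderate-to-high ones.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A $100 loss looms larger than a $100 gain:
assert abs(value(-100)) > value(100)
# A 1% chance is overweighted; a 99% chance is underweighted:
assert weight(0.01) > 0.01 and weight(0.99) < 0.99
```

Together these functions reproduce the “orderly, although not always rational” pattern: preferences are systematic, just not linear in money or probability.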
Heuristics & Biases
How a choice is presented can shape responses significantly. “Framing effects” refer to the observed trend that when choosing between risky prospects, “individuals tend to prefer risk-averse alternatives when the outcomes are framed in terms of gains, but shift to preferring risk-seeking alternatives when the equivalent outcomes are framed in terms of losses” (Druckman, 2001, p. 62), meaning that people will often reverse their preferences in identical scenarios depending on how they are described. The power of framing effects is a factor in what Thaler, Sunstein & Balz (2013) refer to as “choice architecture,” which draws attention to the weight that display decisions carry in terms of the resulting decisions people make.
To enumerate but a few of the many other identified heuristics: according to Kahneman (2011), substitution occurs when a much simpler question is substituted for a difficult one, often without the thinker noticing — for example, when an assessment of a candidate’s fitness for office, a complex and uncertain System 2 evaluation, is really carried out as an assessment of liking, an immediate System 1 response. The availability heuristic depends on “the ease with which relevant instances come to mind,” which can lead to errors when judging frequency and probability (Tversky & Kahneman, 1973, p. 202). Lastly, anchoring describes how judgments are affected by the suggestion of a baseline, even in cases where it is clearly identified as an entirely unrelated number. The anchor will be used as a starting point, and bias occurs because adjustments tend to be insufficient (Tversky & Kahneman, 1974).
Time plays a role in decision-making on several levels. A general shift in perspective can be observed moving from before, through during, to after a decision is made, in which the evaluation criteria change from ideal “desirability” at a temporal distance (before/after) to more concrete “feasibility” in the moment, as the salience of the costs and benefits changes (Ariely & Zakay, 2001, p. 191). Time is also the medium in which decisions take place and a characteristic of those decisions, which may be static, isolated one-time events, or dynamic, ongoing processes (Ariely & Zakay, 2001, p. 194). Lastly, and perhaps most importantly, time pressure can significantly alter the decision-making process. “Time stress” generally leads to less information being considered, increased use of non-compensatory choice strategies, and increased likelihood of poor judgment, though these effects vary by individual; in some cases moderate time stress can increase decisiveness and motivation (Ariely & Zakay, 2001, p. 197).
Emotional state and attitudes are another significant factor in the decision-making process. Even mild and fleeting emotions are likely to have an influence, one that can outlive the emotional experience itself: in dynamic decisions, a series of choices is shaped by previous determinations, and an emotion-driven choice can carry further into the future, both indirectly through self-reinforcement and directly, as a model for the current decision (Andrade & Ariely, 2009).
Good design should reflect and accommodate the demands of metacognition and the predictable biases of decision-making. Explicit debiasing information and training has shown limited effectiveness (Milkman, Chugh, & Bazerman, 2009). A more promising approach is to instead focus on providing cues that shift people from System 1 to System 2 where appropriate; simply asking people to “consider the opposite” can have this effect (Milkman, Chugh, & Bazerman, 2009, p. 381). Another alternative is simply designing the information environment to leverage System 1 impulses instead, eliminating the need to fight people’s natural instincts altogether when possible. This is the approach of Thaler & Sunstein’s “nudges,” which is a pseudo-acronym for their six principles of good choice architecture: provide appropriate iNcentives, help people Understand the mapping between a choice and its effect on their welfare, set good Defaults, Give feedback, Expect error, and Structure complex choices to promote effective comparison and selection (Thaler, Sunstein & Balz, 2013).
Like many eCommerce experiences, car buying through Carvana involves a complex decision-making process, in this case with significant financial and quality-of-life implications, which may be supported or impeded by various aspects of the site design. Carvana’s basic model is an EBA process. When users browse for a vehicle, they are prompted to select two initial aspects, bodystyle and price range, for the first round of elimination (fig. 1). This provides good guidance to the consumer because these aspects are generally the most important, and so should be given the most weight, and are more likely to be non-negotiable, meaning that the risk of eliminating the alternative deemed “best” in a compensatory evaluation is low. On the other hand, the two additional filters displayed when the user selects “More Filters,” individual year and color, introduce additional risk of eliminating suitable options (fig. 2). Unless the consumer is dead set on a specific year or color, they will be best served by leaving these options unselected, but this information is not clearly communicated by the interface. Though it is initially hidden, it would perhaps be best to eliminate these additional filters from the start page altogether.
The search results page introduces additional options (fig. 3). In general, the wealth of detail coupled with selective display serves to support both satisficers and maximizers. Satisficers may make use of the most common and prominently displayed selection criteria, either filtering down their results to a manageable number or at any point using the “Sort By” options to surface a number of “good enough” options according to their highest priority (e.g., price, year). For maximizers, there is a high level of detail available, presented in an organized way. One of the concerns that drive maximizers is identifying all of the potentially relevant attributes; for them, a system like this offers an advantage over the traditional car-buying experience because the full complement of possible attributes is provided, without relying on the user to supply and remember them all.
The various filters also assist with the process of selecting attributes and updating selections accordingly, but there is no means to assign relative weight to different attributes. In fact, the biggest drawback of this interface is the lack of any comparison tool for looking at the details of more than one car on a single screen. Choosing among several options simultaneously rather than considering options individually is more likely to activate System 2 reasoning and lead to better outcomes (Milkman, Chugh, & Bazerman, 2009, p. 381). Without System 2 activation, individuals are more likely to substitute easier questions, such as “do I like it?” instead of “is this the best investment?”, and to be swayed by emotional responses, a common pitfall in car buying.
Responsible choice architecture is especially crucial in financial decisions. Fig. 4 shows an interactive feature for exploring financing options. Here, Thaler & Sunstein’s guideline of setting helpful defaults is violated. Carvana leverages temporal bias by setting the default term to the maximum of 72 months to achieve an attractively low monthly payment. Since people tend to discount future values, the difference between longer and shorter terms is less salient, leading them to prefer a lower payment to paying less interest over the life of the loan. Additionally, anchoring bias may also lower the likelihood of choosing a shorter term, since the initial amount (the lowest possible) will be used as a point of comparison, making other options less attractive. The salience of the various pieces of information also fails to correspond to their relative importance; APR and total price are the more important pieces of information yet are displayed the smallest, while the potentially misleading monthly payment is the largest. A better display is found in their explanation of the APR, which effectively uses mapping to convert a relatively inscrutable difference of a few tenths of a percentage point into the much more comprehensible dollar values per month and per term (fig. 5).
Although human decision-making processes have evolved to be efficient and accurate, we are not purely rational agents. Humans err in decision-making in consistent and predictable ways, so the onus is on designers and other presenters of information to be cognizant of this fact and to leverage our understanding of these tendencies to provide users with the best chance for successful outcomes. As seen in the example of Carvana, small choices may have large effects on people’s actions, so particularly when welfare and business outcomes may be at odds, careful consideration and ethical evaluation are called for.
Andrade, E. B., & Ariely, D. (2009). The enduring impact of transient emotions on decision making. Organizational Behavior and Human Decision Processes, 109(1), 1-8.
Ariely, D., & Zakay, D. (2001). A timely account of the role of duration in decision making. Acta Psychologica, 108(2), 187-207.
Efklides, A. (2008). Metacognition: Defining its facets and levels of functioning in relation to self-regulation and co-regulation. European Psychologist, 13(4), 277-287. doi:10.1027/1016-9040.13.4.277
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906-911. doi:10.1037/0003-066X.34.10.906
Iyengar, S. S., Wells, R. E., & Schwartz, B. (2006). Doing better but feeling worse: Looking for the “best” job undermines satisfaction. Psychological Science, 17(2), 143-150. https://doi.org/10.1111/j.1467-9280.2006.01677.x
Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.
Kim, K., & Miller, E. (2017). Vulnerable maximizers: The role of decision difficulty. Judgment and Decision Making, 12(5), 516-526.
Lai, E. R. (2011). Metacognition: A literature review. Retrieved from Pearson Assessments: https://images.pearsonassessments.com/images/tmrs/Metacognition_Literature_Review_Final.pdf
Martinez, M. E. (2006). What is metacognition? Phi Delta Kappan, 87(9), 696-699.
Milkman, K. L., Chugh, D., & Bazerman, M. H. (2009). How can decision making be improved? Perspectives on Psychological Science, 4(4), 379-383.
Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 26). New York, NY: Academic Press.
Nenkov, G. Y., Morrin, M., Schwartz, B., Ward, A., & Hulland, J. (2008). A short form of the Maximization Scale: Factor structure, reliability and validity studies. Judgment and Decision Making, 3(5), 371-388.
Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology Review, 7(4), 351-371.
Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., & Lehman, D. R. (2002). Maximizing versus satisficing: Happiness is a matter of choice. Journal of Personality and Social Psychology, 83(5), 1178-1197. doi:10.1037/0022-3514.83.5.1178
Simon, H. A. (1972). Theories of bounded rationality. In C. B. McGuire & R. Radner (Eds.), Decision and organization (pp. 161-176). Amsterdam: North-Holland.
Straub, K. (2013, October). Decisions, decisions…What’s a poor user (and designer) to do? Human Factors International Newsletter. Retrieved from http://www.humanfactors.com/newsletters/decisions_decisions_what%27s_a_poor_user_and_designer_to_do.asp
Thaler, R. H., Sunstein, C. R., & Balz, J. P. (2013). Choice architecture. In E. Shafir (Ed.), The behavioral foundations of public policy (pp. 428-439). Princeton, NJ: Princeton University Press.
Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79(4), 281-299. doi:10.1037/h0032955
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-291.
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297-323.