Dynamic decision-making

Dynamic decision-making (DDM) is interdependent decision-making that takes place in an environment which changes over time, either as a consequence of the decision maker's previous actions or as a result of events outside the decision maker's control.[1] [2] Unlike simple, conventional one-time decisions, dynamic decisions are typically more complex, occur in real time, and build on one another. DDM research examines the extent to which people are able to use their experience to control a particular complex system, including the types of experience that lead to better decisions over time.[3]

Overview

Dynamic decision-making research uses computer simulations that serve as laboratory analogues for real-life situations. These computer simulations are also called "microworlds"[4] and are used to examine people's behavior in simulated real-world settings in which people typically try to control a complex system and later decisions are affected by earlier ones.[5] Several features differentiate DDM research from more classical forms of decision-making research: decisions are made in a series rather than once, decisions are interdependent, decisions must be made in real time, and the decision environment changes over time, either through the decision maker's own actions or through external events.

The use of microworlds as a tool to investigate DDM not only provides experimental control to DDM researchers but also distinguishes the field from classical, static decision-making research.

Examples of dynamic decision-making situations include managing climate change, factory production and inventory, air traffic control, firefighting, driving a car, and military command and control on a battlefield. Research in DDM has focused on the extent to which decision makers use their experience to control a particular system; the factors that underlie the acquisition and use of that experience; and the types of experience that lead to better decisions in dynamic tasks.

Characteristics of dynamic decision-making environments

The primary characteristics of dynamic decision environments are dynamics, complexity, opaqueness, and dynamic complexity. The dynamics of an environment refers to the dependence of the system's state on its state at an earlier time. Dynamics in a system can be driven by positive feedback (self-amplifying loops) or negative feedback (self-correcting loops); examples are, respectively, the accrual of interest in a savings account and the easing of hunger brought about by eating.
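The two feedback-loop types described above can be sketched in a few lines of code. This is an illustrative sketch only; the parameter values and function names are assumptions, not from the article.

```python
def positive_feedback(balance: float, rate: float, steps: int) -> float:
    """Self-amplifying loop: interest accrual grows the balance,
    and a larger balance earns more interest next period."""
    for _ in range(steps):
        balance += balance * rate
    return balance

def negative_feedback(hunger: float, meal_effect: float, steps: int) -> float:
    """Self-correcting loop: eating reduces hunger, pushing the
    state back toward equilibrium (zero)."""
    for _ in range(steps):
        hunger -= hunger * meal_effect
    return hunger

print(positive_feedback(100.0, 0.05, 10))  # balance grows each step
print(negative_feedback(10.0, 0.5, 10))    # hunger decays toward zero
```

The sign of the loop is what matters: in the positive loop each step amplifies the state, while in the negative loop each step shrinks the gap to equilibrium.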

Complexity largely refers to the number of interacting or interconnected elements within a system, which can make it difficult to predict the system's behavior. The definition is not straightforward, however, because systems can vary in the number of components they contain, the number of relationships between those components, and the nature of those relationships. Complexity may also depend on the decision maker's ability.

Opaqueness refers to the physical invisibility of some aspects of a dynamic system; it may also depend on a decision maker's ability to acquire knowledge of the system's components.

Dynamic complexity refers to the decision maker's ability to control the system using the feedback it provides. Diehl and Sterman[6] have broken dynamic complexity into three components: opaqueness in the system may cause unintended side effects; relationships between system components may be non-linear; and there may be feedback delays between actions taken and their outcomes. Together, these can make a system hard for decision makers to understand and control.

Microworlds in DDM research

A microworld is a complex simulation used in controlled experiments designed to study dynamic decision-making. Research in DDM is mostly laboratory-based and uses computer-simulated microworld tools (i.e., Decision Making Games, or DMGames). Microworlds are also known by other names, including synthetic task environments, high-fidelity simulations, interactive learning environments, virtual environments, and scaled worlds. Microworlds serve as laboratory analogues for real-life situations and help DDM investigators study decision-making by compressing time and space while maintaining experimental control.

DMGames compress the most important elements of the real-world problems they represent and are important tools for collecting data on human decisions. They have helped investigate a variety of factors, such as cognitive ability, the type and timing of feedback, the strategies used while making decisions, and knowledge acquisition during DDM tasks. However, even though DMGames aim to represent the essential elements of real-world systems, they differ from real-world tasks in various respects: stakes are typically higher in real life, and real-world expertise is often acquired over many years rather than the minutes, hours, or days of a DDM task. Thus, DDM differs in many respects from naturalistic decision-making (NDM).

In DDM tasks, people have been shown to perform below optimal levels, where an optimum could be ascertained. For example, in a forest-firefighting simulation game, participants frequently allowed their headquarters to burn down.[7] In similar DDM studies, participants acting as doctors in an emergency room allowed their patients to die while they waited for the results of tests that were actually non-diagnostic.[8] [9] An interesting insight into decisions from experience in DDM is that the learning is mostly implicit: although people's performance improves with repeated trials, they are unable to verbalize the strategy they followed.[10]

Theories of learning in dynamic decision making tasks

Learning is an integral part of DDM research. One of the field's main activities has been to investigate, using microworld simulation tools, the extent to which people are able to learn to control a particular simulated system, and to identify the factors that might explain that learning.

Strategy-Based Learning Theory

One theory of learning relies on the use of strategies or rules of action that relate to a particular task. These rules specify the conditions under which a given strategy applies and take the form: if you recognize situation S, then carry out action/strategy A. For example, Anzai[11] implemented a set of production rules, or strategies, that performed the DDM task of steering a ship through a set of gates; these strategies mimicked the performance of human participants reasonably well. Similarly, Lovett and Anderson[12] have shown how people use if-then production rules in the building-sticks task, an isomorph of Luchins' water-jug problem.[13] [14] The goal in the building-sticks task is to construct a stick of a particular desired length given three stick lengths to build from (there is an unlimited supply of sticks of each length). There are two basic strategies: the undershoot strategy takes sticks shorter than the target and builds up to it, while the overshoot strategy takes a stick longer than the target and cuts off pieces until the target length is reached. Lovett and Anderson arranged matters so that only one strategy would work for a particular problem and gave subjects problem sets in which one of the two strategies worked on the majority of problems (counterbalancing across subjects which strategy was the more successful).
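The if-then rule selection described above can be sketched as code. This is a hypothetical illustration of condition-action rules for the building-sticks task; the stick lengths and the gap-based selection rule are assumptions for exposition, not Lovett and Anderson's actual model.

```python
def choose_strategy(goal: int, sticks: list[int]) -> str:
    """Pick undershoot or overshoot by which starting stick lands
    closer to the goal, mimicking an if-then production-rule match."""
    longest = max(sticks)
    # Undershoot condition: build up from sticks shorter than the goal.
    under_gap = goal - max((s for s in sticks if s < goal), default=0)
    # Overshoot condition: start from a stick longer than the goal and cut down.
    over_gap = longest - goal if longest > goal else float("inf")
    return "undershoot" if under_gap <= over_gap else "overshoot"

print(choose_strategy(goal=11, sticks=[3, 7, 20]))  # → undershoot (gap 4 vs 9)
print(choose_strategy(goal=18, sticks=[3, 7, 20]))  # → overshoot (gap 11 vs 2)
```

The point of the sketch is that each strategy fires only when its condition matches the current situation, which is the defining feature of strategy-based learning accounts.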

Connectionism learning theory

Other researchers have suggested that learning in DDM tasks can be explained by connectionist theory (connectionism). In a connectionist network, the strength, or weighting, of the connections between units depends on previous experience, so the output of a given unit depends on the outputs of the preceding units weighted by the strengths of their connections. As an example, Gibson et al.[15] have shown that a connectionist neural-network model does a good job of explaining human behavior in Berry and Broadbent's sugar-production-factory task.
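A minimal sketch of the connectionist idea can make the mechanism concrete: a unit's output is a weighted sum of its inputs passed through a squashing function, and the weights change with experience. The learning rate, inputs, and the simple delta-rule update below are illustrative assumptions, not the model used by Gibson et al.

```python
import math

def unit_output(inputs: list[float], weights: list[float]) -> float:
    """Output of a unit: sigmoid of the experience-weighted input sum."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-activation))

def update_weights(inputs, weights, target, lr=0.1):
    """Strengthen or weaken connections based on outcome feedback
    (a simple delta rule)."""
    error = target - unit_output(inputs, weights)
    return [w + lr * error * i for i, w in zip(inputs, weights)]

w = [0.0, 0.0]
for _ in range(100):                 # repeated experience with the task
    w = update_weights([1.0, 0.5], w, target=1.0)
print(unit_output([1.0, 0.5], w))    # output has moved toward the target 1.0
```

Repeated trials strengthen the connections that reduce error, which is the sense in which learning in such models is gradual and experience-driven.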

Instance-based learning theory

The Instance-Based Learning Theory (IBLT) is a theory of how humans make decisions in dynamic tasks, developed by Cleotilde Gonzalez, Christian Lebiere, and Javier Lerch.[3] The theory has been extended to two different paradigms of dynamic tasks, called sampling and repeated choice, by Cleotilde Gonzalez and Varun Dutt.[16] Gonzalez and Dutt[16] have shown that in these dynamic tasks IBLT provides the best explanation of human behavior, performing better than many competing models and approaches. According to IBLT, individuals rely on their accumulated experience to make decisions, retrieving past solutions to similar situations stored in memory. Decision accuracy can therefore improve only gradually, through interaction with similar situations.

IBLT assumes that specific instances, experiences, or exemplars are stored in memory.[17] These instances have a concrete structure defined by three distinct parts, abbreviated SDU: the situation (the environmental cues), the decision (the action taken in that situation), and the utility (the expected, and later experienced, outcome of that decision).

In addition to the predefined structure of an instance, IBLT posits a global, high-level decision-making process consisting of five stages: recognition, judgment, choice, execution, and feedback.[16] When faced with a particular situation in the environment, people are likely to retrieve similar instances from memory in order to make a decision. In atypical situations (those unlike anything encountered in the past), retrieval from memory is not possible and people must fall back on a heuristic (one that does not rely on memory). In typical situations, where instances can be retrieved, the utility of the similar instances is evaluated until a necessity level is crossed.[16]

Necessity is typically determined by the decision maker's "aspiration level," similar to Simon and March's satisficing strategy, but it may also be set by external environmental factors such as time constraints (as with emergency-room doctors treating patients under time pressure). Once the necessity level is crossed, the decision corresponding to the instance with the highest utility is made. The outcome of the decision, when received, is then used to update the utility of the instance that produced it (from expected to experienced). This generic decision-making process is assumed to apply to any dynamic decision-making situation in which decisions are made from experience.
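The SDU structure and the retrieve-evaluate-feedback cycle described above can be sketched in code. This is a heavily simplified illustration, assuming a toy similarity measure and a direct utility overwrite; the actual IBLT mechanisms are the ACT-R activation and blending processes, which this sketch does not implement.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    situation: tuple   # S: cues describing the environment
    decision: str      # D: action taken in that situation
    utility: float     # U: expected, later updated to experienced

def similarity(a: tuple, b: tuple) -> float:
    """Toy similarity: fraction of matching cues."""
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

def choose(memory: list[Instance], situation: tuple,
           necessity: float = 0.5) -> Instance:
    """Retrieve instances similar enough to the current situation and
    pick the one with the highest utility."""
    similar = [i for i in memory
               if similarity(i.situation, situation) >= necessity]
    return max(similar, key=lambda i: i.utility)

def feedback(instance: Instance, outcome: float) -> None:
    """Feedback stage: replace expected utility with the experienced one."""
    instance.utility = outcome

memory = [Instance(("fire", "near"), "evacuate", 0.9),
          Instance(("fire", "far"), "monitor", 0.6)]
best = choose(memory, ("fire", "near"))
print(best.decision)      # the most useful similar instance is retrieved
feedback(best, 0.95)      # the outcome updates that instance's utility
```

Because decisions reuse stored instances and feedback refines their utilities, performance in this scheme can improve only gradually, matching the theory's claim.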

The computational representation of IBLT relies on several learning mechanisms proposed by a generic theory of cognition, ACT-R. Many decision tasks implemented within IBLT have been shown to reproduce and explain human behavior accurately.[18] [19]

Research topics in dynamic decision-making

Feedback in dynamic decision-making tasks

Although feedback interventions have been found to benefit performance on DDM tasks, outcome feedback has been shown to work only for tasks that are simple, demand little cognitive ability, and are practiced repeatedly.[20] Consistent with this, IBLT suggests that in DDM situations learning from outcome feedback alone is slow and generally ineffective.[21]

Effects of feedback delays in DDM tasks

The presence of feedback delays in DDM tasks, and participants' misperception of them, contributes to sub-optimal performance.[22] Such delays make it harder for people to understand the relationships governing the system's dynamics, because of the lag between the decision maker's actions and the dynamic system's response.

A familiar example of the effect of feedback delays is the Beer Distribution Game (or Beer Game). The game builds in a time delay between a role placing an order and receiving the ordered cases of beer, and a role that runs out of beer (i.e., cannot satisfy the current customer demand) pays a fine of $1 per case. This can lead people to overstock beer against unanticipated future demand. Contrary to economic theory, which predicts a stable long-run equilibrium, the results show people ordering too much: the delay between placing an order and receiving inventory makes players think that inventory is running out as new customer orders come in, so they react by placing larger orders. Once the inventory builds up and they see the incoming deliveries, they drastically cut future orders, and the simulated beer industry experiences oscillating patterns of over-ordering and under-ordering, that is, costly cycles of boom and bust.
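The boom-and-bust mechanism described above can be reproduced with a very small simulation. The parameters and the ordering rule below are illustrative assumptions, not the actual Beer Game rules: the player orders to close the gap to a target stock while ignoring orders already in the delivery pipeline, which is exactly the misperception of feedback the game demonstrates.

```python
def simulate(steps: int = 20, delay: int = 3,
             demand: int = 4, target: int = 12) -> list[int]:
    """Stock management with a delivery delay and a reactive ordering rule."""
    inventory = 12
    pipeline = [demand] * delay      # orders already placed, not yet delivered
    history = []
    for _ in range(steps):
        inventory += pipeline.pop(0) - demand   # receive oldest order, ship demand
        # Reactive rule: close the gap to the target stock, ignoring
        # what is already on its way (the misperception of feedback).
        order = max(0, target - inventory)
        pipeline.append(order)
        history.append(order)
    return history

orders = simulate()
print(orders)   # orders swing above and below the steady demand of 4
```

With demand constant at 4 per period, a rational steady state would be to order 4 every period; instead the delayed feedback drives the order stream into repeated over- and under-shoots.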

Similar effects of feedback delay have been reported among participants fighting fires in the simulation game NEWFIRE, where the combination of task complexity and the delay between the firefighters' actions and their outcomes frequently led participants to allow their headquarters to burn down.

Effects of proportional thinking in DDM Tasks

Growing evidence in DDM indicates that adults have a robust problem understanding some of the basic building blocks of simple dynamic systems, including stocks, inflows, and outflows. Many adults fail to apply a basic principle of dynamics: a stock (or accumulation) rises when the inflow exceeds the outflow and falls when the inflow is less than the outflow. This problem, termed stock-flow failure (SF failure), persists even in simple tasks with well-motivated participants, familiar contexts, and simplified information displays. The belief that a stock behaves like its flows is a common but incorrect heuristic (the "correlation heuristic") that people often use when judging non-linear systems.[23] The correlation heuristic, or proportional reasoning, is widespread across domains and has been found to be a robust problem in both school children and educated adults (Cronin et al., 2009; Larrick & Soll, 2008; De Bock, 2002; Greer, 1993; Van Dooren et al., 2005; Van Dooren et al., 2006; Verschaffel et al., 1994).
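The stock-flow principle stated above can be written directly in code: the stock changes each period by inflow minus outflow, so it can keep rising even while the inflow is falling, which is exactly the pattern the correlation heuristic gets wrong. The flow values below are illustrative assumptions.

```python
def stock_trajectory(initial: float, inflows, outflows) -> list[float]:
    """Accumulate the stock period by period: stock += inflow - outflow."""
    stock, path = initial, []
    for inflow, outflow in zip(inflows, outflows):
        stock += inflow - outflow   # net flow accumulates into the stock
        path.append(stock)
    return path

# The inflow falls each period but stays above the constant outflow,
# so the stock keeps rising even as the inflow declines.
inflows = [10, 8, 6, 4]
outflows = [3, 3, 3, 3]
print(stock_trajectory(0, inflows, outflows))  # → [7, 12, 15, 16]
```

A reasoner using the correlation heuristic would predict a falling stock because the inflow is falling; the accumulation shows why that inference is wrong.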

Individual Differences in DDM

Individual performance on DDM tasks shows a tremendous amount of variability, which may result from differences in the skills and cognitive abilities of the individuals who perform them. Although individual differences are routinely observed on DDM tasks, there has been debate over whether they arise from differences in cognitive ability. Some studies have failed to find a link between cognitive abilities as measured by intelligence tests and performance on DDM tasks, but later studies contend that this failure was due to the absence of reliable performance measures.[24] [25]

Other studies have suggested a relationship between workload and cognitive abilities.[26] Low-ability participants are generally outperformed by high-ability participants, and under demanding workload conditions they show no improvement in performance in either training or test trials. Evidence also shows that low-ability participants rely more on heuristics, particularly when the task demands faster responses or imposes time pressure, both during training and at test.[27]

DDM in the real world

Alongside DDM research that uses laboratory microworld tools, there has been a recent emphasis on studying decision-making in the real world. This does not discount laboratory research, but reflects the broad conception of the research underlying DDM. In real-world DDM, researchers are more interested in processes such as goal setting, planning, perception and attention, forecasting, comprehension, and attending to feedback. The study of these processes brings DDM research closer to work on situation awareness and expertise.

For example, DDM research has shown that motorists with more than ten years of driving experience respond to hazards faster than drivers with less than three years of experience.[28] Owing to their greater experience, such motorists also search more effectively and efficiently for hazard cues than their less experienced counterparts.[29] One explanation is that situation awareness in DDM tasks makes certain behaviors automatic for experts: the search for environmental cues that could signal hazards may be automatic for experienced motorists, whereas novice motorists, lacking situation awareness, must make a conscious, non-automatic effort to find such cues and so become more prone to hazards by failing to notice them. Similar behavior has been documented for pilots and platoon commanders.[30] A comparison of novice and experienced platoon commanders in a virtual-reality battle simulator showed that greater experience was associated with higher perceptual and comprehension skills. Thus, experience on DDM tasks makes a decision maker more situationally aware, with higher levels of perceptual and comprehension skills.

References

  1. Brehmer, B. (1992). Dynamic decision making: Human control of complex systems. Acta Psychologica, 81(3), 211–241.
  2. Edwards, W. (1962). Dynamic decision theory and probabilistic information processing. Human Factors, 4, 59–73.
  3. Gonzalez, C., Lerch, J. F., & Lebiere, C. (2003). Instance-based learning in dynamic decision making. Cognitive Science, 27(4), 591–635.
  4. Turkle, S. (1984). The second self: Computers and the human spirit. London: Granada.
  5. Gonzalez, C., Vanyukov, P., & Martin, M. K. (2005). The use of microworlds to study dynamic decision making. Computers in Human Behavior, 21(2), 273–286.
  6. Diehl, E., & Sterman, J. D. (1995). Effects of feedback complexity on dynamic decision making. Organizational Behavior and Human Decision Processes, 62(2), 198–215.
  7. Brehmer, B., & Allard, R. (1991). Real-time dynamic decision making: Effects of task complexity and feedback delays. In J. Rasmussen, B. Brehmer & J. Leplat (Eds.), Distributed decision making: Cognitive models for cooperative work. Chichester: Wiley.
  8. Gonzalez, C., & Vrbin, C. (2007). Dynamic simulation of medical diagnosis: Learning in the medical decision making and learning environment MEDIC. In A. Holzinger (Ed.), Usability and HCI for medicine and health care: Third symposium of the workgroup human-computer interaction and usability engineering of the Austrian Computer Society, USAB 2007 (Vol. 4799, pp. 289–302). Germany: Springer.
  9. Kleinmuntz, D., & Thomas, J. (1987). The value of action and inference in dynamic decision making. Organizational Behavior and Human Decision Processes, 62, 63–69.
  10. Berry, D. C., & Broadbent, D. E. (1984). On the relationship between task performance and associated verbalizable knowledge. Quarterly Journal of Experimental Psychology, 36A, 209–231.
  11. Anzai, Y. (1984). Cognitive control of real-time event driven systems. Cognitive Science, 8, 221–254.
  12. Lovett, M. C., & Anderson, J. R. (1996). History of success and current context in problem solving: Combined influences on operator selection. Cognitive Psychology, 31, 168–217.
  13. Luchins, A. S. (1942). Mechanization in problem solving. Psychological Monographs, 54(248).
  14. Luchins, A. S., & Luchins, E. H. (1959). Rigidity of behaviour: A variational approach to the effects of Einstellung. Eugene, OR: University of Oregon Books.
  15. Gibson, F. P., Fichman, M., & Plaut, D. C. (1997). Learning in dynamic decision tasks: Computational model and empirical evidence. Organizational Behavior and Human Decision Processes, 71(1), 1–35.
  16. Gonzalez, C., & Dutt, V. (2011). Instance-based learning: Integrating sampling and repeated decisions from experience. Psychological Review, 118(4), 523-551.
  17. Dienes, Z., & Fahey, R. (1995). Role of specific instances in controlling a dynamic system. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(4), 848–862.
  18. Gonzalez, C., & Lebiere, C. (2005). Instance-based cognitive models of decision making. In D. Zizzo & A. Courakis (Eds.), Transfer of knowledge in economic decision-making. Palgrave Macmillan.
  19. Martin, M. K., Gonzalez, C., & Lebiere, C. (2004). Learning to make decisions in dynamic environments: ACT-R plays the beer game. In M. C. Lovett, C. D. Schunn, C. Lebiere & P. Munro (Eds.), Proceedings of the Sixth International Conference on Cognitive Modeling (Vol. 420, pp. 178–183). Pittsburgh, PA: Carnegie Mellon University/University of Pittsburgh: Lawrence Erlbaum Associates Publishers.
  20. Kluger, A. N. & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.
  21. Gonzalez, C. (2005). Decision support for real-time dynamic decision making tasks. Organizational Behavior and Human Decision Processes, 96, 142–154.
  22. Sterman, J. D. (1989). Misperceptions of feedback in dynamic decision making. Organizational Behavior and Human Decision Processes, 43(3), 301–335.
  23. Cronin, M., Gonzalez, C., & Sterman, J. D. (2009). Why don't well-educated adults understand accumulation? A challenge to researchers, educators and citizens. Organizational Behavior and Human Decision Processes, 108(1), 116–130.
  24. Rigas, G., Carling, E., & Brehmer, B. (2002). Reliability and validity of performance measures in microworlds. Intelligence, 30(5), 463–480.
  25. Gonzalez, C., Thomas, R. P., & Vanyukov, P. (2005). The relationships between cognitive ability and dynamic decision making. Intelligence, 33(2), 169–186.
  26. Gonzalez, C. (2005b). The relationship between task workload and cognitive abilities in dynamic decision making. Human Factors, 47(1), 92–101.
  27. Gonzalez, C. (2004). Learning to make decisions in dynamic environments: Effects of time constraints and cognitive abilities. Human Factors, 46(3), 449–460.
  28. McKenna, F. P., & Crick, J. (1991). Experience and expertise in hazard perception. In G. B. Grayson & J. F. Lester (Eds.), Behavioral Research in Road Safety (pp. 39–45). Crowthorne, UK: Transport and Road Research Laboratory.
  29. Horswill, M. S., & McKenna, F. P. (2004). Drivers' hazard perception ability: Situation awareness on the road. In S. Banbury & S. Tremblay (Eds.), A cognitive approach to situation awareness: Theory and application (pp. 155–175). Aldershot, England: Ashgate.
  30. Endsley, M. R. (2006). Expertise and situation awareness. In K. A. Ericsson, N. Charness, P. J. Feltovich & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 633–651). Cambridge: Cambridge University Press.