In the study of vision, visual short-term memory (VSTM) is one of three broad memory systems, the other two being iconic memory and long-term memory. VSTM is a type of short-term memory, but one limited to information within the visual domain.
The term VSTM refers in a theory-neutral manner to the non-permanent storage of visual information over an extended period of time.[1] The visuospatial sketchpad is a VSTM subcomponent within the theoretical model of working memory proposed by Alan Baddeley, in which working memory is argued to aid in mental tasks such as planning and comparison.[2][3] Whereas iconic memories are fragile, decay rapidly, and cannot be actively maintained, visual short-term memories are robust to subsequent stimuli and last over many seconds. VSTM is distinguished from long-term memory, on the other hand, primarily by its very limited capacity.[4]
The introduction of stimuli which were hard to verbalize, and unlikely to be held in long-term memory, revolutionized the study of VSTM in the early 1970s.[5][6][7] The basic experimental technique required observers to indicate whether two matrices,[6][7] or figures,[5] separated by a short temporal interval, were the same. The finding that observers were able to report that a change had occurred, at levels significantly above chance, indicated that they were able to encode aspects of the first stimulus in a purely visual store, at least for the period until the presentation of the second stimulus. However, as the stimuli used were complex, and the nature of the change relatively uncontrolled, these experiments left various questions open.
Much effort has been dedicated to investigating the capacity limits of VSTM. In a typical change-detection task, observers are presented with two arrays, composed of a number of stimuli. The two arrays are separated by a short temporal interval, and the task of observers is to decide if the first and second arrays are identical, or whether one item differs across the two displays. Performance is critically dependent on the number of items in the array. While performance is generally almost perfect for arrays of one or two items, correct responses invariably decline in a monotonic fashion as more items are added. Different theoretical models have been put forward to explain limits on VSTM storage, and distinguishing between them remains an active area of research.
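The change-detection paradigm described above can be sketched as a small Monte Carlo simulation. The observer here is a hypothetical one who stores a fixed number of items perfectly and guesses otherwise; the capacity of 4 items and all other parameter values are illustrative assumptions, not estimates from any particular study.

```python
import random

def simulate_change_detection(set_size, capacity=4, n_trials=10_000):
    """Estimate accuracy in a simulated change-detection task.

    Hypothetical observer: stores `capacity` randomly chosen items
    perfectly; detects a change only if the changed item was stored.
    """
    correct = 0
    for _ in range(n_trials):
        # Items the observer happens to encode on this trial
        stored = set(random.sample(range(set_size), min(capacity, set_size)))
        change = random.random() < 0.5           # half the trials contain a change
        if change and random.randrange(set_size) in stored:
            response = True                      # mismatch noticed among stored items
        elif set_size <= capacity:
            response = False                     # everything stored, nothing mismatched
        else:
            response = random.random() < 0.5     # no evidence either way: guess
        correct += (response == change)
    return correct / n_trials
```

Running this for increasing set sizes reproduces the qualitative pattern in the text: near-perfect accuracy for one or two items, and a monotonic decline once the array exceeds the assumed capacity.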
A prominent class of model proposes that observers are limited by the total number of items which can be encoded, because the capacity of VSTM itself is limited. This type of model has obvious similarities to urn models used in probability theory. In essence, an urn model assumes that VSTM is restricted in storage capacity to only a few items, k (often estimated to lie in the range of three to five in adults, though fewer in children[8]). The probability that a suprathreshold change will be detected is simply the probability that the change element is encoded in VSTM (i.e., k/N). This capacity limit has been linked to the posterior parietal cortex, the activity of which initially increases with the number of stimuli in the arrays, but saturates at higher set-sizes.[9] Although urn models are commonly used to describe performance limitations in VSTM, it is only recently that the actual structure of the items stored has been considered. Luck and colleagues have reported a series of experiments designed specifically to elucidate the structure of information held in VSTM.[10] This work provides evidence that items stored in VSTM are coherent objects, and not the more elementary features of which those objects are composed.
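The urn model's core prediction is the k/N formula stated above, which can be written down directly. The capacity value k = 4 used below is merely one point in the three-to-five range the text mentions.

```python
def p_change_detected(k, n):
    """Urn-model probability that a suprathreshold change is detected:
    the chance that the changed element is one of the k items held in VSTM."""
    return min(1.0, k / n)

# Illustrative predictions for an assumed capacity of k = 4 items
for n in (1, 2, 4, 8, 12):
    print(n, p_change_detected(4, n))
```

The prediction is flat (perfect detection) up to the capacity limit, then falls hyperbolically, which is the signature used to estimate k from change-detection data.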
An alternative framework has been put forward by Wilken and Ma, who suggest that apparent capacity limitations in VSTM are caused by a monotonic decline in the quality of the internal representations stored (i.e., a monotonic increase in noise) as a function of set-size. In this conception, capacity limitations in memory are not caused by a limit on the number of things that can be encoded, but by a decline in the quality of the representation of each thing as more things are added to memory. In their 2004 experiments, they varied the color, spatial frequency, and orientation of objects stored in VSTM using a signal detection theory approach. The participants were asked to report differences between the visual stimuli presented to them in consecutive order. The investigators found that different stimuli were encoded independently and in parallel, and that the major factor limiting report performance was neuronal noise (which is a function of visual set-size).[11]
Under this framework, the key limiting factor on working memory performance is the precision with which visual information can be stored, not the number of items that can be remembered.[11] Further evidence for this theory was obtained by Bays and Husain using a discrimination task. They showed that, unlike a "slot" model of VSTM, a signal-detection model could account for both discrimination performance in their study and previous results from change-detection tasks. These authors proposed that VSTM is a flexible resource, shared out between elements of a visual scene—items that receive more resource are stored with greater precision. In support of this, they showed that increasing the salience of one item in a memory array led to that item being recalled with increased resolution, but at the cost of reducing the resolution of storage for the other items in the display.[12]
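The flexible-resource idea can be sketched as a fixed precision budget divided among the items in the display. The total budget of 10.0 and the doubling of weight for a salient item are illustrative assumptions, not values fitted to the cited experiments.

```python
def precisions(n_items, total=10.0, salient=None):
    """Flexible-resource sketch: a fixed total precision budget is shared
    among items; weighting one item boosts its precision at the others'
    expense. `total` and the salience weighting are hypothetical values."""
    weights = [1.0] * n_items
    if salient is not None:
        weights[salient] = 2.0       # the salient item draws twice the resource
    s = sum(weights)
    return [total * w / s for w in weights]

even = precisions(4)                 # resource split evenly across 4 items
biased = precisions(4, salient=0)    # item 0 gains precision, items 1-3 lose it
```

Two properties of the text fall out directly: per-item precision declines as items are added (the set-size effect), and boosting one item's share necessarily reduces the precision of the others.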
Psychophysical experiments suggest that information is encoded in VSTM across multiple parallel channels, each channel associated with a particular perceptual attribute.[13] Within this framework, a decrease in an observer's ability to detect a change with increasing set-size can be attributed to two different processes.
However, the Greenlee-Thomas model[14] suffers from two failings as a model for the effects of set-size in VSTM. First, it has only been empirically tested with displays composed of one or two elements. It has been shown repeatedly in various experimental paradigms that set-size effects differ for displays composed of a relatively small number of elements (i.e., 4 items or less), and those associated with larger displays (i.e., more than 4 items). The Greenlee-Thomas model offers no explanation for why this might be so. Second, while Magnussen, Greenlee, and Thomas are able to use this model to predict that greater interference will be found when dual decisions are made within the same perceptual dimension, rather than across different perceptual dimensions, this prediction lacks quantitative rigor, and is unable to accurately anticipate the size of the threshold increase, or give a detailed explanation of its underlying causes.
In addition to the Greenlee-Thomas model, there are two other prominent approaches for describing set-size effects in VSTM. These two approaches can be referred to as sample size models[17] and urn models. They differ from the Greenlee-Thomas model in several respects.
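One common reading of sample size models is that a fixed pool of noisy samples is divided evenly among the items in the display, so that discriminability for any one item falls as the square root of set-size. A minimal sketch of that prediction, with a purely illustrative single-item sensitivity:

```python
from math import sqrt

def dprime(set_size, d_single=3.0):
    """Sample-size-model sketch: each item receives 1/set_size of a fixed
    sample pool, and sensitivity (d') grows with the square root of the
    samples received, so d' falls as 1/sqrt(set_size).
    `d_single` (d' for one item) is a hypothetical value, not a fitted one."""
    return d_single / sqrt(set_size)
```

Unlike the urn model's sharp capacity limit, this predicts a smooth, continuous decline in performance at every set-size.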
There is some evidence of an intermediate visual store with characteristics of both iconic memory and VSTM.[18] This intermediate store is proposed to have high capacity (up to 15 items) and prolonged memory trace duration (up to 4 seconds). It coexists with VSTM, but unlike VSTM, its contents can be overwritten by subsequent visual stimuli.[19] Further studies suggest an involvement of visual area V4 in the retention of information about the color of the stimulus in visual working memory,[20][21] and a role for the VO1 area in retaining information about its shape. It has been shown that in the VO2 region all characteristics of the stimulus retained in memory are combined into a holistic image.
VSTM is thought to be the visual component of the working memory system, and as such it is used as a buffer for temporary information storage during the process of naturally occurring tasks. But what naturally occurring tasks actually require VSTM? Most work on this issue has focused on the role of VSTM in bridging the sensory gaps caused by saccadic eye movements. These sudden shifts of gaze typically occur 2–4 times per second, and vision is briefly suppressed while the eyes are moving. Thus, the visual input consists of a series of spatially shifted snapshots of the overall scene, separated by brief gaps. Over time, a rich and detailed long-term memory representation is constructed from these brief glimpses of the input, and VSTM is thought to bridge the gaps between these glimpses and to allow the relevant portions of one glimpse to be aligned with the relevant portions of the next glimpse. Both spatial and object VSTM systems may play important roles in the integration of information across eye movements. Eye movements are also affected by VSTM representations. The constructed representations held in VSTM can affect eye movements even when the task does not explicitly require eye movements: the direction of small microsaccades points towards the location of objects in VSTM.[22]