Department of Psychology, Humboldt-Universität zu Berlin;
Martin Rolfs
Department of Psychology, Humboldt-Universität zu Berlin
Acknowledgement: We have no known conflict of interest to disclose. All data are publicly available via the Open Science Framework (
Natural environments typically provide a rich spatiotemporal context for visual events. When you are watching a car in traffic, you can refer to this car not only by its model or color, but also by a variety of spatial or temporal properties; for example, you can say that it is the one that passed the traffic lights two seconds ago, the one behind the blue car, the fastest one in view, the one at a 10 o’clock direction, or the last one that entered the lane. Space and time are ubiquitous dimensions that shape how we perceive and describe the world. Here, we were interested in how spatiotemporal context shapes how we remember visual information over short periods of time.
It has long been known that space is a particularly important feature dimension in visual working memory, and that this includes not only the spatial location of a specific object, but also its spatial context. For example, spatial representations are spontaneously created and maintained, even when task-irrelevant (e.g.,
We have recently shown that both spatial and temporal properties are incidentally encoded and functionally relevant, providing reference frames for storage and retrieval (
The task that we used in our previous work (
While, overall, we know surprisingly little about contextual reference frames in visual working memory, studies addressing this issue have almost exclusively focused on spatial configurations. The typical approach to study if the spatial configuration present at encoding is used to support memory has been to remove (parts of) that configuration at retrieval, or to change some or all of the item positions and thereby the overall configuration (e.g.,
To determine if memory is supported by bindings between items and their absolute spatial positions, previous studies used global transformations of the spatial configuration. Global changes affect absolute item positions by expanding, shrinking, or shifting the entire configuration, but not interitem relations. As such changes essentially emulate the consequences of a change in viewer distance or of an eye movement, it would be highly detrimental if visual working memory were not invariant to this type of transformation. In fact, spatial transformations of this kind do not seem to interfere with memory performance (
Spatial relations between items, by contrast, appear to be encoded along with each item to support their maintenance and retrieval. In a number of studies, relative transformations of spatial configurations, which affect both the absolute and relative locations of the individual items (e.g., random scrambling of all item positions), have been shown to impair memory, even when the spatial configuration was task-irrelevant or when there were explicit instructions to ignore any configurational properties (e.g.,
Overall, however, the evidence regarding the importance of relational spatial coding for visual working memory is mixed. There are also a few studies that failed to observe consistent and significant impairments as a result of relational changes of the spatial configuration (
How transformations of the spatial order of items (e.g., clockwise, when items are arranged on a circle) affect memory performance has not been directly tested. It has been shown, though, that configurational representations do not just comprise the spatial layout but also the bindings of individual items (e.g., object identities or surface features like color or shape) to each location: Memory for a given item suffers when its context is made up of the same locations but the identities of the items at those locations have been removed or switched (
For the temporal domain, the critical properties for reference frames in visual working memory are unknown. The temporal structure of visual events is often mainly thought of as temporal order, especially with respect to feature binding—here, serial position might take over the role of spatial location in indexing bound objects (
In sum, previous work indicates that interitem relations, rather than absolute item locations, define spatial reference frames in visual working memory. The goal of this study was to corroborate and extend these findings for the spatial domain, and, most importantly, to determine if the same holds true for the temporal domain.
In a spatiotemporal variant of a color-change detection task, we had participants memorize four colors that were presented at different locations, sequentially and at different interstimulus-intervals (ISIs). As task-relevant color changes always involved a new color that had not been present in the respective trial, the task did not require item colors to be bound to spatial or temporal properties (nor was that, in principle, advantageous for solving the task, because spatial and/or temporal positions were likely to change). As a consequence, participants were instructed to just focus on the colors. Nonetheless, items could be differentiated based on (a) their position in a categorical spatial or temporal order, (b) their spatial or temporal coordinates relative to the other items, or (c) their absolute spatial or temporal coordinates. To determine specifically which of these properties are critical for reference frames in visual working memory, we applied different types of transformations to the spatial structure, the temporal structure, or both spatial and temporal structures of item presentation at retrieval: (a) ordinal transformations (Experiments 1 and 4), (b) relational transformations (Experiments 2 and 5), and (c) global transformations (Experiment 3). Such transformations of the external frame of reference at retrieval should only affect memory performance if the metric of the internal reference frame in visual working memory is not invariant to the specific type of transformation. In all experiments, a “no transformation” condition with intact spatial and temporal structures at retrieval was included to provide a baseline.
In a first experiment, we applied ordinal transformations to the spatial and/or temporal structure: All four items switched spatial locations and/or temporal positions in the sequence at retrieval, so that each item had a different location and/or serial position than it had at encoding. This type of transformation affected the spatial and temporal structures drastically—it changed absolute and relative item positions as well as the order of items in a spatial (e.g., clockwise) or temporal sequence (i.e., order of appearance/serial position).
Participants
Twenty volunteers participated in the experiment for course credit or monetary compensation (8.50€/hour). We determined sample size based on the effect sizes observed in our prior study using a similar paradigm (
Apparatus and Stimuli
The experiment was conducted in a dark, sound-attenuated room. Participants placed their head on a chin and forehead rest to face the monitor (ViewPixx/3D monitor, 24”, 1,920 × 1,080 pixels) at a viewing distance of 53 cm. Stimulus presentation and response collection were controlled using Matlab (Mathworks, Natick, MA) and the Psychophysics Toolbox 3 (
On each trial, four different colors were randomly chosen from a set of seven approximately equiluminant colors (CIE coordinates x/y; luminance): blue (.093/.347; 48.95 cd/m
Procedure and Design
The trial procedure is illustrated in
Transformation conditions of the spatial and temporal structures (intact vs. transformed) were fully crossed, yielding four different conditions: (a) intact spatial and temporal structures, (b) transformed spatial structure and intact temporal structure, (c) intact spatial structure and transformed temporal structure, and (d) transformed spatial and temporal structures. Each participant performed 96 trials for each transformation condition (50% color-change trials, color changes equally likely to occur for each of the four items), yielding 384 trials in total. Transformation conditions varied randomly from trial to trial. Between blocks of 48 trials each as well as in the middle of each block, participants had the opportunity to take a short break. After each block, they received feedback about their performance (percentage of correct responses).
Data Analysis
Our primary measure of interest was the sensitivity to detect a change [d′ = z(hit rate) − z(false alarm rate)], but we additionally analyzed mean reaction time (RT). We corrected hit or false alarm rates of 0 by replacing them with .5/n, and rates of 1 by replacing them with (n − .5)/n (
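As an illustration, the sensitivity measure and the correction for extreme rates can be sketched in Python (a minimal sketch using SciPy's inverse normal CDF; the function name is ours, not taken from the authors' analysis code):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), with extreme
    rates corrected as in the text: rates of 0 become .5/n and rates of
    1 become (n - .5)/n, where n is the number of relevant trials."""
    def corrected_rate(k, n):
        if k == 0:
            return 0.5 / n
        if k == n:
            return (n - 0.5) / n
        return k / n

    hit_rate = corrected_rate(hits, hits + misses)
    fa_rate = corrected_rate(false_alarms, false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)
```

With 48 change and 48 no-change trials per condition, a raw hit rate of 1 and a false-alarm rate of 0 would thus be corrected to 47.5/48 and .5/48, keeping d′ finite.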
Individual measures were submitted to repeated measures analyses of variance (ANOVA) with the factors spatial structure (intact vs. transformed) and temporal structure (intact vs. transformed). To test specifically which transformation type(s) (spatial, temporal, spatial + temporal) memory performance was sensitive to, we then tested performance in each transformation condition against performance in the baseline condition (intact spatial and temporal structures) with planned one-tailed t-tests (not corrected for multiple comparisons). For nonsignificant effects of interest (i.e., when spatial or temporal transformations were found not to significantly affect memory performance), we additionally computed Bayes Factors indicating the evidence in support of the null hypothesis over the alternative hypothesis (BF01) using the default settings of JASP (Version 0.9.1;
Transparency and Openness
We report how we determined our sample size, all data exclusions, all manipulations, and all measures in the study. All data are available via the Open Science Framework (
Ordinal transformations of either the spatial or temporal structure impaired participants’ sensitivity to detect a change (spatial: F(1, 18) = 13.23, p = .002, ηp
These results show that visual working memory is sensitive to ordinal transformations of both spatial as well as temporal configurations, indicating that the ordinal position of an item in a spatial or temporal sequence is encoded to support memory—even when item order is not task-relevant.
In a second step, we manipulated spatial and/or temporal relations at retrieval by multiplying the spatial distances and/or ISIs between all items at encoding by different factors. These relational transformations affected absolute positions and relative interitem distances. The order of items in a spatial or temporal sequence, however, remained the same.
Unless stated otherwise, the methods of Experiment 2 were identical to those of Experiment 1.
Participants
Twenty volunteers (18 women, 2 men; M age: 26 years; age range: 20–34 years) participated in the experiment. Four of them had already participated in Experiment 1.
Apparatus and Stimuli
We predefined six spatial configurations based on 24 locations that were arranged at 23°, 45°, and 67° in each quadrant on two imaginary circles at eccentricities of 4.68 and 5.23 dva. Every configuration consisted of four items, one in each quadrant and at different interitem distances. We then assigned configurations to one of two sets of three configurations each, which all had different relative interitem distances (Euclidean distances; for example, in one configuration, the relative distances between items A to D, designated as A-B, B-C, etc., were as follows: D–A < A–B < B–C < C–D; in another configuration of the same set, the relative distances were: C–D < B–C < D–A < A–B). In trials with a relational transformation of the spatial structure, one of the other two configurations in the same set as the configuration used for the memory array was randomly selected for the test array (see
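For illustration, the 24 candidate locations can be reconstructed as follows (a sketch under the stated geometry; the exact angular convention within each quadrant is our assumption, not specified beyond the text):

```python
import numpy as np

# Candidate locations: polar angles of 23, 45, and 67 degrees within each
# of the four quadrants, on two imaginary circles at eccentricities of
# 4.68 and 5.23 degrees of visual angle (dva).
within_quadrant = np.array([23.0, 45.0, 67.0])           # degrees
quadrant_offsets = np.array([0.0, 90.0, 180.0, 270.0])   # degrees
eccentricities = [4.68, 5.23]                            # dva

locations = [
    (ecc * np.cos(np.deg2rad(off + ang)), ecc * np.sin(np.deg2rad(off + ang)))
    for ecc in eccentricities
    for off in quadrant_offsets
    for ang in within_quadrant
]
# 2 circles x 4 quadrants x 3 angles = 24 candidate (x, y) positions in dva
```

Each configuration then draws one of these locations per quadrant, so that the four items differ in their interitem distances.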
Procedure and Design
The trial procedure and design of Experiment 2 was identical to Experiment 1, except that relational instead of ordinal transformations were applied to the temporal and/or spatial structure of item presentation at retrieval (see Apparatus and Stimuli and
Data Analysis
Using the same criteria as in Experiment 1, RT outliers (2.70% of all trials) were removed from the data.
The pattern observed for relational transformations of either the spatial and/or temporal structure mirrored the pattern observed for ordinal transformations in Experiment 1, although memory decrements were generally less pronounced. Relational transformations of the temporal structure of item presentation at retrieval relative to its structure at encoding reduced the sensitivity to detect a change (F(1, 19) = 5.80, p = .026, ηp
Overall, the memory costs associated with relational transformations were rather small. Unlike ordinal transformations, however, relational transformations can be more or less pronounced, which likely affects how much they interfere with memory. As the relational transformations that we applied were relatively mild—for instance, spatial positions only changed within their quadrants, and ISIs were swapped rather than changed radically, which only subtly altered the “rhythm” of item presentation—the observed effects may represent a lower bound on the range of possible effects.
To substantiate this idea that memory costs increase with the magnitude of relational changes in spatial or temporal structures, we additionally analyzed performance separately for trials with small and large relational changes. We quantified the relational change by representing the distances between items (the four spatial distances between items A-B, B-C, C-D, and D-A or the three temporal distances, ISIs, between items 1–2, 2–3, and 3–4, respectively) as vectors in four- or three-dimensional space (for spatial or temporal distances, respectively). Specifically, we calculated the angle between these vectors at encoding and at retrieval. Larger angular differences indicate larger relational changes in the spatial or temporal structure. For example, in a trial with a relational transformation of the temporal structure, the vectors representing ISIs at encoding (600, 300, 100 ms) and at retrieval (300, 100, 600 ms) are of the same length (i.e., [100
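This angle-based measure of relational change can be sketched as follows (a minimal sketch; the function name is ours, not from the authors' analysis code):

```python
import numpy as np

def relational_change_angle(distances_encoding, distances_retrieval):
    """Angle (in degrees) between the vectors of interitem distances at
    encoding and at retrieval; larger angles mean larger relational change."""
    u = np.asarray(distances_encoding, dtype=float)
    v = np.asarray(distances_retrieval, dtype=float)
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against floating-point values just outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Example from the text: ISIs of (600, 300, 100) ms at encoding were
# presented as (300, 100, 600) ms at retrieval; the vectors have equal
# length but point in different directions (angle of about 54 degrees).
angle = relational_change_angle([600, 300, 100], [300, 100, 600])
```

Note that a pure rescaling of all distances (a global change) leaves the angle at 0°, so this measure isolates relational change from overall magnitude.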
A repeated measures ANOVA (no change vs. small change vs. large change) revealed that performance scaled with the magnitude of the relational change (
These findings provide a first piece of evidence that memory impairments due to relational transformations of spatiotemporal context increase with the magnitude of the change that these transformations induce. Here, this pattern was observed even though the relational changes and their variation were relatively small (i.e., small and large relational changes did not differ that much). In any case, it is evident that memory performance was sensitive to these subtle relational transformations of spatial or temporal configurations, revealing that relative spatial or temporal distances (i.e., intervals) between items are included in reference frames as well.
In the third experiment, we changed spatial and/or temporal structures at retrieval globally: spatial and/or temporal coordinates at encoding were multiplied by the same factor, expanding or shrinking the entire configuration. While this type of transformation affected absolute item positions in space or time, relational spatial or temporal properties—that is, positions relative to other items and item order—remained intact.
Unless stated otherwise, the methods of Experiment 3 were identical to those of Experiment 1.
Participants
Twenty volunteers (15 women, 5 men; M age: 26 years; age range: 18–31 years) participated in the experiment. Four of them had already participated in Experiments 1 and 2, two only in Experiment 1, and one only in Experiment 2.
Apparatus and Stimuli
For item presentation at encoding (i.e., in the memory array), we used the same six spatial configurations as in Experiment 2 and six temporal configurations, which were permutations of ISIs of 200, 400, and 800 ms. On each trial, one spatial and one temporal configuration was chosen randomly and independently. For item presentation at retrieval (i.e., in the test array), the eccentricities of spatial locations and ISIs between items were either all identical to those at encoding (no transformation) or they were all equally decreased or increased by one quarter or one half of their magnitude (global spatial or temporal transformation). That is, the entire configuration was either shrunk or expanded (equally likely, randomly chosen on each trial). For example, a temporal ISI configuration of 400-800-200 ms was either shrunk to 300-600-150 ms or to 200-400-100 ms, or it was expanded to 500-1,000-250 ms or to 600-1,200-300 ms. Examples of global transformations of the spatial or temporal structure are shown in
Procedure and Design
The trial procedure and design of Experiment 3 was identical to Experiment 1, except that global instead of ordinal transformations were applied to the temporal and/or spatial structure of item presentation at retrieval (see Apparatus and Stimuli and
Data Analysis
Using the same criteria as in Experiment 1, RT outliers (2.77% of all trials) were removed from the data.
Global transformations of the entire configuration, which left relative interitem relations intact, did not affect the sensitivity to detect a change (
To ensure that no effect of global transformations in a certain subset of trials (e.g., trials with a shrinkage of the configuration or trials with a larger global change) would go undetected, we performed two additional analyses. First, we compared sensitivity in trials with a shrinkage of the spatial or temporal configuration with trials with an expansion of either configuration (Figure S1 in the online supplemental materials). An ANOVA with the factors dimension (spatial vs. temporal) and transformation type (shrinkage vs. expansion) revealed that sensitivity did not depend on whether the configuration was shrunk or expanded (F(1, 19) = 1.58, p = .225, BF01 = 2.32). There was no interaction between transformation type and dimension (F(1, 19) = 2.22, p = .153). Sensitivity did not differ from the baseline condition with intact spatial and temporal structures in any of these transformation conditions (two-tailed t-tests; spatial shrinkage: t(19) = −.34, p = .741, BF01 = 4.09; spatial expansion: t(19) = −.37, p = .716, BF01 = 4.05; temporal shrinkage: t(19) = −.80, p = .431, BF01 = 3.23; temporal expansion: t(19) = .98, p = .339, BF01 = 2.81).
Second, as global transformations, just like relational transformations, can be more or less pronounced, we additionally analyzed sensitivity as a function of change magnitude. We used the same approach as in Experiment 2, except that we focused on a different vector property: Global changes in spatiotemporal structures affect the lengths of the vectors representing spatial or temporal distances between items, with larger absolute length differences reflecting larger changes in either direction (expansion or shrinkage). For example, in a trial with a global transformation of the temporal structure, the vectors representing ISIs at encoding (200, 400, 800 ms) and at retrieval (100, 200, 400 ms) do not differ in direction (the angle between vectors is 0°), but the length of the vector at encoding is 916.52 ms, and the length of the vector at retrieval is 458.26 ms. We calculated the magnitude of the global change as percent change = 100 × (vector length at encoding − vector length at retrieval)/vector length at encoding, and split data accordingly into trials with a small global change (spatial transformation: 25.41% ± .28%, corresponding to a mean location change of 1.28 dva; temporal transformation: 26.08% ± .21%, mean ISI change of 121.68 ms) and trials with a large global change (spatial transformation: 48.66% ± .26%, mean location change of 2.43 dva; temporal transformation: 49.60% ± .13%, mean ISI change of 231.47 ms). Note that this is essentially the same as dividing trials based on the predefined magnitude of configuration shrinkage or expansion by a factor of .25 versus .5 (see Stimuli and Apparatus).
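The length-based measure of global change magnitude can be sketched analogously (again a minimal sketch with our own function name):

```python
import numpy as np

def global_change_percent(distances_encoding, distances_retrieval):
    """Magnitude of a global (expansion/shrinkage) change: percent change
    in the length of the interitem-distance vector from encoding to retrieval.
    Positive values indicate shrinkage, negative values expansion."""
    len_enc = np.linalg.norm(np.asarray(distances_encoding, dtype=float))
    len_ret = np.linalg.norm(np.asarray(distances_retrieval, dtype=float))
    return 100 * (len_enc - len_ret) / len_enc

# Example from the text: encoding ISIs (200, 400, 800) ms have a vector
# length of about 916.52 ms; retrieval ISIs (100, 200, 400) ms, about
# 458.26 ms, i.e., a 50% shrinkage of the temporal configuration.
pct = global_change_percent([200, 400, 800], [100, 200, 400])
```

Because a global transformation scales every distance by the same factor, the vector's direction (the relational measure of Experiment 2) is unchanged while its length is scaled by that factor.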
The magnitude of global contextual changes, however, did not affect sensitivity (F(2, 38) = 1.06, p = .357, BF01 = 3.49), which was comparable for small and large global changes (t(19) = 1.49, p = .077, BF01 = 1.67) and was not reduced with either small (t(19) = −1.23, p = .883, BF01 = 2.23) or large global changes (t(19) = −.20, p = .576, BF01 = 4.23) relative to the baseline condition without a change. In fact, even with large global changes, performance was numerically still slightly better than in the baseline condition.
Position changes that leave interitem relations and thus the relative context configuration intact do not seem to impair memory, indicating that absolute positions in space or time are not critical for spatial or temporal reference frames in visual working memory.
In Experiment 4, we applied partial ordinal transformations to the spatial or temporal structure: Unlike in Experiment 1, only two of the four items switched their spatial locations or temporal positions at retrieval. The probed item—that is, the item that changed its color in color-change trials—was either involved in this transformation (i.e., one of the two items that had switched positions) or not (i.e., one of the two items at the same locations or serial positions as at encoding). This experiment served two purposes. First, by comparing the transformation conditions against the baseline condition with intact spatial and temporal structure we were able to clarify if spatial and temporal reference frames are also sensitive to ordinal changes that are less pronounced and only affect a part of the context configuration. Second, comparing transformation conditions in which the probe item was involved versus not involved in the transformation allowed us to determine if memory for a specific item is only impaired when its own (ordinal) position is changed, or if it is generally sensitive to any change in its spatial or temporal reference frame—even if the item itself is not directly affected and a part of the context remains intact as well.
Unless stated otherwise, the methods of Experiment 4 were identical to those of Experiment 1.
Participants
Twenty volunteers (15 women, 5 men; mean age: 26 years; age range: 18–33 years) participated in the experiment. Two of them had already participated in Experiments 1 to 3, one only in Experiment 1, one only in Experiment 2, and two only in Experiment 3.
Apparatus and Stimuli
For item presentation at encoding, we used the same spatial and temporal structures as in Experiment 2. At retrieval, spatial and temporal structures were either the same as those at encoding (intact spatial and temporal structures), or we applied partial ordinal transformations to the spatial or temporal structures by switching the locations or serial positions of two of the four items relative to their positions in the memory array. All six pairwise item combinations (A-B, B-C, C-D, A-C, A-D, B-D) were equally likely to be switched and randomly chosen on each trial. The probe item, for which a color change occurred in color-change trials, was chosen from either the item pair that was switched (probe item involved in transformation) or from the pair of items that remained at their original locations or serial positions, respectively (probe item not involved in transformation).
Procedure and Design
The experiment consisted of 864 trials, which were completed in two identical sessions on separate days. Trials were equally distributed among the transformation conditions (intact vs. spatial transformation vs. temporal transformation) and probe item conditions (involved vs. not involved in transformation). Transformation and probe item conditions were fully crossed. The condition with transformations of both spatial and temporal structures was dropped in favor of larger trial numbers in the remaining conditions.
Data Analysis
Using the same criteria as in Experiment 1, RT outliers (2.32% of all trials) were removed from the data. For the calculation of sensitivity, an equal number of trials without a color change was randomly assigned to the two probe-item conditions (which only affected color-change trials). Individual measures of sensitivity and mean reaction times were first submitted to repeated measures ANOVAs with the factor transformation condition (intact vs. spatial transformation vs. temporal transformation) to establish if partial ordinal transformations of the spatial or temporal structures generally affected memory performance. To determine if memory for a specific item was impaired by a transformation of any two items in the configuration or only when that item itself was transformed relative to its context, we first tested each of the four conditions with a transformed spatial or temporal structure (probe item involved and not involved) against the baseline condition (intact spatial and temporal structures) with one-tailed t-tests. We then compared the probe item conditions (involved vs. not involved) within transformation conditions (spatial vs. temporal) with two-tailed t-tests.
As the manipulation of whether or not the probe item was involved in the transformation only concerned color-change trials, we additionally analyzed accuracy in percent for these trials and again compared probe item conditions within each transformation condition.
Overall, even partial ordinal transformations affected the sensitivity to detect a color change (F(2, 38) = 6.93, p = .003, ηp
Given that there was no designated probe item in trials without a color change, a potentially stronger effect on memory when the tested item was involved in a spatial or temporal ordinal transformation might have been obscured by performance in the no-color-change trials, which were included in the main analyses. Therefore, we additionally computed accuracy for color-change trials only. As in the main analyses, however, there was no difference in performance for probe items that were not involved and performance for probe items that were involved in the ordinal transformation of either the spatial (not involved: 72.24% ±2.64%; involved: 71.97% ±2.70%; t(19) = .14, p = .891, BF01 = 4.27) or the temporal structure (not involved: 70.22% ±2.75%; involved: 70.68% ±2.70%; t(19) = .25, p = .806, BF01 = 4.19). It should be noted, however, that the detrimental effects of spatial or temporal transformations turned out to be primarily driven by an increase in false alarms in trials without a color change rather than by an increase in misses in trials with a color change (Figure S2 in the online supplemental materials). Thus, it comes as no surprise that performance did not depend on whether the probe item in color-change trials was or was not involved in the transformation, as performance in these trials was hardly affected by the transformations to begin with.
Based on these findings, we can conclude that ordinal transformations substantially interfere with memory even when they involve only a part of the context configuration and irrespective of whether the probed item is affected by this transformation or not. Remarkably, memory for a surface feature of a given item depends, to a certain degree, on the integrity of the item’s task-irrelevant spatiotemporal context and thus on other items’ spatial or temporal positions.
In Experiment 5, we applied partial relational transformations to the spatial or temporal structure at retrieval, following the same logic as in Experiment 4. For transformations of the spatial configuration, one item changed its position, which affected two of the four relative interitem distances between neighboring items (e.g., the relative spatial distances between items A to D were C–D < D–A < B–C < A–B at encoding; at retrieval, item B was presented in a new position, changing the relative interitem relations to C–D < A–B < B–C < D–A); for transformations of the temporal configuration, ISIs were changed, likewise affecting the relative temporal distances between items (e.g., relative temporal distances between items changed from ISI1 < ISI3 < ISI2 at encoding to ISI3 < ISI1 < ISI2 at retrieval). The probed item, whose color changed in color-change trials, was either involved in this transformation or not directly affected.
Unless stated otherwise, the methods of Experiment 5 were identical to those of Experiment 1.
Participants
Twenty volunteers (12 women, 8 men; mean age: 26 years; age range: 20–34 years) participated in the experiment. One of them had already participated in Experiments 1 to 4, one in Experiments 1 and 4, one in Experiments 2 and 4, one in Experiments 3 and 4, three only in Experiment 1, one only in Experiment 3, and two only in Experiment 4.
Apparatus and Stimuli
The same six spatial and temporal structures as in Experiments 2 and 4 were used for item presentation at encoding. For partial relational transformations of the spatial structure, one of the items was presented at a different location (but still in the same quadrant) at retrieval, changing the relative interitem distances to its two neighboring items in clockwise and counterclockwise direction. For each spatial configuration, each item was equally likely to change position. The probe item was either the item that changed position (probe item involved in transformation) or the item in the opposite quadrant, whose relative distances to its neighboring items were not affected by the transformation (probe item not involved in transformation). Temporal structures were partially transformed in an analogous manner: One item shifted its temporal position relative to the other items. If this was the first or last item in the sequence, one ISI was changed (e.g., the ISI between the first and second items); if this was the second or third item in the sequence, the two surrounding ISIs were changed. The remaining ISI(s) was/were the same as at encoding. Each item was equally likely to change its relative temporal position. ISIs changed to one of the ISIs not part of the temporal structure in that trial—i.e., when a permutation of 100, 300, and 600 ms was used at encoding, ISIs changed to 200, 400, or 800 ms and vice versa. The probe item was either one of the items whose relative temporal distances (i.e., ISIs) to the preceding and/or succeeding items was changed (probe item involved in transformation) or one of the items whose relative temporal distances to its temporal neighbors were the same (probe item not involved in transformation).
Procedure and Design
The experimental design was analogous to that of Experiment 4: A total of 864 trials were equally divided among transformation conditions (intact vs. spatial transformation vs. temporal transformation) and probe item conditions (probe item involved vs. not involved in transformation), and completed in two sessions on separate days.
Data Analysis
Using the same criteria as in Experiment 1, RT outliers (2.49% of all trials) were removed from the data. As in Experiment 4, we first conducted repeated measures ANOVAs with the factor transformation condition (intact vs. spatial transformation vs. temporal transformation) to determine if partial relational transformations of either the spatial or temporal structure interfered with memory. We then tested each of the four transformation conditions (Spatial vs. Temporal × Probe Item Involved vs. Not Involved) against the baseline condition with intact spatial and temporal structures (one-tailed t-tests) to clarify if memory was reduced in each of these cases. Finally, we compared probe item conditions within transformation conditions (two-tailed t-tests).
Overall, partial relational transformations did not significantly reduce sensitivity (F(2, 38) = 1.13, p = .332, BF01 = 3.29;
The effects of relational transformations were already rather small when they affected the entire spatial or temporal configuration (see also Experiment 2 and General Discussion). However, the costs associated with relational changes appear to scale with the magnitude of these changes (
The visual objects or events we encounter in natural and thus often dynamic settings can be differentiated based on their spatial or temporal properties. With this study, we sought to clarify specifically which spatiotemporal properties are incidentally encoded along with nonspatiotemporal features to form reference frames in visual working memory. To this end, we applied different types of transformations—ordinal, relational, and global—to the task-irrelevant spatial and/or temporal structures of item presentation at retrieval, reasoning that memory performance should only be impaired if the internal reference frame in visual working memory relies on the spatial or temporal properties that are affected by the respective transformation (ordinal, relative or absolute position in space or time). Relative to a no-transformation condition, memory decrements were observed when spatial or temporal item positions changed in a manner that affected either their ordinal position in a sequence (ordinal transformation) or their relative distances to the other items (relational transformation). By contrast, global transformations of spatial or temporal structures, which involved changes in the absolute item positions but left interitem relations intact by shrinking or expanding the entire configuration, did not interfere with memory.
For the spatial domain, these results corroborate previous reports that memory for object identity depends on the object’s location relative to other items (e.g.,
Overall, the costs associated with relational changes of spatial or temporal structures were rather small and less consistent than the costs associated with ordinal changes, for both full and partial transformations, so it is tempting to conclude that ordinal position is more important than the relative distances (in certain directions) between items. One must keep in mind, however, that relational transformations naturally involve more degrees of freedom than ordinal transformations: They can be more or less pronounced, which appears to affect the extent to which these changes interfere with memory (see
As global transformations can also be more or less pronounced, one might argue that we failed to observe any effects of global transformations not because absolute position information is generally not critical for spatial or temporal reference frames, but because the global changes were not extensive enough. According to this line of reasoning, the overall pattern of results across experiments could be taken to reflect a gradient of change magnitudes induced by the transformations and thus of associated memory impairments rather than qualitative differences in the type of transformations that spatial and temporal reference frames are sensitive to. However, we consider this to be an unlikely scenario for a couple of reasons. First, there was no indication that memory performance depends on the magnitude of global changes in spatiotemporal context (unlike what we observed for relational changes,
While our primary goal was to establish the metrical properties of spatial and temporal reference frames, the present experiments also lend support to two related ideas that we and others have recently put forward. First, they confirm that not only spatial (e.g.,
Thus, the incidental scaffolding of objects by their spatial and temporal contexts seems to provide reference frames that mediate feature binding and facilitate retrieval; in their function, space and time can “stand in” for each other.
An issue that remains somewhat unresolved at this point is for which function(s) specifically space and time are equivalent. That is, which processes or mechanisms of visual working memory are supported by spatial and temporal reference frames (and thus disrupted by transformations of either the spatial or the temporal structure)? As manipulating the availability or integrity of spatial or temporal structures at test has repeatedly been shown to impair memory (e.g.,
In change-detection tasks, spatiotemporal context might also be utilized to establish correspondence between the sample and test displays. Transformations of spatial or temporal relations would accordingly disrupt this matching process and impair performance. More specifically, one might predict that the mismatch between the spatiotemporal context at retrieval and that in memory (i.e., the reference frame as defined by ordinal and relational properties) would primarily increase the proportion of false alarms (rather than misses), because the mismatch signal in same-color trials leads participants to mistake the irrelevant change for a relevant one. An increase in false alarms is indeed what we observed in the experiments with ordinal and relational transformations (Figure S2 in the online supplemental materials).
It is entirely conceivable that spatiotemporal context does not only support retrieval but also earlier stages of visual working memory processing, for example the maintenance stage by facilitating processes such as individuation or attentional refreshing. The specific function fulfilled by the representation of memory contents within their spatiotemporal context might even take slightly different forms, depending on the respective task demands (e.g., establishing correspondence for change detection or ensuring reliable access for report), and become manifest in different behavioral signatures (e.g., an increase in false alarm rate as in the present study, or specific patterns of error correlations as in
Another important avenue for future research will be to identify principles that govern the formation of spatiotemporal reference frames under more natural conditions. Everyday visual scenes are markedly different from the simple visual arrays that have previously been used to study spatial or temporal reference frames in visual working memory. For example, they typically contain a variety of objects that are entirely irrelevant for our current goals; in the present and in previous studies, by contrast, space and time constituted task-irrelevant feature dimensions of task-relevant objects. While we know that task-irrelevant objects can be filtered out (more or less successfully; e.g.,
To conclude, we have shown that spatial and temporal reference frames in visual working memory are defined by interitem relations—both the ordinal position of items in a spatial or temporal sequence and the relative distances between items in space or time—rather than by bindings between items and their absolute positions. The encoding of objects within their spatiotemporal context appears to be a largely automatic process, which occurs even when spatiotemporal configurations are irrelevant and unreliable. By revealing that spatial and temporal reference frames share the same metrical properties, our results further complement recent findings indicating that time serves a similar function as space for visual working memory.
Allen, M., Poggiali, D., Whitaker, K., Marshall, T. R., & Kievit, R. A. (2019). Raincloud plots: A multi-platform tool for robust data visualization. Wellcome Open Research, 4, 63. 10.12688/wellcomeopenres.15191.1
Boduroglu, A., & Shah, P. (2009). Effects of spatial configurations on visual change detection: An account of bias changes. Memory & Cognition, 37(8), 1120–1131. 10.3758/MC.37.8.1120
Boduroglu, A., & Shah, P. (2014). Configural representations in spatial working memory. Visual Cognition, 22(1), 102–124. 10.1080/13506285.2013.875499
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436. 10.1163/156856897X00357
Cai, Y., Sheldon, A. D., Yu, Q., & Postle, B. R. (2019). Overlapping and distinct contributions of stimulus location and of spatial context to nonspatial visual short-term memory. Journal of Neurophysiology, 121(4), 1222–1231. 10.1152/jn.00062.2019
Chen, H., & Wyble, B. (2015). The location but not the attributes of visual cues are automatically encoded into working memory. Vision Research, 107, 76–85. 10.1016/j.visres.2014.11.010
Cousineau, D. (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson’s method. Tutorials in Quantitative Methods for Psychology, 1(1), 42–45. 10.20982/tqmp.01.1.p042
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G∗Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. 10.3758/BF03193146
Foster, J. J., Bsales, E. M., Jaffe, R. J., & Awh, E. (2017). Alpha-band activity reveals spontaneous representations of spatial position in visual working memory. Current Biology, 27(20), 3216–3223.e6. 10.1016/j.cub.2017.09.031
Gazzaley, A., & Nobre, A. C. (2012). Top-down modulation: Bridging selective attention and working memory. Trends in Cognitive Sciences, 16(2), 129–135. 10.1016/j.tics.2011.11.014
Griffin, I. C., & Nobre, A. C. (2003). Orienting attention to locations in internal representations. Journal of Cognitive Neuroscience, 15(8), 1176–1194. 10.1162/089892903322598139
Heuer, A., Crawford, J. D., & Schubö, A. (2017). Action relevance induces an attentional weighting of representations in visual working memory. Memory & Cognition, 45(3), 413–427. 10.3758/s13421-016-0670-3
Heuer, A., Ohl, S., & Rolfs, M. (2020). Memory for action: A functional view of selection in visual working memory. Visual Cognition, 28(5-8), 388–400. 10.1080/13506285.2020.1764156
Heuer, A., & Rolfs, M. (2021). Incidental encoding of visual information in temporal reference frames in working memory. Cognition, 207, 104526. 10.1016/j.cognition.2020.104526
Heuer, A., & Rolfs, M. (2022). A direct comparison of attentional orienting to spatial and temporal positions in visual working memory. Psychonomic Bulletin & Review, 29, 182–190. 10.3758/s13423-021-01972-3
Heuer, A., & Schubö, A. (2018). Separate and combined effects of action relevance and motivational value on visual working memory. Journal of Vision, 18(5), 14.
Heuer, A., Schubö, A., & Crawford, J. D. (2016). Different cortical mechanisms for spatial vs. feature-based attentional selection in visual working memory. Frontiers in Human Neuroscience, 10, 415. 10.3389/fnhum.2016.00415
Hollingworth, A. (2006). Scene and position specificity in visual memory for objects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(1), 58–69. 10.1037/0278-7393.32.1.58
Hollingworth, A. (2007). Object-position binding in visual memory for natural scenes and object arrays. Journal of Experimental Psychology: Human Perception and Performance, 33(1), 31–47. 10.1037/0096-1523.33.1.31
JASP Team. (2020). JASP (Version 0.9.1) [Computer software].
Jiang, Y., Olson, I. R., & Chun, M. M. (2000). Organization of visual short-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(3), 683–702. 10.1037/0278-7393.26.3.683
Jost, K., Bryck, R. L., Vogel, E. K., & Mayr, U. (2011). Are old adults just like low working memory young adults? Filtering efficiency and age differences in visual working memory. Cerebral Cortex, 21(5), 1147–1154. 10.1093/cercor/bhq185
Kleiner, M., Brainard, D. H., & Pelli, D. G. (2007). What’s new in Psychtoolbox-3. Perception, 36(14), 1–16. 10.1068/v070821
Logie, R. H., Brockmole, J. R., & Jaswal, S. (2011). Feature binding in visual short-term memory is unaffected by task-irrelevant changes of location, shape, and color. Memory & Cognition, 39(1), 24–36. 10.3758/s13421-010-0001-z
Manohar, S. G., Pertzov, Y., & Husain, M. (2017). Short-term memory for spatial, sequential and duration information. Current Opinion in Behavioral Sciences, 17, 20–26. 10.1016/j.cobeha.2017.05.023
Morey, R. D. (2008). Confidence intervals from normalized data: A correction to Cousineau (2005). Tutorials in Quantitative Methods for Psychology, 4(2), 61–64. 10.20982/tqmp.04.2.p061
Oberauer, K., & Lin, H. Y. (2017). An interference model of visual working memory. Psychological Review, 124(1), 21–59. 10.1037/rev0000044
Ohl, S., & Rolfs, M. (2017). Saccadic eye movements impose a natural bottleneck on visual short-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(5), 736–748. 10.1037/xlm0000338
Ohl, S., & Rolfs, M. (2018). Saccadic selection of stabilized items in visuospatial working memory. Consciousness and Cognition, 64, 32–44. 10.1016/j.concog.2018.06.016
Ohl, S., & Rolfs, M. (2020). Bold moves: Inevitable saccadic selection in visual short-term memory. Journal of Vision, 20(2), 11.
Olson, I. R., & Marshuetz, C. (2005). Remembering “what” brings along “where” in visual working memory. Perception & Psychophysics, 67(2), 185–194. 10.3758/BF03206483
Papenmeier, F., Huff, M., & Schwan, S. (2012). Representation of dynamic spatial configurations in visual short-term memory. Attention, Perception, & Psychophysics, 74(2), 397–415. 10.3758/s13414-011-0242-3
Pertzov, Y., & Husain, M. (2014). The privileged role of location in visual working memory. Attention, Perception, & Psychophysics, 76(7), 1914–1924. 10.3758/s13414-013-0541-y
Rerko, L., Oberauer, K., & Lin, H.-Y. (2014). Spatial transposition gradients in visual working memory. Quarterly Journal of Experimental Psychology, 67(1), 3–15. 10.1080/17470218.2013.789543
Rondina, R., Curtiss, K., Meltzer, J. A., Barense, M. D., & Ryan, J. D. (2017). The organisation of spatial and temporal relations in memory. Memory, 25(4), 436–439. 10.1080/09658211.2016.1182553
Ryan, J. D., & Villate, C. (2009). Building visual representations: The binding of relative spatial relations across time. Visual Cognition, 17(1–2), 254–272. 10.1080/13506280802336362
Sapkota, R. P., Pardhan, S., & van der Linde, I. (2016). Spatiotemporal proximity effects in visual short-term memory examined by target-nontarget analysis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(8), 1304–1315. 10.1037/xlm0000238
Schneegans, S., & Bays, P. M. (2017). Neural architecture for feature binding in visual working memory. The Journal of Neuroscience, 37(14), 3913–3925. 10.1523/JNEUROSCI.3493-16.2017
Schneegans, S., & Bays, P. M. (2019). New perspectives on binding in visual working memory. British Journal of Psychology, 110(2), 207–244. 10.1111/bjop.12345
Schneegans, S., Harrison, W. J., & Bays, P. M. (2021). Location-independent feature binding in visual working memory for sequentially presented objects. Attention, Perception, & Psychophysics, 83(6), 2377–2393. 10.3758/s13414-021-02245-w
Schneegans, S., McMaster, J. M. V., & Bays, P. M. (2022). Role of time in binding features in visual working memory. Psychological Review. Advance online publication. 10.1037/rev0000331
Snow, J. C., & Culham, J. C. (2021). The treachery of images: How realism influences brain and behavior. Trends in Cognitive Sciences, 25(6), 506–519. 10.1016/j.tics.2021.02.008
Souza, A. S., & Oberauer, K. (2016). In search of the focus of attention in working memory: 13 years of the retro-cue effect. Attention, Perception, & Psychophysics, 78(7), 1839–1860. 10.3758/s13414-016-1108-5
Stanislaw, H., & Todorov, N. (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers, 31(1), 137–149. 10.3758/BF03207704
Sun, H. M., & Gordon, R. D. (2009). The effect of spatial and nonspatial contextual information on visual object memory. Visual Cognition, 17(8), 1259–1270. 10.1080/13506280802469510
Sun, H. M., & Gordon, R. D. (2010). The influence of location and visual features on visual object memory. Memory & Cognition, 38(8), 1049–1057. 10.3758/MC.38.8.1049
Timm, J. D., & Papenmeier, F. (2019a). Reorganization of spatial configurations in visual working memory: A matter of set size? PLoS ONE, 14(11), e0225068. 10.1371/journal.pone.0225068
Timm, J. D., & Papenmeier, F. (2019b). Reorganization of spatial configurations in visual working memory. Memory & Cognition, 47(8), 1469–1480. 10.3758/s13421-019-00944-2
Timm, J. D., & Papenmeier, F. (2020). (Re-)organisation of spatial configurations in visual working memory: The fate of objects rendered relevant or irrelevant by selective attention. Quarterly Journal of Experimental Psychology, 73(12), 2246–2259. 10.1177/1747021820951130
Treisman, A., & Zhang, W. (2006). Location and binding in visual working memory. Memory & Cognition, 34(8), 1704–1719. 10.3758/BF03195932
Tulving, E. (1974). Cue-dependent forgetting. American Scientist, 62, 74–82.
Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80(5), 352–373. 10.1037/h0020071
Udale, R., Farrell, S., & Kent, C. (2017). No evidence for binding of items to task-irrelevant backgrounds in visual working memory. Memory & Cognition, 45(7), 1144–1159. 10.3758/s13421-017-0727-y
Udale, R., Farrell, S., & Kent, C. (2018). No evidence of binding items to spatial configuration representations in visual working memory. Memory & Cognition, 46(6), 955–968. 10.3758/s13421-018-0814-8
van Ede, F., Chekroud, S. R., & Nobre, A. C. (2019). Human gaze tracks attentional focusing in memorized visual space. Nature Human Behaviour, 3(5), 462–470. 10.1038/s41562-019-0549-y
Vogel, E. K., McCollough, A. W., & Machizawa, M. G. (2005). Neural measures reveal individual differences in controlling access to working memory. Nature, 438(7067), 500–503. 10.1038/nature04171
Wagenaar, W. A. (1969). Note on the construction of digram-balanced Latin squares. Psychological Bulletin, 72(6), 384–386. 10.1037/h0028329
Woodman, G. F., Vogel, E. K., & Luck, S. J. (2012). Flexibility in visual working memory: Accurate change detection in the face of irrelevant variations in position. Visual Cognition, 20(1), 1–28. 10.1080/13506285.2011.630694
Submitted: February 12, 2022; Revised: June 14, 2022; Accepted: June 25, 2022