Show simple item record

dc.contributor.author  External author(s) only
dc.identifier.citation  Andrea Bocincova, Christian N. L. Olivers, Mark G. Stokes and Sanjay G. Manohar. A common neural network architecture for visual search and working memory. Visual Cognition, September 2020.
dc.description  Open access
dc.description.abstract  Visual search and working memory (WM) are tightly linked cognitive processes. Theories of attentional selection assume that WM plays an important role in top-down guided visual search. However, computational models of visual search do not model WM. Here we show that an existing model of WM can utilize its mechanisms of rapid plasticity and pattern completion to perform visual search. In this model, a search template, like a memory item, is encoded into the network’s synaptic weights, forming a momentary stable attractor. During search, recurrent activation between the template and visual inputs amplifies the target and suppresses nonmatching features via mutual inhibition. While the model cannot outperform models designed specifically for search, it can, “off-the-shelf”, account for important characteristics. Notably, it produces search display set-size costs, repetition effects, and multiple-template search effects, qualitatively in line with empirical data. It is also informative that the model fails to produce some important aspects of visual search behaviour, such as suppression of repeated distractors. Also, without additional control structures for top-down guidance, the model lacks the ability to differentiate between encoding and searching for targets. The shared architecture bridges theories of visual search and visual WM, highlighting their common structure and their differences.
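The amplify-and-inhibit dynamic the abstract describes can be sketched as a toy rate model: a template is stored by rapid Hebbian plasticity (an outer-product weight update), and display items then compete, with template-driven resonance amplifying the match and mutual inhibition suppressing the rest. This is an illustrative sketch only; the update rule, learning rate, and all variable names are assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat = 50

# Feature vectors for the search template and three display items;
# item 0 is the target (it matches the template exactly).
template = rng.standard_normal(n_feat)
template /= np.linalg.norm(template)
distractors = [rng.standard_normal(n_feat) for _ in range(2)]
items = [template.copy()] + [d / np.linalg.norm(d) for d in distractors]

# "Rapid plasticity": encode the template as a Hebbian outer product,
# forming a momentary attractor in the synaptic weights.
W = np.outer(template, template)

# Activation of each display item, updated recurrently.
a = np.ones(len(items)) / len(items)
for _ in range(20):
    # Template-driven resonance: how strongly the stored pattern
    # completes each item (equals the squared template-item overlap).
    drive = np.array([item @ W @ item for item in items])
    # Mutual inhibition: each item is suppressed by the others' activity.
    inhibition = a.sum() - a
    a = np.maximum(a + 0.1 * (drive - 0.5 * inhibition), 0)
    a /= a.sum()

print(np.argmax(a))  # the template-matching item wins the competition
```

Because the target's resonance with the stored weights is maximal, its activation grows each step while the inhibition term pulls the distractors down, mirroring the target-amplification / distractor-suppression account in the abstract.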
dc.description.sponsorship  Supported by the NIHR
dc.title  A common neural network architecture for visual search and working memory

Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)
