Authors
Gwendolyn Rehrig, Candace E. Peacock, Taylor R. Hayes, John M. Henderson, Fernanda Ferreira
Publication date
2020
Journal
Journal of Experimental Psychology: Learning, Memory, and Cognition
Volume
46
Issue
9
Pages
1659-1681
Description
The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new evidence shows scene meaning predicts attention better than image salience. Here we investigated the relevance of one aspect of meaning, graspability (the grasping interactions objects in the scene afford), given that affordances have been implicated in both visual and linguistic processing. We quantified image salience, meaning, and graspability for real-world scenes. In 3 eyetracking experiments, native English speakers described possible actions that could be carried out in a scene. We hypothesized that graspability would preferentially guide attention due to its task …
Total citations
[citation counts by year, 2020–2024]