Synchronous Linguistic and Visual Processing (SynProc)
Start date: Sep 1, 2008
End date: Aug 31, 2014
When humans process language, they rarely do so in isolation. Linguistic input often occurs synchronously with visual input, e.g., in everyday activities such as attending a lecture or following directions on a map. The visual context constrains the interpretation of the linguistic input, and vice versa, making processing more efficient and less ambiguous. Given the ubiquity of synchronous linguistic and visual processing, it is surprising that there is only a sparse experimental literature that deals with this topic, while virtually no computational models exist that capture the synchronous interpretation process. We propose an experimental research program that will investigate key features of synchronous processing by tracking participants' eye movements when they view a naturalistic scene and listen to a speech stimulus at the same time. The aim is to understand synchronous processing better by studying the interaction of saliency and ambiguity, and the role of incrementality, object context, and task factors. These experimental results will feed into a series of computational models that predict the eye-movement patterns that humans exhibit when they view a scene and listen to speech at the same time. The key modeling idea is to treat synchronous processing as an alignment problem, for which a rich literature exists in computational linguistics. Building on this literature, we will develop models that incrementally construct aligned linguistic and visual representations, and that can be evaluated against eye-tracking data.
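The alignment idea sketched above can be illustrated with a toy model. The code below is a minimal, hypothetical sketch (not the project's actual model): words and scene objects are represented as hand-coded feature vectors, and as each word arrives, accumulated word-object similarity is converted via a softmax into a probability distribution over objects, a stand-in for the predicted fixation proportions that would be evaluated against eye-tracking data.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def incremental_alignment(word_vecs, object_vecs):
    """For each incoming word, yield a probability distribution over
    scene objects -- a stand-in for predicted fixation proportions."""
    object_vecs = list(object_vecs)
    evidence = [0.0] * len(object_vecs)  # accumulated evidence per object
    for w in word_vecs:
        evidence = [e + dot(w, o) for e, o in zip(evidence, object_vecs)]
        yield softmax(evidence)

# Toy scene: two objects with made-up features [animate, red, round]
scene = {"ball": [0.0, 0.9, 1.0], "cat": [1.0, 0.1, 0.2]}
# Toy utterance "the red ball", with made-up word features
utterance = {"the": [0.0, 0.0, 0.0], "red": [0.0, 1.0, 0.0], "ball": [0.0, 0.5, 1.0]}

for word, probs in zip(utterance, incremental_alignment(utterance.values(), scene.values())):
    print(word, [round(p, 2) for p in probs])
```

After "the", evidence is flat and both objects are equally likely; "red" and "ball" then incrementally shift the distribution toward the ball, mirroring how visual context disambiguates speech word by word.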