The current project uses a multi-method approach based on magnetoencephalography (MEG), transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to study the neural basis of semantic cognition (i.e., the use of conceptual knowledge to guide thought and behaviour). We focus on two highly interactive systems that underpin (1) the computation of word meaning (semantic representation) and (2) the task-appropriate selection and retrieval of semantic knowledge (semantic control). Since both components rely on sets of interconnected brain regions, a systems approach is required to understand how semantic cognition emerges from the interactive activation of these areas. Evidence from neuroimaging and patient data suggests that semantic representations encompass modality-specific knowledge, encoding sensory and motor properties, and the convergence of this information in an amodal semantic "hub". Nevertheless, relatively little is known about how these regions interact, and when and where activation is essential to the recovery of meaning. We will use MEG to explore how activation propagates between these components and TMS to establish which activation peaks are essential to a range of semantic tasks. Further, we will specify the organisation of the semantic control network by probing its response to different forms of control (i.e., bottom-up vs. top-down). For the first time, we will be able to characterise the underlying temporal patterns, testing the prediction that bottom-up control emerges as short-lived, stimulus-locked neural activity while strategic top-down control is reflected in sustained neural activity. Finally, we will investigate the interaction of the representation and control networks, since conceptual knowledge is assumed to be shaped to fit the constraints of the current task and context.
This project investigates the biological basis of semantic cognition, which refers to our ability to (i) assign meaning to everything we see, hear, read, smell and taste, and (ii) use this knowledge in a way that is appropriate to the task or context. Semantic cognition underpins our ability to communicate with people, to recognise signs, gestures and faces, and to use objects appropriately. Because of its relevance to everyday reasoning and behaviour, substantial research effort has been directed towards delineating the brain areas that support semantic cognition. Previous work has shown that semantic representation and control processes are not localised to a single brain area but rely on a set of distributed, interconnected regions. Despite this progress, little is currently known about how meaning retrieval emerges from interactions between these distributed brain areas. This important question is the focus of our project.
The project takes advantage of recent methodological advances in neuroscience, draws together a group of investigators with expertise in multiple methods, and builds on recent progress in our understanding of semantic representation and control, as well as on behavioural methods that tap these components. We make use of three complementary neuroscience techniques, each with unique strengths. Time-sensitive magnetoencephalography (MEG) recordings will be used to capture the spread of activation across the network, allowing us to infer patterns of communication between brain areas. We will then disrupt neural processing at specific time points and locations (via transcranial magnetic stimulation; TMS) to validate the MEG results. To increase spatial accuracy in both analyses, we will also acquire high-resolution images from functional magnetic resonance imaging (fMRI).
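To make the logic of this combination concrete, the sketch below shows, in MNE-Python, one way that MEG source time courses could be read out within an fMRI-defined region of interest. It is a minimal illustration under assumed inputs: the file names, the ROI label and the parameter values are hypothetical rather than part of the project's actual pipeline, and TMS has no counterpart in the code.

```python
# Minimal sketch (MNE-Python): MEG supplies the millisecond time courses,
# an fMRI-defined label supplies the spatial constraint. All file names and
# parameters below are illustrative assumptions.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Epoched MEG data, forward model and noise covariance (hypothetical files).
epochs = mne.read_epochs("sub01_semantic-epo.fif")
evoked = epochs.average()
fwd = mne.read_forward_solution("sub01-fwd.fif")
noise_cov = mne.read_cov("sub01-cov.fif")

# Distributed source estimate (dSPM) on the cortical surface.
inv = make_inverse_operator(evoked.info, fwd, noise_cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")

# Read out the time-resolved response within an fMRI-defined ROI
# (a hypothetical label exported from the fMRI analysis).
roi = mne.read_label("sub01_fmri_roi-lh.label")
roi_timecourse = stc.in_label(roi).data.mean(axis=0)  # mean across ROI vertices
stc.save("sub01_semantic")  # kept for later timing analyses
```

The design point illustrated here is the division of labour: fMRI contributes the "where" (the region of interest), while MEG contributes the "when" (the millisecond-resolved response within it).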
In the first part of our proposal, we will use temporal information from MEG to define how long it takes for meaning to emerge (e.g., after a word has been presented) and what kind of knowledge is necessary at each point in time. Recent MEG findings suggest that semantic processing involves a rapid forward sweep of activation from sensory to amodal semantic areas. When we access concepts, we also activate sensory-specific information (e.g., information about what the concept looks like and how it moves); however, we currently know little about when this sensory-specific information is integrated with amodal knowledge to activate the full concept. We will also use TMS to disrupt the function of specific brain areas at particular points in time, to establish when sites within the network make a necessary contribution to semantic cognition.
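One simple way of quantifying this timing question, continuing the hypothetical pipeline sketched above, is to compare when the evoked response peaks in a sensory-specific region and in a candidate amodal "hub" region. The label names, file paths and time window below are assumptions chosen only to show the logic, not the regions or analyses the project is committed to.

```python
# Illustrative latency comparison (MNE-Python), assuming a source estimate has
# been saved by a pipeline like the one sketched above. Labels, paths and the
# time window are hypothetical.
import numpy as np
import mne

stc = mne.read_source_estimate("sub01_semantic")  # hypothetical file stem
labels = mne.read_labels_from_annot(
    "sub01", parc="aparc", hemi="lh", subjects_dir="subjects"
)
visual = next(lab for lab in labels if lab.name == "lateraloccipital-lh")
hub = next(lab for lab in labels if lab.name == "temporalpole-lh")

def peak_latency(stc, label, tmin=0.0, tmax=0.6):
    """Latency (s) of the maximal mean response within a label."""
    roi = stc.in_label(label).copy().crop(tmin, tmax)
    timecourse = np.abs(roi.data).mean(axis=0)  # mean over ROI vertices
    return roi.times[int(np.argmax(timecourse))]

print("sensory-specific (visual) peak:", peak_latency(stc, visual))
print("amodal hub peak:               ", peak_latency(stc, hub))
```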
The second aim of our study is to explore the neural network underpinning semantic control: the mechanisms that allow us to focus on relevant aspects of knowledge and to ignore irrelevant information. Our previous findings suggest that activation in the semantic network is highly flexible and task-specific, and that different aspects of semantic control may involve different subsets of brain areas (e.g., tonic top-down influences vs. stimulus-driven bottom-up control). With MEG, we can gain new insights into these control processes by describing the order in which activity flows between sites and how the timing and duration of activation differ across semantic control manipulations. Moreover, we can assess, for the first time, the impact of semantic control demands on brain areas that store semantic knowledge: amodal concepts may be activated later or for longer in control-demanding situations, and sensory-specific regions may be strategically recruited by specific tasks.
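As an illustration of how the timing and duration question can be put to the data, the sketch below contrasts per-trial region-of-interest time courses from two hypothetical control conditions, asking when they diverge and whether the difference is transient (early, stimulus-locked) or sustained (late). The file names, sampling parameters and time windows are assumptions made for the example, not the project's actual conditions or analysis settings.

```python
# A minimal sketch, assuming per-trial ROI time courses have already been
# extracted (e.g., from source estimates like those above) for two
# hypothetical control conditions.
import numpy as np
from mne.stats import permutation_cluster_test

# Hypothetical arrays of shape (n_trials, n_times), sampled every 4 ms
# from stimulus onset; the .npy files are assumed to exist.
times = np.arange(0.0, 0.8, 0.004)
bottom_up = np.load("roi_tc_bottom_up.npy")
top_down = np.load("roi_tc_top_down.npy")

# Cluster-based permutation test across time: when do the conditions differ?
t_obs, clusters, cluster_pv, _ = permutation_cluster_test(
    [bottom_up, top_down], n_permutations=1000, seed=0, out_type="mask"
)
for cl, p in zip(clusters, cluster_pv):
    if p < 0.05:
        sl = cl[0]  # time slice covered by the significant cluster
        print(f"conditions differ from {times[sl.start]:.3f}s "
              f"to {times[sl.stop - 1]:.3f}s (p={p:.3f})")

# Crude transient vs. sustained index: condition difference in early vs. late windows.
early = (times >= 0.0) & (times < 0.3)
late = (times >= 0.3) & (times < 0.8)
diff = bottom_up.mean(axis=0) - top_down.mean(axis=0)
print("early-window difference:", diff[early].mean())
print("late-window difference: ", diff[late].mean())
```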
In addition to advancing biological models of semantic cognition, our project has broader implications. It will demonstrate the power of combined MEG/fMRI/TMS studies for advancing our knowledge of the biological basis of cognitive functions. In the future, it could also inform the understanding, diagnosis and treatment of patients with disorders of semantic cognition.