Description
Sarah Hawkins, Richard Ogden and Ian Cross

'Joining in': spontaneous conversation and improvisational music-making

Starting from the perspective that human spoken and musical interaction involves jointly managed performance, we are collecting audio-visual data of dyads while they talk spontaneously, improvise music (largely on unfamiliar instruments), engage in similar types of cooperative play with non-musical toys and games, and take part in cooperative story-telling. The immediate aim is to establish effective ways to acquire a corpus of quasi-natural data of dyads interacting in speech and in music-making, with particular focus on what happens when someone 'joins in' ongoing behaviour in each modality.

We seek parallels between conversational and musical interaction by analysing the data in terms of categories and procedures established in the literature on the phonetics of conversational speech, with additional close attention to hand gestures, aspects of facial expression, and body movement. Likewise, we will examine the extent to which musical features common in 'participatory' musics are evident in all these interactions: these include repetition, complementary behaviours and variation.

We emphasize functional rather than formal relationships between units of theoretical analysis, and focus on sequences of behaviour analysed relative to one another; thus our units of theoretical analysis are seen as resources for the accomplishment of joint action rather than as purely formal elements, and we focus on how activities are jointly organized and managed. Many of these parameters, already identified as governing conversational structure, can be seen as 'musical': relationships between the timing, pitch, loudness and timbre of the current talker's contribution relative to what has already been said, or is being said simultaneously.
By examining the same parameters across the various musical and non-musical instances of 'joining in', each classified in terms of 'alignment' and 'disalignment' with a prior action (which roughly corresponds to cooperative vs uncooperative joining in), we hope to take the first steps towards identifying a common framework for understanding what underpins successful human musical and linguistic interaction.

Period | 9 Nov 2012
---|---
Event type | Conference
Location | London
Related content

Projects

- Temporal co-ordination in talk-in-interaction: Research project (funded)