A video system for recognizing gestures by artificial neural networks for expressive musical control

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Standard

A video system for recognizing gestures by artificial neural networks for expressive musical control. / Modler, P; Myatt, T.

GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION. ed. / A Camurri; G Volpe. BERLIN : SPRINGER-VERLAG BERLIN, 2003. p. 541-548.


Harvard

Modler, P & Myatt, T 2003, A video system for recognizing gestures by artificial neural networks for expressive musical control. in A Camurri & G Volpe (eds), GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION. SPRINGER-VERLAG BERLIN, BERLIN, pp. 541-548, 5th International Workshop on Gesture-Based Communication in Human-Computer Interaction, Genova, 15/04/03.

APA

Modler, P., & Myatt, T. (2003). A video system for recognizing gestures by artificial neural networks for expressive musical control. In A. Camurri, & G. Volpe (Eds.), GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION (pp. 541-548). SPRINGER-VERLAG BERLIN.

Vancouver

Modler P, Myatt T. A video system for recognizing gestures by artificial neural networks for expressive musical control. In Camurri A, Volpe G, editors, GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION. BERLIN: SPRINGER-VERLAG BERLIN. 2003. p. 541-548

Author

Modler, P ; Myatt, T. / A video system for recognizing gestures by artificial neural networks for expressive musical control. GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION. editor / A Camurri ; G Volpe. BERLIN : SPRINGER-VERLAG BERLIN, 2003. pp. 541-548

BibTeX

@inproceedings{f71f7b5cdfee42cc81883de512c9f9de,
title = "A video system for recognizing gestures by artificial neural networks for expressive musical control",
abstract = "In this paper we describe a system that recognizes gestures to control musical processes. We applied a Time Delay Neural Network to match gestures processed as variations of luminance information in video streams. This resulted in recognition rates of about 90% for 3 different types of hand gestures, and the system is presented here as a prototype for a gesture recognition system that is tolerant of ambient conditions and environments. The neural network can be trained to recognize gestures that are difficult to describe by postures or sign language; it can therefore adapt to the unique gestures of a performer or to video sequences of arbitrary moving objects. We discuss the outcome of extending the system to successfully learn a set of 17 hand gestures. The application was implemented in jMax to achieve real-time operation and easy integration into a musical environment. We describe the design and learning procedure of the network using the Stuttgart Neural Network Simulator. The system aims to integrate into an environment that enables expressive control of musical parameters (KANSEI).",
author = "P Modler and T Myatt",
year = "2003",
language = "English",
isbn = "3-540-21072-5",
pages = "541--548",
editor = "A Camurri and G Volpe",
booktitle = "GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION",
publisher = "SPRINGER-VERLAG BERLIN",
note = "5th International Workshop on Gesture-Based Communication in Human-Computer Interaction ; Conference date: 15-04-2003 Through 17-04-2003",

}

RIS (suitable for import to EndNote)

TY - GEN

T1 - A video system for recognizing gestures by artificial neural networks for expressive musical control

AU - Modler, P

AU - Myatt, T

PY - 2003

Y1 - 2003

N2 - In this paper we describe a system that recognizes gestures to control musical processes. We applied a Time Delay Neural Network to match gestures processed as variations of luminance information in video streams. This resulted in recognition rates of about 90% for 3 different types of hand gestures, and the system is presented here as a prototype for a gesture recognition system that is tolerant of ambient conditions and environments. The neural network can be trained to recognize gestures that are difficult to describe by postures or sign language; it can therefore adapt to the unique gestures of a performer or to video sequences of arbitrary moving objects. We discuss the outcome of extending the system to successfully learn a set of 17 hand gestures. The application was implemented in jMax to achieve real-time operation and easy integration into a musical environment. We describe the design and learning procedure of the network using the Stuttgart Neural Network Simulator. The system aims to integrate into an environment that enables expressive control of musical parameters (KANSEI).

AB - In this paper we describe a system that recognizes gestures to control musical processes. We applied a Time Delay Neural Network to match gestures processed as variations of luminance information in video streams. This resulted in recognition rates of about 90% for 3 different types of hand gestures, and the system is presented here as a prototype for a gesture recognition system that is tolerant of ambient conditions and environments. The neural network can be trained to recognize gestures that are difficult to describe by postures or sign language; it can therefore adapt to the unique gestures of a performer or to video sequences of arbitrary moving objects. We discuss the outcome of extending the system to successfully learn a set of 17 hand gestures. The application was implemented in jMax to achieve real-time operation and easy integration into a musical environment. We describe the design and learning procedure of the network using the Stuttgart Neural Network Simulator. The system aims to integrate into an environment that enables expressive control of musical parameters (KANSEI).

M3 - Conference contribution

SN - 3-540-21072-5

SP - 541

EP - 548

BT - GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION

A2 - Camurri, A

A2 - Volpe, G

PB - SPRINGER-VERLAG BERLIN

CY - BERLIN

T2 - 5th International Workshop on Gesture-Based Communication in Human-Computer Interaction

Y2 - 15 April 2003 through 17 April 2003

ER -