Blending the Physical and the Virtual in Music Technology: From Interface Design to Multi-modal Signal Processing


Recent years have seen a significant increase in interest in rich multi-modal user interfaces that go beyond conventional mouse/keyboard/screen interaction. These new interface technologies are broadly impacting music technology and culture. New musical interfaces use a variety of sensing (and actuating) modalities to receive information from and present information to users, and they often require techniques from signal processing and machine learning to extract and fuse high-level information from noisy, high-dimensional signals over time. They therefore pose many interesting signal processing challenges while offering fascinating possibilities for new research. At the same time, the richness of possibilities for new forms of musical interaction demands a fresh approach to the design of musical technologies and has implications for performance aesthetics and music pedagogy. This tutorial begins with a general and gentle introduction to the theory and practice of designing new technologies for musical creation and performance. It continues with an overview of the signal processing and machine learning methods needed for more advanced work in new musical interface design.


Dr. George Tzanetakis is an Associate Professor in the Department of Computer Science, with cross-listed appointments in ECE and Music, at the University of Victoria, Canada. He is Canada Research Chair (Tier II) in the Computer Analysis of Audio and Music and received the Craigdarroch research award in artistic expression at the University of Victoria in 2012. In 2011 he was Visiting Faculty at Google Research. He received his PhD in Computer Science from Princeton University in 2002 and was a Post-Doctoral Fellow at Carnegie Mellon University in 2002-2003. His research spans all stages of audio content analysis, such as feature extraction, segmentation, and classification, with specific emphasis on music information retrieval. He has given several tutorials at well-known international conferences such as ICASSP, ACM Multimedia, and ISMIR. More recently he has been exploring new interfaces for musical expression, music robotics, computational ethnomusicology, and computer-assisted music instrument tutoring.

Dr. Sidney Fels, Professor, Electrical & Computer Engineering. Ph.D., Toronto. Sid has worked in HCI, neural networks, intelligent agents, and interactive arts for over ten years. He was a visiting researcher at ATR Media Integration & Communications Research Laboratories (1996/7). His multimedia interactive artwork, the Iamascope, has been exhibited worldwide. In 1995 he worked at Virtual Technologies Inc. in Palo Alto, developing the GesturePlus™ system and the CyberServer™. Sid created Glove-TalkII, which maps hand gestures to speech. He was co-chair of Graphics Interface 2000, chair of alt.chi at CHI'05, co-chair of ICEC'11, and co-chair of Interactivity at CHI'11. He leads the Human Communications Technology Laboratory and is Director of the Media and Graphics Interdisciplinary Centre. Sid co-founded the New Interfaces for Musical Expression conference.

Dr. Michael Lyons, Professor, Image Arts and Sciences, Ritsumeikan University. Ph.D. Physics, University of British Columbia. Michael has worked in computational neuroscience, pattern recognition, cognitive science, and interactive arts. He was a Research Fellow in Computation and Neural Systems at the California Institute of Technology (1992/3) and a Lecturer and Research Assistant Professor at the University of Southern California (1994/6). From 1996 to 2007 he worked as a Senior Research Scientist at the Advanced Telecommunications Research International Labs in Kyoto, Japan. He joined the newly established College of Image Arts and Sciences, Ritsumeikan University, as a Full Professor in April 2007. Michael co-founded the New Interfaces for Musical Expression conference.
