New Forms of Representation to Listen, Analyze, and Create Electroacoustic Music

At the next European Music Analysis Conference, I will present a paper on the links between listening, analysis, and creation. This paper is part of the session Listening to electroacoustic music through analysis.

Abstract

Analysis uses representation to detect and demonstrate paradigmatic or syntagmatic links in musical material. For several years, the particular case of electroacoustic music has brought new digital techniques for creating complex and interactive representations. These representations reveal potential links between analyzing, modeling, and creating electroacoustic music. Indeed, the EAnalysis[1] and TIAALS[2] software explore interactive representations through typology charts, paradigmatic links, or any other organization of sound and music that breaks with traditional time/frequency graphic representations. They also make it possible to associate various types of files, such as audio, video, or data from other software. The features of these programs allow the listener to navigate inside the work in different ways and are closely related to creative processes. On the other side, software developed for musical creation can be used in analysis. Recent versions of the Audiosculpt[3] software offer audio descriptors and a similarity matrix. Audio descriptors have been used for several years to analyze large sound banks (music information retrieval, MIR) and to detect regularities or extract various acoustical characteristics. In musical analysis, they can be used to discover and analyze global morphologies, spectral breaks, regularities, transitions, or articulations of the sound material. The similarity matrix is well suited to visualizing musical form from a sonogram, as well as links between different levels of structure. The CataRT[4] software uses audio descriptors to create a playable map of sounds for musical improvisation. This map can focus on audio descriptors without any time representation, creating a sort of genetic map of any audio file.
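As a rough illustration of the representations mentioned above, here is a minimal sketch of a self-similarity matrix computed from a sonogram. It is written in Python with the librosa library, which is an assumption of this example rather than the engine behind Audiosculpt; the file name is a placeholder.

```python
import numpy as np
import librosa

# Load an audio file (the file name is a placeholder)
y, sr = librosa.load("piece.wav", sr=None, mono=True)

# Magnitude spectrogram: one column per analysis frame (a sonogram)
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=1024))

# Normalize each frame so the dot product becomes cosine similarity
frames = S / np.maximum(np.linalg.norm(S, axis=0, keepdims=True), 1e-12)

# Self-similarity matrix: entry (i, j) compares frame i with frame j
similarity = frames.T @ frames
```

Plotted as an image, the matrix shows repeated or related passages as bright off-diagonal stripes, which is what makes it useful for reading musical form.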

These examples of software from both sides demonstrate how strong the links between analysis and musical creation are in recent electroacoustic music research. This paper will present these technologies based on musical representations through different examples of creative processes and analyses of electroacoustic music.

[1] EAnalysis is free software for musical analysis developed by Pierre Couprie at De Montfort University (Leicester).

[2] TIAALS is free software for musical analysis developed by Michael Clarke, Frédéric Dufeu, and Peter Manning at the University of Huddersfield.

[3] Audiosculpt is analysis/synthesis software developed at Ircam (Paris).

[4] CataRT is free concatenative synthesis software developed by Diemo Schwarz at Ircam (Paris).

Seminar on Music Notation & Computation

Mon. June 30th, 2014 – 3:00pm–5:30pm
Centre for Digital Music - School of Electronic Engineering and Computer Science
Queen Mary University of London
Engineering Building – Room ENG 209
Free access

This seminar is part of the series of events organized by the AFIM work group on music notation issues. The group's objective is to assess the mutations of the music score induced by contemporary practices, and to report on the state of the art of software tools for music notation.

The seminar focuses on notation tools for music computation and performance.
Preliminary program

  • 3:00PM - Introduction: technology and research for music notation - AFIM work group Les nouveaux espaces de la notation musicale - Dominique Fober (GRAME, Lyon), Pierre Couprie (IReMus, Université de Paris-Sorbonne), Yann Geslin (INA / GRM, Paris), Jean Bresson (IRCAM UMR STMS, Paris)
  • 3:20PM - Sequencing and score following for interactive music - Thomas Coffy, Arshia Cont, Jean-Louis Giavitto (IRCAM/INRIA/CNRS – UMR STMS, Paris)
  • 3:40PM - Tempo pattern representation for expressive performances - Shengchen Li (Queen Mary University of London)
  • 4:10PM - Interactive XVII-XVIII century Spanish music notation - David Rizo (Department of Software and Computing Systems, University of Alicante)
  • 4:30PM - Can score design affect the readability of music? - Arild Stenberg (Faculty of Music, University of Cambridge)
  • 4:50PM - How can dynamic score markings relate to dynamic changes? - Katerina Kosta (Queen Mary University of London)

Organisation

  • Groupe de travail AFIM Les nouveaux espaces de la notation musicale - Jean Bresson, Pierre Couprie, Dominique Fober, Yann Geslin.
  • Richard Hoadley, Anglia Ruskin University, Cambridge.
  • Elaine Chew, Jordan Smith, Centre for Digital Music, Queen Mary University of London
More information at

Interactive Music Notation and Representation Workshop@NIME 2014

Mon. June 30th, 2014 – 9:30 to 13:00
Goldsmiths, University of London

Computer music tools for music notation have long been restricted to conventional approaches and dominated by a few systems, mainly oriented towards music engraving. Over the last decade, driven by artistic and technological evolutions, new tools and new forms of music representation have emerged. The recent advent of systems like Bach, MaxScore, or INScore (to cite just a few) clearly indicates that computer music notation tools have become mature enough to diverge from traditional approaches and to explore new domains and usages, such as interactive and live notation.
The aim of the workshop is to gather artists, researchers, and application developers to compare the views and needs inspired by contemporary practices, with a specific focus on interactive and live music, including representational forms emerging from live coding. Special consideration will be given to new instrumental forms emerging from the NIME community.
Registration
Workshop attendees must register for NIME for at least one of Tuesday, Wednesday, or Thursday (see here) but do not pay for the workshop day. To register for the workshop itself, send an email with your contact details to dfober@gmail.com.
Preliminary program

  • Animated Notation Dot Com: 2014 Report - Ryan Ross Smith
  • Timelines in Algorithmic Notation  - Thor Magnusson
  • Breaking the Notational Barrier: Liveness in Computer Music - Chris Nash
  • Quid Sit Musicus: Interacting with Calligraphic Gestures - J. Garcia, G. Nouno, P. Leroux
  • Non-Visual Scores for Ensemble Comprovisation - Sandeep Bhagwati
  • Interactive and real-time composition with soloists and music ensembles - Georg Hajdu
  • A javascript library for collaborative composition of lead sheets - D. Martín, F. Pachet
  • (Pre)compositional strategies and computer-generated notation in surface/tension (2012) for oboe and piano or ensemble - Sam Hayden
  • On- and off-screen: presentation and notation in interactive electronic music - Pete Furniss
  • John Cage Solo for Sliding Trombone, a Computer Assisted Performance approach - B. Sluchin, M. Malt
  • Deriving a Chart-Organised Notation from a Sonogram Based Exploration: TIAALS (Tools for Interactive Aural Analysis) - M. Clarke, F. Dufeu, P. Manning
Organisation

  • Groupe de travail AFIM Les nouveaux espaces de la notation musicale - Jean Bresson, Pierre Couprie, Dominique Fober, Yann Geslin.
  • Richard Hoadley, Anglia Ruskin University, Cambridge.
More information at

Representation: From Acoustics to Musical Analysis

Mikhail Malt and I will present our next paper at the EMS Conference in June 2014 in Berlin.

Abstract

This presentation is in two parts.

1) Sound Analysis and Representations

Musicologists use various types of sound representations to analyze electroacoustic music:

  • The waveform and the sonogram are a good basis for exploring and navigating one or more audio files. They also make it possible to estimate the temporal and spectral frame of sounds. Several programs, such as Audiosculpt, SPEAR, or TIAALS, offer filtering operations to isolate a sound or a group of sounds and study their properties.
  • The differential sonogram and layered sonograms are good tools for observing global parameters of sound or music. They also highlight spectral breaks, dynamic profiles, or spatial motion through the comparison of sound channels.
  • The similarity matrix reveals structural patterns, recurrences in several sound parameters, or musical characteristics. This representation complements the sonogram when exploring global form or complex micro-structures.
  • Audio descriptor extraction helps the listener identify global morphologies, transitions, and articulations. One of the main problems in using low-level audio descriptors is the redundancy of information among them: many descriptors are correlated and carry the same information. The first step in working with audio descriptors is therefore to reduce the dimensionality of the analytical data space and find which features are useful for describing the audio phenomena under study, as the sketch after this list illustrates. With this main goal, we would also like to present a tool intended for musicologists that will help with the analytical workflow and with the choice of appropriate audio descriptors.
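As a rough sketch of this first step, the example below extracts a few common low-level descriptors, checks their pairwise correlation to spot redundancy, and reduces the dimensionality with a simple PCA. The librosa library, the descriptor choice, and the file name are illustrative assumptions, not the tool announced above.

```python
import numpy as np
import librosa

y, sr = librosa.load("piece.wav", sr=None, mono=True)  # placeholder file name

# A few common low-level descriptors, one value per analysis frame
features = np.vstack([
    librosa.feature.spectral_centroid(y=y, sr=sr)[0],
    librosa.feature.spectral_bandwidth(y=y, sr=sr)[0],
    librosa.feature.spectral_flatness(y=y)[0],
    librosa.feature.rms(y=y)[0],
])

# Correlation matrix: pairs close to +/-1 are redundant descriptors
corr = np.corrcoef(features)

# Standardize before PCA (descriptors have different units: Hz, ratios, energy)
z = (features - features.mean(axis=1, keepdims=True)) / features.std(axis=1, keepdims=True)

# Simple PCA via SVD: keep the components that explain most of the variance
U, s, _ = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance ratio per component
reduced = U[:, :2].T @ z          # frames projected onto two components
```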

2) Moving to Analytical Representations

These different types of acoustic representations are the basic tools for exploring and extracting information to complete an aural analysis. On the other side, researchers create musical representations during the analytical process. From structural representations to paradigmatic charts or typological maps, the goal of musical representations is to reveal hidden relations between sounds (paradigmatic level), micro-structures (syntagmatic level), or external significations (referential level). Researchers also need representations to present their work; to do so, they create graphic representations associated with sound or video to provide more intuitive examples.

Relating the two types of representations, acoustic and musical, often consists of associating them through software panes or layers. Transferring information between them, or extracting information from an acoustic representation to create analytical graphics, are complex operations: one must read the acoustic representations, filter out non-significant parts, create pre-representations, and associate them with other information to build the analytical representations. Two main categories of software support these operations. The Acousmographe was developed to draw graphic representations guided by simple acoustic analysis. The second generation, represented by EAnalysis (De Montfort University) and TIAALS (University of Huddersfield), improves on the Acousmographe with analytical tools to explore the sound, work with other types of data, or focus on musical analysis.

This presentation will explore methods to improve these techniques and propose new research directions for the next generation of software. Musical examples are drawn from Entwurzelt for six voices and electronics by Hans Tutschku.

Selective bibliography

Pierre Couprie, “EAnalysis : aide à l’analyse de la musique électroacoustique”, Journées d’Informatique Musicale, 2012, p. 183–189.

Pierre Couprie, “Improvisation électroacoustique : analyse musicale, étude génétique et prospectives numériques”, Revue de musicologie, 98(1), 2012, p. 149–170.

Mikhail Malt, Emmanuel Jourdan, “Le ‘BSTD’ – Une représentation graphique de la brillance et de l’écart type spectral, comme possible représentation de l’évolution du timbre sonore”, proceedings of the international conference L’analyse musicale aujourd’hui. Crise ou (r)évolution ?, Strasbourg University/SFAM, 19–21 November 2009.

Mikhail Malt, Emmanuel Jourdan, “Real-Time Uses of Low Level Sound Descriptors as Event Detection Functions”, Journal of New Music Research, 40(3), 2011, p. 217–223.

Interactive Music Notation and Representation Workshop

June 30, 2014 – Goldsmiths, University of London, London, UK
www.nime2014.org

Call for participation

  • Submission Deadline: May 16, 2014 (No extensions possible!)
  • Notification: May 30, 2014
  • Workshop date: June 30, 2014

Computer music tools for music notation have long been restricted to conventional approaches and dominated by a few systems, mainly oriented towards music engraving. Over the last decade, driven by artistic and technological evolutions, new tools and new forms of music representation have emerged. The recent advent of systems like Bach, MaxScore, or INScore (to cite just a few) clearly indicates that computer music notation tools have become mature enough to diverge from traditional approaches and to explore new domains and usages, such as interactive and live notation.

You are invited to participate in this session about music notation, which will consist of both informal discussions and short presentations and demonstrations. If you would like to propose a presentation or a demonstration, send a one-page abstract to dfober@gmail.com before May 16. A 20-minute time slot will be allocated to each accepted proposal. See http://tiny.cc/u27hex for a proposal template.

The aim of the workshop is to gather artists, researchers, and application developers to compare the views and needs inspired by contemporary practices, with a specific focus on interactive and live music, including representational forms emerging from live coding. Special consideration will be given to new instrumental forms emerging from the NIME community.

The workshop will be held in two parts:

  • Overview of notation history, tools, uses, and issues: the main focus of this part will be building a map of the different approaches, in interaction with invited participants and the audience. Online mind-mapping tools will be made available to allow remote participation.
  • Practices and applications: this second part takes advantage of the NIME context to discover and question the notation issues related to new instruments, including live coding perspectives. We are interested in artistic experiences as well as technical approaches. Demonstrations are welcome, whether based on tools or new instruments, including electronics.

Organisers

  • Dominique Fober – Grame – Lyon
  • Jean Bresson – Ircam – Paris
  • Pierre Couprie – IReMus, Université Paris-Sorbonne – Paris
  • Yann Geslin – INA/GRM – Paris
  • Richard Hoadley – Anglia Ruskin University – Cambridge

Contact

Please send enquiries to dfober@gmail.com

L’œil écoute

Lecture given at the Saint-Merri church (Paris) as part of the festival Écouter / voir (Voir la musique, écouter les images)

April 1st, 2014, 8:00pm–8:30pm
Organized by Puce Muse

In concert, the eye has always been an ally of the ear: recognizing the instruments, the musicians' gestures, and the singers' facial expressions are all cues that help us better understand the music. For their part, musicologists also create different types of images to analyze and communicate their work: representations of musical structures, of the role of each instrument, or of the relation between a film and its music. In recent years, researchers have been using animated representations that make it possible to visualize complex aspects of music very precisely: the characterization of different sounds through their spectrum, recurrences in structures, or the movement of sounds in space. This lecture will present, in a simple manner, some examples of the musical representation techniques currently used in musicology to analyze electroacoustic music.

Information


Analysis of Electroacoustic Music: Contexts, Methods, Perspectives

Lectures given at the symposium Analysis of Electroacoustic Music: Contexts, Methods, Perspectives, organized by the electroacoustic music analysis group and the Centre for Research in New Music at the University of Huddersfield

Date: March 18th, 2014
Location: University of Huddersfield
Time: 9:30–12:30 and 14:00–18:00
Invited guest contributors: John Dack, Michael Clarke

Information

Schedule

9:45 – Introduction – Frédéric Dufeu
9:50 – From Graphic/Verbal Description to Interpretation – John Dack
10:40 – The Representation(s) of Electroacoustic Music: From acoustics to musical analysis – Pierre Couprie, Mikhail Malt
11:45 – Characterization of Individual Electric Sounds – Laurent Pottier

14:00 – 20 years of Interactive Music Software at Huddersfield – Michael Clarke
14:40 – Modelling of Digital Tools and Instruments for Composition and Performance – Alain Bonardi, Frédéric Dufeu
15:45 – Behaviour and Notation of Electroacoustic Music – Bruno Bossis
16:30 – Session: Software Developments for the Analysis of Electroacoustic Music – Alain Bonardi, Pierre Couprie, Frédéric Dufeu, Mikhail Malt

Cartography in the Creation and Analysis of Electroacoustic Music

On January 28th, I will give a talk at the Université d’Avignon as part of the CartoMuse project of the Structure Fédérative de Recherche Agor@ntic.

Information: http://repmus.ircam.fr/mamux/saisons/saison13-2013-2014/2014-01-28

Abstract of my presentation

Electroacoustic music has long used representation as a tool for creation and analysis. In the studio, composers often produce diagrams or graphics that allow them to visualize the work as a whole. In performance, spatialization generally relies on a diffusion score, a representation or a list of the work's key moments and the spatial figures associated with them. Live musicians use touch interfaces that represent a set of parameters as interactive zones in two or three dimensions. Likewise, musicologists have developed different types of musical representations that reduce the work, or a part of it, in order to manipulate musical parameters more easily or to highlight certain of its characteristics (structure, similarities, genetics, etc.).


Some of these representations can be likened to maps or cartographies. Indeed, they represent a simplification of a local or global musical process in order to facilitate its manipulation during a creative process, or its analysis during study. Some representations also contain genuine geographic maps, as in the case of soundwalks or sound installations. The artist and the musicologist choose the parameters to represent, the ways of representing them, and the environment that contains them according to the musical project or the analytical objective. One of the major characteristics of these representations is whether or not they take the temporal dimension into account. Time generally appears on one of the three axes of the representation: on x or y in so-called temporal representations, or on z in animated representations. This third dimension makes it possible to represent time locally, over zones of the map, or globally, by matching the temporal properties of each object. Some representations also use out-of-time graphics to emphasize similarities or particular groupings of parameters or sounds (a minimal sketch of such a map follows below). The representation of time thus turns out to be far more complex than it first appears.
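To make the idea of an out-of-time map concrete, here is a minimal sketch, assuming the Python libraries librosa and matplotlib (neither is part of the software discussed here): each grain of a sound file is placed on a plane by two descriptors, so that position encodes sonic character rather than time, in the spirit of CataRT's maps.

```python
import numpy as np
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("piece.wav", sr=None, mono=True)  # placeholder file name

# Cut the file into short grains (500 ms here, an arbitrary choice)
size = int(0.5 * sr)
grains = [y[i:i + size] for i in range(0, len(y) - size, size)]

# Describe each grain by two descriptors averaged over its frames:
# position on the map encodes sound character, not time
points = np.array([
    [librosa.feature.spectral_centroid(y=g, sr=sr).mean(),
     librosa.feature.rms(y=g).mean()]
    for g in grains
])

plt.scatter(points[:, 0], points[:, 1])
plt.xlabel("spectral centroid (Hz)")
plt.ylabel("RMS energy")
plt.title("Out-of-time map of sound grains")
plt.show()
```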

During the presentation, we will ground our reflection in concrete examples drawn from creative software, interactive interfaces, and analytical models, in order to bring out the concept of cartography in the creation and analysis of electroacoustic music. This reflection will allow us to underline the essential link between analysis and creation through the notion of representation.

Symposium / Workshop Music Notation #2

The AFIM work group Les nouveaux espaces de la notation musicale joins the MaMuX seminar at IRCAM for a study day devoted to notation in music theory and composition.

January 27, 2014 – 14:00–18:00
IRCAM, Salle Stravinsky, 1 place Igor Stravinsky, 75004 Paris

Program

14h00–14h40 – Marco Stroppa, Quelques paradigmes sur des notations de la musique électronique
14h40–15h20 – Julia Blondeau, Notation et espaces compositionnels : processus et enjeux musicaux
15h20–16h00 – Pavlos Antoniadis, Corporeal Navigation of Complex Notation: Embodied and extended cognition as a model for discourses and tools for complex piano music after 1950
16h00–16h20 – Break
16h20–17h00 – Mike Solomon, tools.py : une librairie de gravure musicale mobile
17h00–17h40 – Bruno Bossis, Les indications d’interactivité dans les pièces mixtes : une modification profonde des paradigmes de la notation
17h40–18h00 – Discussion

Information and detailed program: http://notation.afim-asso.org/doku.php/evenements/2014-01-27-etude-notation2

Institut de Recherche en Musicologie

Since January 1st, 2014, the research team I belong to, the Observatoire Musical Français (EA206), has merged with the Patrimoines et Langages Musicaux (PLM) team and the IRPMF to form the Institut de Recherche en Musicologie (IReMus). IReMus is a joint research unit (UMR 8223) associating the Université Paris-Sorbonne, the CNRS, the French Ministry of Culture, and the Bibliothèque nationale de France.