New Forms of Representation to Listen, Analyze, and Create Electroacoustic Music

During the next European Music Analysis Conference, I will present a paper on the links between listening, analysis, and creation. The paper is part of the session Listening to Electroacoustic Music through Analysis.

Abstract

Analysis uses representation to detect and demonstrate paradigmatic or syntagmatic links in musical material. For several years, the particular case of electroacoustic music has added new digital techniques for creating complex and interactive representations. These representations reveal potential links between analysing, modeling, and creating electroacoustic music. Indeed, the EAnalysis[1] and TIAALS[2] software explore interactive representations through typology charts, paradigmatic links, or other organizations of sound and music that break with the traditional time/frequency graphic representation. They also make it possible to associate various types of files, such as audio, video, or data from other software. These features allow the listener to navigate inside the work in different ways and are close to creative processes. Conversely, software developed for musical creation can be used in analysis. Recent versions of AudioSculpt[3] offer audio descriptors and a similarity matrix. Audio descriptors have been used for several years to analyze large banks of sounds (music information retrieval) and to detect regularities or extract various acoustical characteristics. In musical analysis, they can be used to discover and analyze global morphologies, spectral breaks, regularities, transitions, or articulations of sound material. The similarity matrix is well suited to visualizing musical form from a sonogram, as well as the links between different levels of structure. CataRT[4] uses audio descriptors to create a playable map of sounds for musical improvisation. This map can focus on audio descriptors without any time representation, creating a sort of genetic map of any audio file.

These examples of software from both sides demonstrate how strong the links between analysis and musical creation are in recent electroacoustic music research. This paper will present these technologies, all based on musical representation, through different examples of creative processes and analyses of electroacoustic music.

[1] EAnalysis is free software for musical analysis, developed by Pierre Couprie at De Montfort University (Leicester).

[2] TIAALS is free software for musical analysis, developed by Michael Clarke, Frédéric Dufeu, and Peter Manning at the University of Huddersfield.

[3] AudioSculpt is analysis/synthesis software developed at Ircam (Paris).

[4] CataRT is free concatenative synthesis software developed by Diemo Schwarz at Ircam (Paris).

EAnalysis: new features

EAnalysis is sound-based analysis software developed for the MTI Research Centre (De Montfort University, Leicester). The latest updates add some important new features. Download EAnalysis (free, for Mac OS X 10.6 or later).

Differential sonogram

The differential sonogram shows only spectral differences, so you can analyze morphologies, onsets, spectral motions, and so on.

[Figure: differential sonogram]

Reference: Jean-Marc Chouvel, Jean Bresson, and Carlos Agon, “L’analyse musicale différentielle : principes, représentation et application à l’analyse de l’interprétation”.
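For readers who want to experiment, here is a minimal sketch of the idea in Python, assuming librosa and matplotlib are available and a hypothetical example.wav. It plots the positive frame-to-frame difference of the STFT magnitude, one simple reading of a differential sonogram; it is not the exact algorithm of EAnalysis or of the paper cited above.

```python
import numpy as np
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("example.wav", sr=None)  # hypothetical input file
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Keep only positive frame-to-frame differences: stationary partials vanish,
# onsets and spectral motion remain visible.
dS = np.maximum(0.0, np.diff(S, axis=1))

plt.imshow(librosa.amplitude_to_db(dS, ref=np.max),
           origin="lower", aspect="auto", cmap="magma")
plt.xlabel("frame")
plt.ylabel("frequency bin")
plt.title("Differential sonogram (positive spectral flux per bin)")
plt.show()
```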

Representation of structure

This view draws a simple segmentation in several layers with various designs: linear segmentation, formal diagram, similarity matrix derived from the segmentation, or arc diagram from pattern recognition. Here is an example with the four designs (from bottom to top: linear, formal diagram, arc diagram, similarity matrix):

[Figure: the four structure designs]
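A minimal sketch of the last of these designs, assuming a hand-made segmentation rather than EAnalysis’s internal data model: two moments count as similar when they fall in sections carrying the same label, which yields the block pattern of a similarity matrix derived from a segmentation.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical segmentation: (label, duration in seconds) pairs.
segments = [("A", 30), ("B", 20), ("A", 25), ("C", 15)]

# Expand to one label per second, then compare every pair of seconds.
timeline = np.array([lab for lab, dur in segments for _ in range(dur)])
matrix = timeline[:, None] == timeline[None, :]

plt.imshow(matrix, origin="lower", cmap="gray_r")
plt.xlabel("time (s)")
plt.ylabel("time (s)")
plt.title("Similarity matrix derived from segmentation labels")
plt.show()
```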

Data visualization: cloud of points

When you import data from another piece of software (such as audio descriptors from Sonic Visualiser), you can draw a diagram on two axes and explore it by selecting an area:

[Figure: cloud of points]

Use CTRL+click to play extracts linked to each point. You can also draw more than one diagram:

[Figure: multiple diagrams]

Another example:

[Figure: another cloud of points]

These representations were inspired by the amazing CataRT application developed by Diemo Schwarz for musical creation.
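The following sketch approximates this kind of view, assuming librosa and a hypothetical example.wav: each analysis frame becomes a point placed by two descriptors (spectral centroid and RMS energy), with time pushed into the colour scale instead of an axis.

```python
import numpy as np
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("example.wav", sr=None)  # hypothetical input file
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
rms = librosa.feature.rms(y=y)[0]

# Colour encodes the frame index, so time stays visible without a time axis.
sc = plt.scatter(centroid, rms, c=np.arange(len(rms)), cmap="viridis", s=10)
plt.colorbar(sc, label="frame index (time)")
plt.xlabel("spectral centroid (Hz)")
plt.ylabel("RMS energy")
plt.title("Cloud of points: two descriptors, time as colour")
plt.show()
```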

Data visualization: complex line

This second type of view draws three data sets on two curves (as value, size, and color):

[Figure: complex line]

Reference: Mikhail Malt and Emmanuel Jourdan, “Le ‘BSTD’ – Une représentation graphique de la brillance et de l’écart type spectral, comme possible représentation de l’évolution du timbre sonore”.
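Here is a rough approximation of this view, assuming librosa’s spectral centroid and bandwidth as stand-ins for the brightness and spectral standard deviation of the BSTD; the colour of the curve carries the third datum.

```python
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("example.wav", sr=None)  # hypothetical input file
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr)[0]
times = librosa.times_like(centroid, sr=sr)

# One scatter call draws the "complex line": y carries the value, colour a
# second data stream; point size could carry a third one.
sc = plt.scatter(times, centroid, c=bandwidth, s=8, cmap="plasma")
plt.colorbar(sc, label="spectral bandwidth (Hz)")
plt.xlabel("time (s)")
plt.ylabel("spectral centroid (Hz)")
plt.title("BSTD-style complex line")
plt.show()
```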

Data visualization: shaded colors

There are also two new types of data visualization: a horizontal graph (like a waveform) and a gradient, as in this example (top):

[Figure: gradient view]
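A minimal sketch of the gradient design, again assuming librosa and a hypothetical example.wav: a single descriptor is rendered as a one-row image, so its values read as shading along the timeline rather than as a curve.

```python
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("example.wav", sr=None)  # hypothetical input file
rms = librosa.feature.rms(y=y)[0]

plt.figure(figsize=(8, 1.5))
plt.imshow(rms[None, :], aspect="auto", cmap="inferno")  # 1 x N image: a strip
plt.yticks([])
plt.xlabel("frame")
plt.title("Descriptor rendered as a shaded-colour gradient")
plt.show()
```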

Data hierarchical correlation

I added another type of view to visualize the hierarchical correlation between two sets of data:

[Figure: hierarchical correlation view]

Reference: Hierarchical Correlation Plots
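Since the exact algorithm is not detailed here, the sketch below shows one plausible reading of a hierarchical correlation plot: the Pearson correlation between two (here synthetic) data sets, computed over windows that halve at each level and stacked into an image. The windowing scheme is my assumption, not necessarily what EAnalysis implements.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 512
a = np.cumsum(rng.standard_normal(n))   # two hypothetical descriptor curves
b = a + 2.0 * rng.standard_normal(n)

levels = []
for k in range(5):                      # 1, 2, 4, 8, 16 windows per level
    parts = 2 ** k
    row = [np.corrcoef(a[i * n // parts:(i + 1) * n // parts],
                       b[i * n // parts:(i + 1) * n // parts])[0, 1]
           for i in range(parts)]
    # Repeat each value so every level spans the full timeline width.
    levels.append(np.repeat(row, n // parts))

plt.imshow(np.array(levels), aspect="auto", cmap="coolwarm", vmin=-1, vmax=1)
plt.colorbar(label="Pearson correlation")
plt.xlabel("time (samples)")
plt.ylabel("hierarchy level (coarse to fine)")
plt.title("Hierarchical correlation between two data sets")
plt.show()
```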

Analytical events

Lasse Thoresen's system of representation is being implemented. The first step, form-building and time-fields, is complete. Typomorphology will be available in the fall.

[Figure: Thoresen analytical events]

Seminar on Music Notation & Computation

Mon. June 30th, 2014 - 3:00pm – 5:30pm
Centre for Digital Music - School of Electronic Engineering and Computer Science
Queen Mary University of London
Engineering Building – Room ENG 209
Free access

This seminar is part of a series of events organized by the AFIM work group on music notation issues. The group's objective is to assess the mutations of the music score induced by contemporary practices and to report on the state of the art of software tools for music notation.

The seminar is oriented towards notation tools for music computation and performance.
Preliminary program

  • 3:00PM - Introduction: technology and research for music notation - AFIM work group Les nouveaux espaces de la notation musicale - Dominique Fober (GRAME, Lyon), Pierre Couprie (IReMus, Université de Paris-Sorbonne), Yann Geslin (INA / GRM, Paris), Jean Bresson (IRCAM UMR STMS, Paris)
  • 3:20PM - Sequencing and score following for interactive music - Thomas Coffy, Arshia Cont, Jean-Louis Giavitto (IRCAM/INRIA/CNRS – UMR STMS, Paris)
  • 3:40PM - Tempo pattern representation for expressive performances - Shengchen Li (Queen Mary University of London)
  • 4:10PM - Interactive XVII-XVIII century Spanish music notation - David Rizo (Department of Software and Computing Systems, University of Alicante)
  • 4:30PM - Can score design affect the readability of music? - Arild Stenberg (Faculty of Music, University of Cambridge)
  • 4:50PM - How can dynamic score markings relate to dynamic changes? - Katerina Kosta (Queen Mary University of London)

Organisation

  • Groupe de travail AFIM Les nouveaux espaces de la notation musicale - Jean Bresson, Pierre Couprie, Dominique Fober, Yann Geslin.
  • Richard Hoadley, Anglia Ruskin University, Cambridge.
  • Elaine Chew, Jordan Smith, Centre for Digital Music, Queen Mary University of London
More information at

Interactive Music Notation and Representation Workshop@NIME 2014

Mon. June 30th, 2014 – 9:30 to 13:00
Goldsmiths, University of London

Computer music tools for music notation have long been restricted to conventional approaches and dominated by a few systems, mainly oriented towards music engraving. During the last decade and driven by artistic and technological evolutions, new tools and new forms of music representation have emerged. The recent advent of systems like Bach, MaxScore or INScore (to cite just a few), clearly indicates that computer music notation tools have become mature enough to diverge from traditional approaches and to explore new domains and usages such as interactive and live notation.
The aim of the workshop is to gather artists, researchers and application developers, to compare the views and the needs inspired by contemporary practices, with a specific focus on interactive and live music, including representational forms emerging from live coding. Special consideration will be given to new instrumental forms emerging from the NIME community.
Registration
Workshop attendees must register for NIME for at least one of Tuesday, Wednesday, or Thursday (see here) but do not pay for the workshop day. To be registered for the workshop itself, send an email to dfober@gmail.com with your contact details.
Preliminary program

  • Animated Notation Dot Com: 2014 Report - Ryan Ross Smith
  • Timelines in Algorithmic Notation  - Thor Magnusson
  • Breaking the Notational Barrier: Liveness in Computer Music - Chris Nash
  • Quid Sit Musicus: Interacting with Calligraphic Gestures - J. Garcia, G. Nouno, P. Leroux
  • Non-Visual Scores for Ensemble Comprovisation - Sandeep Bhagwati
  • Interactive and real-time composition with soloists and music ensembles - Georg Hajdu
  • A javascript library for collaborative composition of lead sheets - D. Martín, F. Pachet
  • (Pre)compositional strategies and computer-generated notation in surface/tension (2012) for oboe and piano or ensemble - Sam Hayden
  • On- and off-screen: presentation and notation in interactive electronic music - Pete Furniss
  • John Cage Solo for Sliding Trombone, a Computer Assisted Performance approach - B. Sluchin, M. Malt
  • Deriving a Chart-Organised Notation from a Sonogram Based Exploration: TIAALS (Tools for Interactive Aural Analysis) - M. Clarke, F. Dufeu, P. Manning
Organisation

  • Groupe de travail AFIM Les nouveaux espaces de la notation musicale - Jean Bresson, Pierre Couprie, Dominique Fober, Yann Geslin.
  • Richard Hoadley, Anglia Ruskin University, Cambridge.
More information at

Representation: From Acoustics to Musical Analysis

Mikhail Malt and I will present our next paper at the EMS Conference in June 2014 in Berlin.

Abstract

This presentation is in two parts.

1) Sound Analysis and Representations

Musicologists use various types of sound representations to analyze electroacoustic music:

  • The waveform and the sonogram are a good basis for exploring and navigating one or more audio files. They also allow the estimation of the temporal and spectral frame of sounds. Software such as AudioSculpt, SPEAR, or TIAALS offers filtering operations to isolate a sound or a group of sounds in order to study their properties.
  • Differential sonograms or layers of sonograms are good tools for observing the global parameters of sound or music. They also highlight spectral breaks, dynamic profiles, or spatial motion through the comparison of sound channels.
  • The similarity matrix reveals structural patterns, recurrences in several sound parameters, or musical characteristics. This representation complements the sonogram when exploring global form or complex micro-structures.
  • Audio descriptor extraction helps the listener to identify global morphologies, transitions, and articulations. One of the main problems in using low-level audio descriptors is the redundancy of information among them: many are correlated and carry the same information. The first step in working with audio descriptors is therefore to reduce the dimensionality of the analytical data space and to find which features are useful for describing the audio phenomena under study (see the sketch after this list). With this main goal, we would also like to present a tool intended for musicologists that will help with the analytical workflow and with the choice of appropriate audio descriptors.
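As announced in the last item above, here is a minimal sketch of that first step, assuming a frames-by-descriptors matrix such as one exported from Sonic Visualiser (synthetic data stands in for real descriptors): a correlation matrix exposes the redundancy, and a small PCA via SVD replaces the correlated descriptors with a few components.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = 1000
base = rng.standard_normal(frames)

# Hypothetical descriptors; the first three are deliberately correlated.
X = np.column_stack([
    base,
    0.9 * base + 0.1 * rng.standard_normal(frames),  # near-duplicate
    base + 0.3 * rng.standard_normal(frames),
    rng.standard_normal(frames),                     # independent
])

# Pairwise correlations: redundancy shows up as off-diagonal values near 1.
print(np.round(np.corrcoef(X, rowvar=False), 2))

# PCA via SVD: a few components can replace the correlated descriptors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
print("variance explained:", np.round(s**2 / np.sum(s**2), 3))
reduced = Xc @ Vt[:2].T   # frames x 2 space for further analysis or plotting
```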

2) Moving to Analytical Representations

These different types of acoustic representations are the basic tools used to explore and extract information to complete an aural analysis. On the other side, researchers create musical representations during the analytical process. From structural representations to paradigmatic charts or typological maps, the goal of musical representation is to reveal hidden relations between sounds (the paradigmatic level), micro-structures (the syntagmatic level), or external significations (the referential level). Researchers also need representations to present their work; to do so, they create graphic representations associated with sound or video to provide more intuitive examples.

Relating the two types of representation, acoustic and musical, often consists of associating them through software panes or layers. Transferring information between them, or extracting information from an acoustic representation to create analytical graphics, is a complex operation: one must read the acoustic representations, filter out non-significant parts, create pre-representations, and associate them with other information to create analytical representations. Two main categories of software perform these operations. The Acousmographe was developed to draw graphic representations guided by simple acoustic analysis. The second generation, represented by EAnalysis (De Montfort University) and TIAALS (University of Huddersfield), improves on the Acousmographe's features with analytical tools to explore the sound, work with other types of data, or focus on musical analysis.

This presentation will explore methods to improve these techniques and will propose some new research directions for the next generation of software. Musical examples are taken from Entwurzelt by Hans Tutschku, for six voices and electronics.

Selective bibliography

Pierre Couprie, “EAnalysis : aide à l’analyse de la musique électroacoustique”, Journées d’Informatique Musicale, 2012, p. 183–189.

Pierre Couprie, “Improvisation électroacoustique: analyse musicale, étude génétique et prospectives numériques”, Revue de musicologie, 98(1), 2012, p. 149–170.

Mikhail Malt, Emmanuel Jourdan, “Le ‘BSTD’ – Une représentation graphique de la brillance et de l’écart type spectral, comme possible représentation de l’évolution du timbre sonore”, proceedings of the international conference L’analyse musicale aujourd’hui. Crise ou (r)évolution ?, Strasbourg University/SFAM, 19–21 November 2009.

Mikhail Malt, Emmanuel Jourdan, “Real-Time Uses of Low Level Sound Descriptors as Event Detection Functions”, Journal of New Music Research, 40(3), 2011, p. 217-223.

Interactive Music Notation and Representation Workshop

June 30, 2014 – Goldsmiths, University of London, London, UK
www.nime2014.org

Call for participation

  • Submission Deadline: May 16, 2014 (No extensions possible!)
  • Notification: May 30, 2014
  • Workshop date: June 30, 2014

Computer music tools for music notation have long been restricted to conventional approaches and dominated by a few systems, mainly oriented towards music engraving. During the last decade and driven by artistic and technological evolutions, new tools and new forms of music representation have emerged. The recent advent of systems like Bach, MaxScore or INScore (to cite just a few), clearly indicates that computer music notation tools have become mature enough to diverge from traditional approaches and to explore new domains and usages such as interactive and live notation.

You are invited to participate in this session about music notation, which will consist of both informal discussions and short presentations and demonstrations. If you would like to propose a presentation or a demonstration, send a one-page abstract before May 16 to dfober@gmail.com. A 20-minute time slot will be allocated to each accepted proposal. See http://tiny.cc/u27hex for a proposal template.

The aim of the workshop is to gather artists, researchers and application developers, to compare the views and the needs inspired by contemporary practices, with a specific focus on interactive and live music, including representational forms emerging from live coding. Special consideration will be given to new instrumental forms emerging from the NIME community.

The workshop will be held in two parts:

  • Overview of notation history, tools, uses, and problems: the main focus of this part will be building a map of the different approaches, in interaction with invited participants and with the audience. Online mind-mapping tools will be made available to allow remote participation.
  • Practices and applications: this second part is intended to take advantage of the NIME context to engage with, discover, and question the notation issues related to new instruments, including live coding perspectives. For this part, we are interested in artistic experiences as well as technical approaches. Demonstrations are welcome, whether based on tools or on new instruments, including electronics.


Organisers

  • Dominique Fober – Grame – Lyon
  • Jean Bresson – Ircam – Paris
  • Pierre Couprie – IReMus, Université Paris-Sorbonne – Paris
  • Yann Geslin – INA/GRM – Paris
  • Richard Hoadley – Anglia Ruskin University – Cambridge

Contact

Please send enquiries to dfober@gmail.com

Archipels, an improvised electroacoustic music creation

Friday, May 2 at 8:30pm, Église Saint-Pierre de Plaisir

Les Phonogénistes have been practicing electroacoustic improvisation for some fifteen years, often in association with other forms of artistic expression that share the same freedom and the same pleasure in exploration.

Archipels is an improvised creation that joins them with two other musicians and an electroacoustician. The most advanced audiovisual technologies will be confronted with one of the oldest musical instruments, which can be considered one of the first acoustic “synthesizers”, the organ, and with its distant, portable, and popular cousin, the accordion.

The sounds of the organ and the accordion will resonate with their spatialized electroacoustic doubles. The musicians, spread through the concert space like islands in the ocean, will generate movements of sound around the audience.

Les Phonogénistes:
Laurence Bouckaert: Karlax
Pierre Couprie: augmented flute
Francis Larvor: laptop, controller
Jean-Marc Chouvel: organ
Olivier Innocenti: bayan accordion and Eigenharp

Performance with ONE

I will be performing with ONE on April 1, 2, and 3 at the église Saint-Merri (Paris).

Information


ONE was born of a meeting that owes nothing to chance: that of musicians from very different backgrounds who share the same interest in electroacoustic music and in musical gesture.

Each musician has built their own “instrumentarium”, composed mostly of repurposed video game interfaces (joysticks, gamepads, graphics and touch tablets). ONE is both a musical and a retinal performance. The panoramic diffusion of sound objects is punctuated by visual effects. Graphic forms, generated in real time by the instruments, amplify the listening. Improvised or written, the music of ONE is always played live. Here one plays with sound, but also for sound. The music of ONE sculpts contrasting soundscapes traversed by instrumental timbres and by electronic textures that sometimes evoke the sounds of nature.

L’œil écoute (The Eye Listens)

A lecture given at the église Saint-Merri (Paris) as part of the festival Écouter/Voir: Voir la musique, écouter les images

April 1, 2014, 8:00pm – 8:30pm
Organized by Puce Muse

In concert, the eye has always been an ally of the ear: recognizing the instruments, the gestures of the musicians, and the expressions on the singers' faces are all clues that help us understand the music better. For their part, musicologists also create different types of images to analyze and communicate their work: representations of musical structures, of the role of each instrument, or of the relation between a film and its music. In recent years, researchers have been using animated representations that make it possible to visualize complex aspects of music very precisely: the characterization of different sounds through their spectra, recurrences in structures, or the movements of sounds in space. This lecture will present, in simple terms, some examples of the musical representation techniques currently used in musicology to analyze electroacoustic music.

Information
