Mikhail Malt and I will present our next paper at the EMS Conference in Berlin in June 2014.
The presentation is in two parts.
1) Sound Analysis and Representations
Musicologists use various types of sound representations to analyze electroacoustic music:
- The waveform and the sonogram are a good basis for exploring and navigating one or more audio files. They also make it possible to estimate the temporal and spectral frames of sounds. Software such as AudioSculpt, SPEAR, or TIAALS offers filtering operations to isolate a sound or a group of sounds and study its properties.
- The differential sonogram, or layered sonograms, are good tools for observing global parameters of a sound or a piece. They also highlight spectral breaks, dynamic profiles, or spatial motion through the comparison of sound channels.
- The similarity matrix reveals structural patterns, recurrences in various sound parameters, or musical characteristics. This representation complements the sonogram when exploring global form or complex micro-structures.
- Audio descriptor extraction helps the listener identify global morphologies, transitions, and articulations. One of the main problems with low-level audio descriptors is the redundancy of information among them: many are correlated and carry the same information. The first step in working with audio descriptors is therefore to reduce the dimensionality of the analytical data space and to identify which features usefully describe the audio phenomena under study. With this goal in mind, we will also present a tool intended for musicologists that assists the analytical workflow and the choice of appropriate audio descriptors.
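To make the first representation concrete: a sonogram is essentially a short-time Fourier transform of the signal. The following is a minimal numpy sketch on a synthetic test tone, not the implementation used by any of the software named above:

```python
import numpy as np

def sonogram(signal, frame=256, hop=128):
    """Magnitude spectrogram (frames x bins) via a Hann-windowed short-time FFT."""
    window = np.hanning(frame)
    starts = range(0, len(signal) - frame + 1, hop)
    frames = np.array([signal[s:s + frame] * window for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1))

# Synthetic test signal: a 1 kHz sine at an 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
spec = sonogram(np.sin(2 * np.pi * 1000 * t))
peak_bin = spec.mean(axis=0).argmax()
print(peak_bin * sr / 256)  # 1000.0 (the energy sits in the 1 kHz bin)
```

A real analysis tool would add a logarithmic amplitude scale and colour mapping on top of this matrix.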
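A self-similarity matrix can be computed from any frame-wise feature sequence by comparing every frame with every other. A minimal sketch, assuming a precomputed (frames x descriptors) feature matrix (synthetic here) and cosine similarity as the comparison measure:

```python
import numpy as np

def similarity_matrix(features):
    """Cosine self-similarity matrix from a (frames x descriptors) array."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)  # guard against silent frames
    return unit @ unit.T  # entry (i, j) = cosine similarity of frames i and j

# Synthetic example: an A-B-A form appears as a block/checkerboard pattern.
rng = np.random.default_rng(0)
a = rng.normal(size=4)  # feature profile of section A
b = rng.normal(size=4)  # feature profile of section B
frames = np.vstack([a] * 10 + [b] * 10 + [a] * 10)
S = similarity_matrix(frames)
print(S.shape)             # (30, 30)
print(round(S[0, 29], 3))  # 1.0: frames from the two A sections match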
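The redundancy among low-level descriptors can be checked directly with a correlation matrix. The sketch below (synthetic data, hypothetical descriptor names; a simple greedy filter, not the method of the tool we will present) keeps only descriptors that are not strongly correlated with one already kept:

```python
import numpy as np

def drop_correlated(data, names, threshold=0.95):
    """Greedily drop descriptors whose |correlation| with a kept one exceeds threshold."""
    corr = np.corrcoef(data, rowvar=False)  # (descriptors x descriptors)
    kept = []
    for j in range(data.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in kept):
            kept.append(j)
    return [names[j] for j in kept]

# Synthetic frames: "slope" is made an almost linear function of "centroid",
# so the two descriptors are redundant and one of them is dropped.
rng = np.random.default_rng(1)
centroid = rng.normal(size=200)
slope = 2.0 * centroid + 0.01 * rng.normal(size=200)
flux = rng.normal(size=200)
data = np.column_stack([centroid, slope, flux])
print(drop_correlated(data, ["centroid", "slope", "flux"]))  # ['centroid', 'flux']
```

More refined approaches (principal component analysis, feature selection against a listening task) follow the same principle: shrink the descriptor space before interpretation.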
2) Moving to Analytical Representations
These different types of acoustic representation are the basic tools for exploring and extracting information to complement aural analysis. On the other side, researchers create musical representations during the analytical process. From structural representations to paradigmatic charts or typological maps, the goal of musical representations is to reveal hidden relations between sounds (the paradigmatic level), micro-structures (the syntagmatic level), or external significations (the referential level). Researchers also need representations to present their work; to do so, they create graphic representations associated with sound or video, which make for more intuitive examples.
Relating the two types of representation, acoustic and musical, often consists in associating them through panes or layers in software. Transferring information between them, or extracting information from an acoustic representation to create analytical graphics, is a complex operation: one must read the acoustic representation, filter out non-significant parts, create a pre-representation, and associate it with other information to build the analytical representation. Two main categories of software support these operations. The Acousmographe was developed to draw graphic representations guided by simple acoustic analysis. The second generation, represented by EAnalysis (De Montfort University) and TIAALS (University of Huddersfield), extends the Acousmographe's features with analytical tools to explore the sound, work with other types of data, or focus on musical analysis.
This presentation will explore methods to improve these techniques and propose new research directions for the next generation of software. The musical examples are drawn from Entwurzelt for six voices and electronics by Hans Tutschku.
Pierre Couprie, “EAnalysis : aide à l’analyse de la musique électroacoustique”, Journées d’Informatique Musicale, 2012, p. 183–189.
Pierre Couprie, “Improvisation électroacoustique : analyse musicale, étude génétique et prospectives numériques”, Revue de musicologie, 98(1), 2012, p. 149–170.
Mikhail Malt, Emmanuel Jourdan, “Le ‘BSTD’ – Une représentation graphique de la brillance et de l’écart type spectral, comme possible représentation de l’évolution du timbre sonore”, proceedings of the international conference L’analyse musicale aujourd’hui. Crise ou (r)évolution ?, Strasbourg University/SFAM, 19–21 November 2009.
Mikhail Malt, Emmanuel Jourdan, “Real-Time Uses of Low Level Sound Descriptors as Event Detection Functions”, Journal of New Music Research, 40(3), 2011, p. 217–223.