    TENOR Zurich

    Saturday, 6 April

    • Information Desk
    • Installations
    • 9:30-10:00h
    • 10:00-11:00h
    • 11:00-12:20h
    • 12:20-13:30h
    • 13:30-15:00h
    • 15:00-15:30h
    • 15:30-16:30h
    • 16:30-17:30h
    • 18:30h
    • 20:00h

    Information Desk

    • Opening Hours Information Desk – Main Entrance Hall, Hörsaal 1 (3.K01, Ebene 3)

      Thursday, 4 April: 8:00h – 11:00h
      Friday, 5 April: 9:00h – 10:00h
      Saturday, 6 April: 9:00h – 10:00h

      For registration outside of the opening hours of the information desk, please contact Leandra Nussbaumer at the conference office & lounge at Kaskadenfoyer (5.K04, Ebene 5).

    Installations

    • Installations running throughout the conference – various locations

      Running throughout the conference – Kunstraum (5.K12, Ebene 5)
      Neutral Friend, Unknown Enemy
      Installation by Juan Manuel Escalante

      The Generation of Maps
      Installation by Juan Manuel Escalante

      Running throughout the conference – Aktionsraum (5.K06, Ebene 5)
      Tres avatares del silencio: Antígona en su lógica de ensueño (2024)
      Installation by Mauricio A. Meza Ruiz
      Performance: Saturday 6 April, 18:30h

      Running throughout the conference – ICST-Kompositionsstudio (3.D02, Ebene 3)
      Study for a Cosmic City
      Installation by Julian Scordato

    9:30-10:00h

    Coffee – Hörsaal 1 (3.K01, Ebene 3)

      10:00-11:00h

      • Keynote: Philippe Esling – Hörsaal 1 (3.K01, Ebene 3)

        Philippe Esling
        Associate Professor, Artificial Creative Intelligence and Data Science (ACIDS) group, Sound Analysis / Synthesis team, Institute of Research and Coordination Acoustics / Music (IRCAM), CNRS UMR 9912 STMS, Sorbonne Université, Paris; Invited Professor, The University of Tokyo, Japan.

        AI in 64Kb: can we do more with less?
        In recent years, deep generative models have come to play a rapidly growing part in our everyday lives. Although it seems there is nothing out of the reach of AI systems, the often overlooked downside of deep models is their massive complexity and tremendous computation cost. The overall trend of scaling laws seems to take us further down the path of ever larger models. However, we believe that an alternative and more fruitful path exists in going exactly the opposite way, towards ultra-light models. This aspect is especially critical in audio applications, which heavily rely on specialized embedded hardware with real-time constraints. Hence, the lack of work on efficient lightweight deep models is a significant limitation for the real-life use of deep models on resource-constrained hardware. We show how we can attain these objectives through different recent theories (the lottery ticket hypothesis (Frankle and Carbin, 2018), mode connectivity (Garipov et al., 2018) and information bottleneck theory). We show how all of these theoretical hypotheses permeate the research project led by the ACIDS group at IRCAM, which aims to model musical creativity by extending probabilistic learning approaches to the use of multivariate and multimodal time series. Our main object of study lies in the properties and perception of musical synthesis and artificial creativity. In this context, we experiment with deep AI models applied to creative materials, aiming to develop artificial creative intelligence. Over the past years, we have developed several objects that embed this research directly as real-time objects usable in MaxMSP. Our team has produced many prototypes of innovative instruments and musical pieces in collaboration with renowned composers. Hence, we will demonstrate how our research has led to lightweight and embedded deep audio models, namely
        1/ Neurorack // the first deep AI-based eurorack synthesizer
        2/ FlowSynth // a learning-based device that lets you travel the auditory spaces of synthesizers simply by moving your hand
        3/ RAVE in Raspberry Pi // 48kHz real-time embedded deep synthesis
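
        As a rough illustration of the pruning ideas behind such ultra-light models (a minimal sketch only, not the ACIDS/IRCAM code; the toy layer sizes and the 90% sparsity level are assumptions), magnitude pruning in PyTorch can look like this:

        # Hypothetical sketch: magnitude pruning of a tiny stand-in network.
        # Not the keynote's models; it only illustrates removing low-magnitude weights.
        import torch.nn as nn
        import torch.nn.utils.prune as prune

        model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))

        for module in model:
            if isinstance(module, nn.Linear):
                prune.l1_unstructured(module, name="weight", amount=0.9)  # drop the 90% smallest weights
                prune.remove(module, "weight")                            # make the sparsity permanent

        total = sum(p.numel() for p in model.parameters())
        nonzero = sum((p != 0).sum().item() for p in model.parameters())
        print(f"{nonzero}/{total} parameters remain non-zero")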

      11:00-12:20h

      • Session 4 – Hörsaal 1 (3.K01, Ebene 3)

        Paper Session Chair: Craig Vear

        11:00h
        Tokenization of MIDI Sequences for Transcription
        Authors: Florent Jacquemard, Masahiko Sakai, Yosuke Amagasu

        There generally exists no simple one-to-one relationship between the events of a MIDI sequence, such as note-on and note-off messages, and the corresponding music notation elements, such as notes, rests, chords, and ornaments. We propose a method for building a formal correspondence between them through a notion of tokens in an input MIDI event sequence and an effective tokenization approach based on a hierarchical representation of music scores. Our tokenization procedure is integrated with an algorithm for music transcription based on parsing with respect to a weighted tree grammar. Its effectiveness is shown through examples.
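
        To make the notion of tokens more concrete, here is a minimal, hypothetical sketch (not the authors' grammar-based procedure): pairing raw note-on/note-off MIDI events into note tokens with onset and duration, the kind of intermediate representation a transcription parser could consume.

        # Hypothetical sketch: group note-on/note-off events into note tokens.
        def tokenize(events):
            """events: list of (time, kind, pitch) tuples with kind in {"on", "off"}."""
            open_notes = {}  # pitch -> onset time of the currently sounding note
            tokens = []
            for time, kind, pitch in sorted(events):
                if kind == "on":
                    open_notes[pitch] = time
                elif kind == "off" and pitch in open_notes:
                    onset = open_notes.pop(pitch)
                    tokens.append({"pitch": pitch, "onset": onset, "duration": time - onset})
            return tokens

        print(tokenize([(0.0, "on", 60), (0.5, "off", 60), (0.5, "on", 64), (1.0, "off", 64)]))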

        11:20h
        Engraving Oriented Joint Estimation of Pitch Spelling and Local and Global Keys
        Authors: Augustin Bouquillard, Florent Jacquemard

        We revisit the problems of pitch spelling and tonality guessing with a new algorithm for their joint estimation from a MIDI file that includes information about the measure boundaries. Our algorithm identifies not only a global key but also local keys all along the analyzed piece. It uses dynamic programming techniques to search for an optimal spelling in terms, roughly, of the number of accidental symbols that would be displayed in the engraved score. The evaluation of this number is coupled with an estimation of the global key and some local keys, one for each measure. Each of the three types of information is used for the estimation of the others, in a multi-step procedure. An evaluation conducted on a monophonic and a piano dataset, comprising 216 464 notes in total, shows a high degree of accuracy, both for pitch spelling (99.5% on average on the Bach corpus and 98.2% on the whole dataset) and global key signature estimation (93.0% on average, 95.58% on the piano dataset). Designed originally as a backend tool in a music transcription framework, this method should also be useful in other tasks related to music notation processing.
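
        As a loose, hypothetical illustration of the underlying idea (not the paper's joint estimation algorithm, and the key-signature encoding is an assumption), a single pitch can be spelled by minimising the number of accidentals that would be printed under a given key signature:

        # Hypothetical sketch: pick the spelling of a pitch class that needs the fewest
        # printed accidentals, given a key signature as a mapping letter -> alteration.
        CANDIDATES = {
            0: [("C", 0), ("B", 1)],   1: [("C", 1), ("D", -1)],  2: [("D", 0)],
            3: [("D", 1), ("E", -1)],  4: [("E", 0), ("F", -1)],  5: [("F", 0), ("E", 1)],
            6: [("F", 1), ("G", -1)],  7: [("G", 0)],             8: [("G", 1), ("A", -1)],
            9: [("A", 0)],             10: [("A", 1), ("B", -1)], 11: [("B", 0), ("C", -1)],
        }

        def spell(midi_pitch, key_signature):
            def printed_accidentals(candidate):
                letter, alteration = candidate
                # An accidental is printed only when the note deviates from the key signature.
                return 0 if key_signature.get(letter, 0) == alteration else 1
            return min(CANDIDATES[midi_pitch % 12], key=printed_accidentals)

        print(spell(66, {"F": 1}))  # under a G major signature, MIDI 66 is spelled ("F", 1), i.e. F sharp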

        11:40h
        Morton Feldman's “Projections One to Five” – Exploring a Classical Avant-Garde Notation by Mathematical Remodelling
        Authors: Markus Lepper, Baltasar Trancón y Widemann

        The compositions Projection 1 to Projection 5 by Morton Feldman are an important milestone in the application of graphical notation. The meta-language tscore allows easy construction of a computer model of the original scores. On this model, automated performance, graphical rendering, and different analyses can be applied. The practical implementation work brings up the peculiarities of the original notational meta-model and scores, which, without this effort, are easily overlooked.

        12:00h
        DJster Revisited – A Probabilistic Music Generator in the Age of Machine Learning
        Author: Georg Hajdu

        DJster is a probabilistic generator for musical textures based on Clarence Barlow's legacy program Autobusk, further developed by Hajdu since 2008. The 2023 revision for Max and Ableton Live includes new features that improve the versatility of the application and enable data exchange between the synchronous and asynchronous incarnations. The synchronous incarnation of DJster can be used to preview a texture to be further developed as a sketch in the asynchronous one. DJster allows the real-time addition and modification of tonal and metric profiles, departing from Barlow's original fixed-input paradigm. This motivated an exploration of metric interpolations by means of self-organizing maps and an extension of Jean-Claude Risset's illusion of an ever-accelerating rhythm. Furthermore, the implementation of a novel melodic cohesion parameter allows transitions from a sequence of events to a probabilistic process, the latter being the original modus operandi. Finally, DJster, as a style-agnostic music generator, can be embedded in machine-learning contexts to make user interaction a richer and more intuitive experience.
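
        As a hypothetical, stripped-down sketch of the probabilistic mechanism such generators rest on (not DJster's actual engine; the profile values are made up), pitches can be drawn from a tonal profile by weighted random choice:

        # Hypothetical sketch: generate a pitch stream from a tonal profile.
        import random

        # Toy profile: relative weights for pitch classes of C major, biased towards tonic, third and fifth.
        profile = {0: 8, 2: 3, 4: 6, 5: 2, 7: 7, 9: 3, 11: 1}

        def generate(profile, length, octave=5):
            pitch_classes = list(profile.keys())
            weights = list(profile.values())
            return [12 * octave + random.choices(pitch_classes, weights=weights)[0]
                    for _ in range(length)]

        print(generate(profile, 8))  # prints eight MIDI pitches drawn from the profile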

      12:20-13:30h

      Lunch – Konzertfoyer (7.K500, Ebene 7)

      Lunch is provided by a caterer.

        13:30-15:00h

        • Session 5 & TENOR 2025 – Hörsaal 1 (3.K01, Ebene 3)

          Paper Session Chair: Cat Hope

          13:30h
          Sound Synthesis Notation Applied to Performance: Two Case Studies
          Authors: Pierre-Luc Lecours, Nicolas Bernier

          This article investigates the specificities of music writing and interpretation on the modular synthesizer. Based on two musical notation experiments, it will discuss the issues first from the point of view of the composer and then from that of the performer. This article will begin by presenting the notation approach in the composition of Pierre-Luc Lecours's piece Poussière de soleil (2022) performed by Ensemble d'oscillateurs. Then it will analyze the stages involved in creating an interpretation of Nicolas Bernier's composition Transfer for 10 monophonic synthesizers (2022). These two experiences revealed issues and strategies used when writing and interpreting a piece with modular synthesizers, pointing toward a notation framework for this instrument.

          13:50h
          EMA: An Analytical Framework for the Identification of Game Elements in Gamified Screen-Score Works
          Authors: Takuto Fukuda, Paul Turowski

          Gamified compositions – music involving game elements (e.g., avatars and life points) – have been booming in the field of interactive computer music. However, only a few studies have addressed which game elements engender the sense of playfulness in performer-computer interactions in music. This gap may exist because existing analytical frameworks primarily focus on identifying game elements in consumer products rather than musical compositions. To address the lack of analytical frameworks for gamified musical works, this paper proposes the Expanded Motivational Affordances (EMA) model as an analytical framework for identifying game elements in gamified screen-score works. Through an analysis of Super Colliders by T. Fukuda and SQ2 by P. Turowski as case studies, this paper provides a comprehensive list of game elements and discusses what motivational needs for performers these elements satisfy. The EMA model with the resulting list of game elements aims to assist composers in gaining a better understanding of performer-computer interactions in gamified screen-score works. It enables composers to analyze and design such interactions more effectively in their future compositions, enhancing the overall experience for performers.

          14:10h
          TABstaff+: A Hybrid Music Notation System for Grid-Based Tangible User Interfaces (TUIs) and Graphical User Interfaces (GUIs)
          Authors: Lawrence Wilde, Charles White

          TABstaff+ is a hybrid music notation developed for grid-based user interfaces. The system builds on notational elements and conventions of tablature, standard five-line staff notation, and chord diagrams. TABstaff+ strives to facilitate teaching and learning, composition and production, and performance using grid-based tangible user interfaces (TUIs) and graphical user interfaces (GUIs). For usability testing, the study involved seven participants, music production and composition students (ages 13 to 19) with prior musical experience. The paper considers the Ableton Push instrument to illustrate the application and adaptability of the TAB+, Staff+, and Charts+ notation systems. These notation systems aim to further the development of postdigital practices by leveraging Human-Computer Interaction (HCI) and pre-digital practices of reading, playing, and teaching music using instruments and notation. TABstaff+ aims to be a transferable music notation system that allows educators and practicing musicians to utilize the pedagogical and creative capabilities of musical grid interfaces.

          14:30h
          Announcement TENOR 2025

        15:00-15:30h

        Coffee Break

          15:30-16:30h

          • Workshop 5 – Konferenzraum (5.K03, Ebene 5)

            Quo and Beyond: Live-Electronics, Realtime and Non-Realtime Work With Common Lisp and SVG
            Author: Orm Finnendahl

            The workshop will be a practical demonstration of two computer-based systems, Quo and a custom-built system, both of which integrate algorithmic composition for instruments with real-time (live) electronics. Both systems share a tightly integrated graphical, notation-like representation of sound data, the second using the SVG file format. While Quo targets the development process of performances with instruments and live electronics, encouraging a collaborative practice between performer and composer, the custom-built system is aimed more at an explorative workflow within the compositional process of a composer/author: it allows graphical editing, transformation and selective simultaneous playback of heterogeneous types of DSP objects within the same document in real time, for the development of artistic works in a wide range of performative areas.

          16:30-17:30h

          • Workshop 6 – Galerie 1 (4.K13, Ebene 4)

            Symbol-Body: Graphic Notation in Vocal Music Theatre, Workshop & World Premiere
            Authors: Miika Hyytiäinen, Lisa Fornhammar, Annika Fuhrmann

            In our workshop, we present Soune, a graphic notation tailored for the demands of vocal music and transdisciplinary music theatre. Our artistic and academic team discusses central compositions in vocal music of the last century and how notation is used for text, registers, breathing, and timbre. In Soune, a new addition to the notation family, these ideas are developed further to build bridges between visual, auditive, and textual information. In the future, it could allow machine learning to be utilised in a multifaceted manner to create new music theatre. In addition, Soune has interesting potential applications in pedagogy and musical analysis. During the workshop, the audience can experiment with Soune as composers and performers and even experience it in full artistic context: the world premiere of Symbol-Body, an embodied vocal multimedia performance. Our artistic input and the audience's testing will lead to a discussion about the future of notation in vocal music.

          18:30h

          • Concert 3 – Aktionsraum (5.K06, Ebene 5) & Konzertsaal 1 (7.K05, Ebene 7)

            Programme

            - Location: Aktionsraum 5.K06 -

            Mauricio A. Meza Ruiz – Tres avatares del silencio: Antígona en su lógica de ensueño (2024)
            performance in installation for projection, piano automata and multichannel sound system

            - Location: Konzertsaal 1 -

            Louise Devenish and Stuart James – Liquidities (2023)
            for vibraphone, slinky and electronics
            Aya Masui, vibraphone

            Se-Lien Chuang and Andreas Weixler – Die Schönheit der Vergänglichkeit (2024)
            interactive audiovisual comprovisation for C2S2 - Chinese Calligraphic Scenic Score
            Kornyushin Nikita, bass clarinet
            Se-Lien Chuang, interactive visuals, vocal & C2S2
            Andreas Weixler, e-guitar & audio realtime processing

            Vijay Thillaimuthu – VectorCloud (2024)
            a live interactive audio-visual score in quadraphonic sound
            Vijay Thillaimuthu, modular synthesiser, electronics & laptop

            Orm Finnendahl – Letzte Worte II (2020)
            for two flutes and live electronics
            duet 2.26 – Hèctor Rodríguez Palacios and Clara Giner Franco, flute
            Orm Finnendahl, sound direction/electronics

            Nicola Privato and Giacomo Lepri – Magnetologues (2023)
            for two stacco, neural synthesis and ambisonics
            Nicola Privato, Stacco & live electronics
            Giacomo Lepri, Stacco & live electronics

            Sound engineering
            Leandro Gianini, sound engineer
            Milena Winter, sound engineer

          20:00h

          Apéro riche – Konzertfoyer (7.K500, Ebene 7)