Auditory Virtual Observatory
TEAM / Auditory Virtual Observatory
SONG / Spectral Gymnopédie
TEAM MEMBERS / Adrián García Riber, Francisco Serradilla
About the TEAM
Adrián García Riber is a Ph.D. student in Music and Technology at Universidad Politécnica de Madrid under the supervision of Francisco Serradilla. Together they are exploring the possibilities of deep learning and neural networks for the sonification of astronomical data. Their project aims to provide sonification strategies, based on artificial intelligence and machine learning, that could be used in a proposal for an auditory virtual observatory. Adrián is head of the Audiovisual Department at the Escola Superior de Música de Catalunya. He holds an M.S. degree in music research from the Universidad Internacional de La Rioja and an S.B. in sound and image engineering from Universidad Politécnica de Madrid. Francisco holds a Ph.D. and is a Professor of Computer Science at Universidad Politécnica de Madrid. He has been researching intelligent transportation systems and artificial intelligence since 1987.
About the SONG
Spectral Gymnopédie is an autonomous musical composition generated by converting the light spectra of real celestial objects into four-note chords. With minimal block editing aimed at repeating its musical leitmotif, the piece is driven by the stellar spectra of the MILES library from the Spanish Virtual Observatory, which were obtained with the 2.5 m Isaac Newton Telescope at the Roque de los Muchachos Observatory.
An autonomous composition system developed by the team, based on deep learning and neural networks, makes it possible to generate sequences of chords from the stellar spectra and to present them in a musical form. Two parallel networks are trained, respectively, on the MILES stellar library and on the MAESTRO v3.0.0 MIDI dataset. The final sequence is obtained with the activation input of Erik Satie's Gymnopédie No. 1.
About the HUMAN-AI PROCESS
Our autonomous composition system has two parallel blocks whose outputs are compared at the end to generate an autonomous, spectra-driven score.
The first block analyzes the stellar spectra one by one using a variational autoencoder and converts the MILES library into a (non-sounding) score of “stellar chords”. A variational autoencoder is a deep learning model, used here to reduce the dimensionality of each spectrum from 4367 points to 4 values, which are then converted to note frequencies to generate each “stellar chord”.
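As an illustration of this block, the following minimal sketch shows how a variational autoencoder of this kind could be built in Keras. The layer sizes, training setup, latent squashing, and frequency range are assumptions made for the example; the exact architecture is described in the article cited below.

# Minimal illustrative sketch (not the published architecture): a variational
# autoencoder compressing a 4367-point stellar spectrum into 4 latent values,
# which are then mapped to 4 note frequencies.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

SPECTRUM_LEN = 4367  # points per MILES spectrum
LATENT_DIM = 4       # one latent value per note of the chord

class Sampling(layers.Layer):
    """Reparameterization trick; also registers the KL regularization loss."""
    def call(self, inputs):
        mean, log_var = inputs
        kl = -0.5 * tf.reduce_mean(1.0 + log_var - tf.square(mean) - tf.exp(log_var))
        self.add_loss(kl)
        eps = tf.random.normal(tf.shape(mean))
        return mean + tf.exp(0.5 * log_var) * eps

inputs = keras.Input(shape=(SPECTRUM_LEN,))
h = layers.Dense(512, activation="relu")(inputs)
h = layers.Dense(64, activation="relu")(h)
z_mean = layers.Dense(LATENT_DIM)(h)
z_log_var = layers.Dense(LATENT_DIM)(h)
z = Sampling()([z_mean, z_log_var])
h = layers.Dense(64, activation="relu")(z)
h = layers.Dense(512, activation="relu")(h)
outputs = layers.Dense(SPECTRUM_LEN)(h)

vae = keras.Model(inputs, outputs)
vae.compile(optimizer="adam", loss="mse")  # reconstruction loss; KL added above
# vae.fit(spectra, spectra, epochs=50)     # spectra: array of shape (n_stars, 4367)

encoder = keras.Model(inputs, z_mean)      # spectrum -> 4 latent values

def latent_to_chord(latent, low=110.0, high=880.0):
    """Map 4 latent values to 4 note frequencies (Hz).
    Squashing to (0, 1) and the log-pitch range are assumptions."""
    x = 1.0 / (1.0 + np.exp(-np.asarray(latent, dtype=float)))
    return low * (high / low) ** x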
In parallel, we train a neural network (an LSTM with attention) on the MAESTRO MIDI dataset to generate another (non-sounding) score of “musical chords”, based on the activation input of a user-selected MIDI score (in this case, Satie's Gymnopédie No. 1).
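Again as an illustration only, a minimal Keras sketch of this second block follows. The chord vocabulary size, context window, padding token, and sampling temperature are assumptions, and the MIDI-to-token encoding is omitted; only the LSTM-with-attention idea comes from the system description.

# Minimal illustrative sketch of the second block: an LSTM with attention
# trained on chord sequences tokenized from MAESTRO MIDI files, then sampled
# from a seed score (the MIDI-to-token encoding itself is omitted here).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB = 512    # assumed size of the chord vocabulary
SEQ_LEN = 32   # assumed context window, in chords
PAD = 0        # assumed padding/rest token

inputs = keras.Input(shape=(SEQ_LEN,))
x = layers.Embedding(VOCAB, 64)(inputs)
x = layers.LSTM(128, return_sequences=True)(x)
x = layers.Attention()([x, x])             # self-attention over LSTM outputs
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(VOCAB, activation="softmax")(x)  # next-chord distribution

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(windows, next_chords)  # training pairs tokenized from MAESTRO

def generate(seed_ids, n_chords=64, temperature=1.0):
    """Continue a chord sequence from a seed score, e.g. Gymnopédie No. 1
    encoded as vocabulary indices."""
    seq = list(seed_ids)
    for _ in range(n_chords):
        window = seq[-SEQ_LEN:]
        window = [PAD] * (SEQ_LEN - len(window)) + window
        probs = model.predict(np.array([window]), verbose=0)[0]
        logits = np.log(probs + 1e-9) / temperature
        p = np.exp(logits) / np.exp(logits).sum()
        seq.append(int(np.random.choice(VOCAB, p=p)))
    return seq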
Finally, an algorithm based on pitch-class set theory (a reduction method used in musical analysis) compares each “musical chord” with the “stellar chords”, looking for matches. The final sounding score is built from the matching “stellar chords”, whose durations depend on the median light flux of their respective spectra. All the details of the system are published in our article “AI-rmonies of the Spheres” (Springer, 2023).
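To make the matching step concrete, here is a small self-contained Python sketch. The prime-form routine follows the standard textbook (Rahn-style) procedure of pitch-class set theory; the frequency-to-pitch-class conversion (assuming A440 equal temperament) and the linear flux-to-duration mapping are illustrative assumptions rather than the published choices.

# Minimal illustrative sketch of the matching step: chords are compared by
# prime form; matched stellar chords get a duration scaled from the median
# flux of their spectrum (the scaling here is an assumed linear mapping).
import numpy as np

def normal_order(pcs):
    """Rotation with the smallest outer interval, ties packed to the left."""
    pcs = sorted(set(pcs))
    best_key, best_rot = None, None
    for i in range(len(pcs)):
        rot = pcs[i:] + pcs[:i]
        key = tuple((p - rot[0]) % 12 for p in reversed(rot[1:]))
        if best_key is None or key < best_key:
            best_key, best_rot = key, rot
    return best_rot

def prime_form(chord):
    """Prime form: the set and its inversion, each in normal order and
    transposed to 0; keep the one more packed to the left."""
    pcs = {p % 12 for p in chord}
    candidates = []
    for s in (pcs, {(-p) % 12 for p in pcs}):
        no = normal_order(s)
        candidates.append(tuple((p - no[0]) % 12 for p in no))
    return min(candidates, key=lambda t: tuple(reversed(t[1:])))

def pitch_classes(freqs, ref=440.0):
    """Note frequencies (Hz) -> pitch classes 0-11; assumes A440 tuning."""
    midi = 69 + 12 * np.log2(np.asarray(freqs, dtype=float) / ref)
    return [int(round(m)) % 12 for m in midi]

def match_score(musical_chords, stellar_chords, spectra, dur=(0.5, 4.0)):
    """Pair each musical chord with a stellar chord of the same prime form;
    duration scales linearly with the median flux of the matched spectrum."""
    fluxes = np.array([np.median(s) for s in spectra])
    lo, hi = float(fluxes.min()), float(fluxes.max())
    span = (hi - lo) or 1.0
    score = []
    for mc in musical_chords:
        target = prime_form(mc)
        for sc, flux in zip(stellar_chords, fluxes):
            if prime_form(pitch_classes(sc)) == target:
                score.append((sc, dur[0] + (dur[1] - dur[0]) * (flux - lo) / span))
                break
    return score

Working with prime forms makes the comparison transposition- and inversion-invariant, so a “stellar chord” can match a “musical chord” even when the two appear in different keys or voicings.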
Once the autonomous composition is generated, the human process begins. The instruments, sounds, and textures are carefully chosen by the team to soften the harshest harmonies and to allow performers to play the piece live if desired. Minimal editing is done before rendering to give the generated material the sense of a finished piece.