Vicky Clarke: In Motion (LATENT SPACES Q&A)


We are thrilled to announce the launch of LATENT SPACES, a new spatial sound installation created by Manchester-based sound artist Vicky Clarke, aka SONAMB, as her official In Motion 2024 project.

In Motion is our transformative artist development programme, supporting composers and music makers at pivotal moments in their careers. Over 18 months, artists receive space, funding, and expert guidance to develop their practice, culminating in the creation of a new public project. Vicky’s LATENT SPACES is one such outcome — a bold interrogation of technology, perception, and sonic transformation.

LATENT SPACES explores our perceptions of sonic, computational and cerebral spaces, considering what happens to the materiality of sound in the latent space of a neural network model. It invites the listener to enter a computational model and experience a materiality in flux.

The project culminates with a spatial sound installation premiering in October 2025, and is prefaced by the release of the companion EP, AURA MACHINE, out now on LOL Editions.

Ahead of the installation launch, Vicky sat down with Sound and Music’s Fiona Allison (Creative Programme Leader) to discuss the inspirations, technologies and questions behind LATENT SPACES.

 

Vicky Clarke: In Motion (LATENT SPACES Q&A)


What first drew you to exploring sound in latent space?

Latent space is the point in a neural network where the model trains on the data and detects patterns. It is a space where sound has lost its context and become data. It has a duality: it is at once a statistical model devoid of emotion and an imaginary, alchemical space where connections are made that humans can’t perceive, all taking place across many dimensions. I’m attracted to this idea of statistical alchemy, that these computational spaces can be mythical and full of potential, where sound material is in flux.
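For readers new to the term, one common way a latent space arises in neural audio work is as the bottleneck of an autoencoder. The sketch below is illustrative only, not Vicky’s setup: the frame and latent sizes are assumed for the example, and it simply shows a chunk of sound reduced to a handful of coordinates, where “nearness” is statistical rather than aesthetic.

```python
# Minimal, illustrative sketch of a latent space (not the artist's model):
# an autoencoder squeezes a chunk of audio through a small bottleneck
# vector. Inside that vector the sound is just coordinates, material
# stripped of its context.
import torch
import torch.nn as nn

FRAME = 1024   # samples per audio chunk (assumed for illustration)
LATENT = 16    # dimensionality of the latent space (assumed)

encoder = nn.Sequential(nn.Linear(FRAME, 256), nn.ReLU(), nn.Linear(256, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, FRAME))

chunk = torch.randn(1, FRAME)   # stand-in for a real audio frame
z = encoder(chunk)              # the sound as a 16-dimensional latent point
reconstruction = decoder(z)     # back to (approximate) audio

# The "connections" a model makes are just geometry here: two sounds whose
# latent points sit close together are statistically similar, whether or
# not a human would group them by ear.
```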

I was drawn to working with machine learning systems through my arts practice, which explores themes of materiality and technological states of perception; my music making is also informed by musique concrète techniques, working with field recordings, found sounds, sculpture and abstraction. I was intrigued to see whether these systems could output new sonic materialities through the construction of self-built datasets, what these would sound like, and how I might compose with them. I’ve been exploring this for the past few years through residencies and commissions with NOVARS, University of Manchester and Cyborg Soloists, creating live AV performance work. I am now translating this work into a new spatial sound installation, LATENT SPACES, my project with Sound and Music for the In Motion composer scheme.

LATENT SPACES by Vicky Clarke (SONAMB)

“I’m attracted to this idea of statistical alchemy, that these computational spaces can be mythical and full of potential where sound material is in flux.”

 

SONAMB

Can you speak a bit about the neural synthesis model you’re using?

I’m using a model called PRiSM SampleRNN, an early neural synthesis machine learning model that works by training on datasets of audio recordings. RNN stands for Recurrent Neural Network: the model cycles through the files over and over to ‘learn’ from the data and make connections in latent space, in order to output a ‘prediction’, a new raw audio file. Each cycle is called an epoch. At epoch one the output samples are very noisy; as training goes on, the network ‘learns better’, so in later epochs you start to recognise materials from the input dataset, but reconfigured in an uncanny way. SampleRNN models have been around for a while now; I used this particular one in 2020. It was Dr Chris Melen of PRiSM (Practice and Research in Science and Music) at the Royal Northern College of Music who updated the code and taught me how to use it, and it’s all open source and online. I built post-industrial datasets consisting of field recordings of Manchester millscapes and the material eras of electricity, glass and metal.
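To make the training loop concrete, here is a minimal sketch in PyTorch of the next-sample-prediction idea behind SampleRNN-style models. It is not the PRiSM code (which, as Vicky notes, is open source and online); the class name, sizes and helper functions are invented for illustration.

```python
# Illustrative sketch (not PRiSM SampleRNN): an RNN trained to predict the
# next audio sample, the core idea behind neural synthesis models of this
# family. Audio is assumed quantised to 256 amplitude levels.
import torch
import torch.nn as nn

QUANT = 256  # number of amplitude levels each sample is quantised to

class TinySampleRNN(nn.Module):
    def __init__(self, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(QUANT, 64)   # sample value -> vector
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, QUANT)   # scores for the next sample

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.head(h), state

model = TinySampleRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One epoch = one full pass over the dataset. Early epochs output noise;
# later epochs echo the training material in reconfigured form.
def train_epoch(batches):  # batches: quantised audio, shape (B, T), dtype long
    for audio in batches:
        inputs, targets = audio[:, :-1], audio[:, 1:]
        logits, _ = model(inputs)
        loss = loss_fn(logits.reshape(-1, QUANT), targets.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

# Generation: feed each predicted sample back in, one at a time.
@torch.no_grad()
def generate(n_samples, temperature=1.0):
    x = torch.zeros(1, 1, dtype=torch.long)  # start from silence
    state, out = None, []
    for _ in range(n_samples):
        logits, state = model(x, state)
        probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
        x = torch.multinomial(probs, 1)
        out.append(x.item())
    return out  # quantised samples; decode back to a waveform to listen
```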

How, if at all, does your project speak to the rapid growth of AI use and its impact on the creative industries?

For me this piece represents a point in time in the development of sonic AI. The model itself, as an early example, was unsophisticated and had its own distinct sonic character, with digital artefacts and what I called ‘machine wind’, which was quite ghostly and had a presence. The output samples were very lofi (16 kHz) and noisy, so I wanted to include these characteristics within the AURA MACHINE piece, almost as though the model is an instrument in itself with its own timbre. The technology has moved on so fast in five years that the model and piece now feel almost retro compared to present generative models. At the time I was aware of the ethical implications of bias and concerns around authorship in training datasets, so I developed my own methodologies of building concrète datasets from my own recordings: I knew the origin of every sound, undertook all the labour myself, and the model needed only relatively small amounts of local computational training at RNCM. There are always fears and hopes around emergent technologies, so it’s important to be aware of these tensions. I try to demystify, make things explainable and expose hidden systems.

We are currently living in a generative AI hype cycle, with daily news of automation taking our jobs and our creativity, and ruining the planet. AI history has ‘summer and winter’ cycles, usually driven by funding and research potential, when the technology makes a leap forward in some way and captures the zeitgeist. Our present cycle feels much more precarious and accelerated, defined by energy-intensive deep learning models and data centres, and by big tech overriding artistic copyright, scraping people’s IP for large datasets. It is important that we critically engage with these technologies, challenge structures and protect our artistic IP. There are brilliant artists and initiatives addressing these concerns and lobbying for change, including Holly Herndon developing Public Diffusion using public domain images, the Slow AI initiatives, and rising opt-in awareness, for example the music industry galvanising around the ‘Make It Fair’ campaign in response to the British government’s potential changes to copyright law in relation to AI training.

Vicky Clarke (SONAMB) © Ben Williams

“For me this piece represents a point in time in the development of sonic AI.”

 

SONAMB

What’s been the most surprising thing about combining machine learning and musique concrète?

The most surprising thing has been the connections between the two disciplines. On the surface they seem very far apart, one being firmly rooted within the analogue realm of loops, tape machines and physicality, and the other being a digital domain of coding and black boxes.

One of the most interesting things is that they both deal with audio samples, working with sound as an abstract material. In musique concrète we have the concept of the ‘sound object’: you record a sound, loop it, take off the start and end, and it loses its context; it is just material. In the latent space of machine learning, sounds become data, and the connections made in training aren’t based on aesthetics. So I saw a link between the ‘sound object’ and the ‘sonic datapoint’: both are acousmatic, sounds of unknown origin; they are simply material forms.

The model did produce new sounds and auras. An example would be one that I call ‘glass static’: I can hear fragments of noise and glass from the dataset, but it is a new sound that I as a human couldn’t create, and it is quite beautiful. The machine did produce an aura.

Secondly, going back to the lofi quality of the SampleRNN model and its unique breath and timbre: during composition, when I listened back to the output material for the piece, I was surprised to hear that it sounded reminiscent of early tape machine recordings, with its artefacts, noises and hum; strangely, a full circle back to musique concrète. For my In Motion project I am now learning tape machine techniques in order to process the machine learning material onto tape, another transmutation! Some of these experiments will form part of the spatial sound installation this October.

“I can hear fragments of noise and glass from the dataset, but it is a new sound that I as a human couldn’t create and is quite beautiful. The machine did produce an aura.”

 

SONAMB

Can you speak a bit about the compositional structure of the EP, what do you hope a listener will experience?

The AURA MACHINE piece is a 20-minute live recording that takes the listener on a journey through training a neural network, conceived in three parts. It begins with the original field recordings and sound-sculpture material comprising the input dataset, moves on to a transmutational section representing latent space, and ends with the purely AI output material. I wanted the listener to hear the changing materiality, from hifi to lofi, following data compression.

On the EP, the AURA MACHINE piece is track 2, sandwiched between two examples of the original raw machine learning audio output from the model. I wanted to share the source material for the piece, to show its primitive and noisy architecture. Tracks 1 and 3 are therefore SampleRNN outputs from early and later epochs.

AURA MACHINE EP by SONAMB (LOL Editions, 2025)


LATENT SPACES is made possible by Sound and Music’s In Motion programme.

In Motion is supported by Arts Council England, Jerwood Foundation, PRS Foundation and Garrick Club Charitable Trust.

With exhibition support from FutureEverything as part of the Innovate UK-funded Cultural Accelerator programme.

Stay tuned for the premiere of LATENT SPACES in October 2025. In the meantime, you can listen to AURA MACHINE and SONAMB’s edition of The Sampler Mixtape.

Subscribe to SONAMB’s newsletter, and follow her studio developments on Instagram.


Listen to SONAMB’s AURA MACHINE EP

Visit SONAMB’s Composer Profile

Read SONAMB’s In Motion Q&A

Listen to SONAMB’s Mixtape

Learn more about In Motion

Follow Vicky on Instagram
