Newsflash: this was first written in 2000 and based on a Java applet. In 2015, I reimplemented it as a JavaScript web application using the Web Audio API.

 

Introduction

 

PREAMBLE

 

AudioServe is a Java applet that allows its users to evolve populations of interesting and novel sounds in a distributed, collaborative fashion. It represents the fusion and exploration of several major themes of evolutionary and adaptive systems research, and of other areas of interest to the author, delivered in a palatable, functional form. In this introduction I give a brief discussion of what these themes are and how they relate to AudioServe. Most of these topics are revisited later in the text.

 

Motivation

 

I am interested in the more technical and abstract side of modern music production. One of the names given to an AudioServe sound submitted by a user to the central server is rather apt:

Steve Reich with earth hum

The system generates complex and intricately modulated sounds. They vary from gentle drifting soundscapes to discomforting, aggressive sounds verging on white noise. The sounds are not explicitly melodic but some of them do have apparent melodies. Such sounds may be of use as part of an electronic music composition, as sound effects for animations or as soundtracks for abstract films.

 

I also have an interest in Evolutionary and Adaptive Systems research and this project has allowed me to investigate important themes of such research. It implements evolutionary and adaptive systems techniques (stated below) with an explicit awareness of what modern music making entails as well as what kind of sounds might appeal to the open ear. The user feedback suggests that people found the program easy to use (once they had read the instructions) and useful.

 

I take an engineering perspective and like to see working systems rather than unsubstantiated theories. Therefore the emphasis of this writing is on the nature of the system, the work that inspired it, and how it affects those who use it.

 

I do intend to continue developing these ideas so as to build up a series of pieces of software.

 

evolutionary and adaptive systems

 

AudioServe generates sounds using complex modular circuits. The design of such circuits is a non-trivial problem that lends itself to the genetic algorithm class of search methods. This program uses heuristics commonly employed in evolutionary and adaptive systems research to generate novel circuit designs. These heuristics are discussed below.

 

>>application of a-life concepts to engineering problems

 

Genetically encoding a problem to make it evolvable, such that a complex solution space can be explored using stochastic/reinforcement learning heuristics inspired by nature.

 

>>evolution by gradual alteration of genotype

 

The virtual circuits that are used to make the sounds are encoded in a genome: a blueprint that can be translated into a circuit. The genome is manipulated through mutation so as to produce gradual changes in the sound made by the circuit it encodes. The genetic encoding scheme has been designed to be robust in the face of mutation and to allow this gradual change. This allows the evolving circuits to travel around a smooth fitness landscape as opposed to a Manhattan-skyline-shaped one.
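
The exact encoding scheme is described in the implementation chapter. Purely as a minimal sketch of the principle (the gene layout, rates and step sizes here are hypothetical, not AudioServe's actual ones), a genome of bounded numeric genes, each perturbed slightly and with low probability, gives exactly this kind of gradual change:

    import java.util.Random;

    // Hypothetical sketch: a genome of bounded numeric genes, mutated
    // gently so that a mutant's circuit (and hence its sound) stays
    // close to its parent's, keeping the fitness landscape smooth.
    public class Genome {
        private static final Random RNG = new Random();
        private final double[] genes;             // each gene lies in [0, 1]

        public Genome(double[] genes) { this.genes = genes.clone(); }

        // Perturb each gene with low probability by a small amount,
        // clamping to the valid range: small steps, gradual change.
        public Genome mutate(double rate, double step) {
            double[] child = genes.clone();
            for (int i = 0; i < child.length; i++) {
                if (RNG.nextDouble() < rate) {
                    child[i] += (RNG.nextDouble() * 2.0 - 1.0) * step;
                    child[i] = Math.min(1.0, Math.max(0.0, child[i]));
                }
            }
            return new Genome(child);
        }

        public double[] genes() { return genes.clone(); }
    }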

 

>>distributed, collaborative evolution

 

Each user of the applet evolves their own local population of sounds. Their computer acts as a node in a distributed network of populations. This means the system can evolve many strains of circuit in parallel. The distributed populations communicate via a web server. The web server receives user-selected sounds from the nodes and redistributes them to other nodes at random. Hence users can pick up strains of sounds from other users and introduce them to their local populations. This makes the program a lot more exciting to use than a non-collaborative version. It also gives the evolutionary search a hyperplane-sampling as opposed to a hill-climbing feel (see inspiration section).
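
The actual client-server protocol is not reproduced here; as an illustration only (the class and method names are invented), the server-side exchange can be as simple as a shared pool that accepts submissions and hands back random picks:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Illustration only (invented names): a server-side pool of submitted
    // genomes. Nodes upload user-selected genomes and receive a random
    // genome in return, so strains migrate between local populations.
    public class GenePool {
        private final List<String> submissions = new ArrayList<>();
        private final Random rng = new Random();

        public synchronized void submit(String serialisedGenome) {
            submissions.add(serialisedGenome);
        }

        public synchronized String fetchRandom() {
            if (submissions.isEmpty()) return null;
            return submissions.get(rng.nextInt(submissions.size()));
        }
    }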

 

>>user judgement based selection criteria

 

The sounds are chosen for further evolution based on the preferences of the user. The user auditions each generation of the population and chooses the fittest sound for mutation. The next generation is formed entirely from mutants of this chosen sound. This is akin to reinforcement learning in that the system is not told exactly which features of a circuit are good or bad, merely that the overall phenotype of the circuit is good or bad. The user can also choose to submit especially fit genomes to the server so that they are available to the other nodes.
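
In outline the selection step is very simple; a hypothetical sketch, reusing the Genome class from the sketch above:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch, reusing the Genome class above: the sound the
    // user picks becomes the sole parent, and the next generation is
    // formed entirely from its mutants.
    public class Breeder {
        public static List<Genome> nextGeneration(Genome chosen, int size,
                                                  double rate, double step) {
            List<Genome> next = new ArrayList<>();
            while (next.size() < size) {
                next.add(chosen.mutate(rate, step));
            }
            return next;
        }
    }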

 

>>developmental process

 

The genome is translated into a variable-architecture circuit via a developmental process. Part of the motivation for including this developmental process was to introduce extra complexity into the system so as to induce causal spread. As discussed in [1], embracing causal spread in evolutionary systems maximises their creative potential. Causal spread is discussed further in the inspiration section.
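
The actual developmental scheme is detailed in the implementation chapter; the following is an illustration of the idea only (the representation is invented), showing how reading genes in sequence can grow a circuit whose architecture, and even size, depends on the genome:

    import java.util.ArrayList;
    import java.util.List;

    // Illustration only, not AudioServe's actual scheme: development reads
    // genes in order and grows a circuit of genome-dependent size, making
    // the genotype-to-phenotype mapping indirect.
    class Circuit {
        final List<Double> unitTypes = new ArrayList<>();   // what each unit is
        final List<int[]> connections = new ArrayList<>();  // [from, to] pairs

        int addUnit(double typeGene) {
            unitTypes.add(typeGene);
            return unitTypes.size() - 1;
        }

        void connect(int from, int to) {
            connections.add(new int[] { from, to });
        }
    }

    class Developer {
        static Circuit develop(double[] genes) {            // genes in [0, 1]
            Circuit circuit = new Circuit();
            for (int i = 0; i + 1 < genes.length; i += 2) {
                int unit = circuit.addUnit(genes[i]);       // gene selects a unit type
                if (unit > 0) {                             // wire it to an earlier unit
                    int target = Math.min(unit - 1, (int) (genes[i + 1] * unit));
                    circuit.connect(unit, target);
                }
            }
            return circuit;
        }
    }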

 

audio synthesis

 

>>DSP

 

Ever since we left the dark ages and became able to generate sound using digital synthesisers, digital signal processing, or DSP, has been at the core of modern sound making. In the context of audio it refers to the representation of sound as a digital signal that can be manipulated using algorithms. In the AudioServe system, modular DSP circuits are evolved that generate and manipulate complex waveforms in the digital realm.
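
For instance, a digital signal is just an array of samples, and a DSP algorithm is anything that computes or transforms such arrays; a minimal, self-contained sketch (a 440 Hz tone at an assumed 44100 Hz sample rate):

    // A digital audio signal is just an array of samples, and DSP means
    // running algorithms over such arrays: here we synthesise one second
    // of a 440 Hz sine tone, then apply a trivial gain algorithm.
    public class DspSketch {
        public static void main(String[] args) {
            int sampleRate = 44100;                 // samples per second
            double frequency = 440.0;               // pitch of the tone in Hz
            double[] signal = new double[sampleRate];
            for (int n = 0; n < signal.length; n++) {
                signal[n] = Math.sin(2.0 * Math.PI * frequency * n / sampleRate);
            }
            for (int n = 0; n < signal.length; n++) {
                signal[n] *= 0.5;                   // halve the amplitude
            }
            System.out.println("sample 1 after gain: " + signal[1]);
        }
    }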

 

>>modular FM/ AM synthesis

 

Frequency Modulation (FM) synthesis is a popular form of audio synthesis [2]. FM synthesis is based on principles of Fourier curve fitting, where many simple equations are combined to generate a complex graph [3]. To make sound, the simple equations are exchanged for audio oscillators that produce simple repetitive waveforms, and the combination of these waveforms is achieved by connecting the oscillators together. Frequency modulation is achieved by connecting the wave output of one oscillator to the frequency control of another; to get amplitude modulation, the wave output is connected to the amplitude control. In a standard FM set-up, such as the 1980s Yamaha DX7 synthesiser, the signals from several sine wave oscillators are combined and the resulting signal is eventually passed through a state variable filter. This filter makes it possible to colour the sound by removing and enhancing different frequencies. An example of how to use this model to make a sound is given by Phil Burk as he discusses the JSyn libraries I used to program this software [4]:

JSyn is based on the traditional model of unit generators which can be connected together to form complex sounds. For example, you could create a wind sound by connecting a white noise generator to a low pass filter that is modulated by a random contour generator.
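
To make the FM connection concrete, here is a single modulator-carrier pair wired up in Java with JSyn. Note that this sketch uses the current JSyn API, whose class names differ from the 1.x applet-era API used when AudioServe was written; an Add unit supplies the carrier's base frequency on top of the modulation signal.

    import com.jsyn.JSyn;
    import com.jsyn.Synthesizer;
    import com.jsyn.unitgen.Add;
    import com.jsyn.unitgen.LineOut;
    import com.jsyn.unitgen.SineOscillator;

    // One FM pair in modern JSyn (the API differs from the 1.x applet-era
    // version used by AudioServe): the modulator's wave output feeds the
    // frequency control of the carrier, as described above.
    public class FmPair {
        public static void main(String[] args) throws InterruptedException {
            Synthesizer synth = JSyn.createSynthesizer();
            SineOscillator modulator = new SineOscillator();
            SineOscillator carrier = new SineOscillator();
            Add carrierFreq = new Add();            // base frequency + modulation
            LineOut lineOut = new LineOut();
            synth.add(modulator);
            synth.add(carrier);
            synth.add(carrierFreq);
            synth.add(lineOut);

            modulator.frequency.set(110.0);         // modulation rate in Hz
            modulator.amplitude.set(200.0);         // modulation depth in Hz
            carrierFreq.inputB.set(440.0);          // carrier base pitch in Hz

            modulator.output.connect(carrierFreq.inputA);
            carrierFreq.output.connect(carrier.frequency);
            carrier.output.connect(0, lineOut.input, 0);   // left channel
            carrier.output.connect(0, lineOut.input, 1);   // right channel

            synth.start();
            lineOut.start();                        // pull audio through the graph
            Thread.sleep(3000);                     // sound for three seconds
            synth.stop();
        }
    }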

java programming

 

>>object-oriented programs are easier to debug and more modular

 

Java programming is best done with a strong awareness of the object-oriented (OO) paradigm. As I see it, an OO program consists of a set of objects that interact mainly through interfaces. In this sense, objects generally do not access data fields in other objects directly; rather, they do so via public methods. This leads to a sense of discrete objects and to code that is easier to debug, as each object is responsible for dealing with its own data.

 

OO programs are also more modular, since a new version of a module can be implemented so long as it provides the right interfaces to the objects that interact with it. In the context of an a-life type program, this means different sections, such as the genome encoding or the developmental process, can be swapped out for variants. This makes such programs extensible.
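
As an illustration (the names here are invented, not AudioServe's actual classes), hiding an evolvable component behind an interface is what makes the swap possible:

    // Invented names, not AudioServe's actual classes: callers depend only
    // on the interface, so a different genome encoding can be swapped in
    // without touching the rest of the program.
    interface GenomeEncoding {
        GenomeEncoding mutate();
        double[] toCircuitParameters();  // consumed by the developmental process
    }

    class BitStringGenome implements GenomeEncoding {
        private final boolean[] bits;    // private data, reached via methods only

        BitStringGenome(boolean[] bits) { this.bits = bits.clone(); }

        public GenomeEncoding mutate() {
            boolean[] child = bits.clone();
            int i = (int) (Math.random() * child.length);
            child[i] = !child[i];        // flip a single bit
            return new BitStringGenome(child);
        }

        public double[] toCircuitParameters() {
            double[] params = new double[bits.length];
            for (int i = 0; i < bits.length; i++) {
                params[i] = bits[i] ? 1.0 : 0.0;
            }
            return params;
        }
    }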

 

>>internet aware software

 

Java is a network-aware language. In the context of this program, this has allowed me to implement a distributed, collaborative evolution process, as discussed above. The facility for Java programs to run embedded in web pages in the form of applets means I have been able to make this program accessible to the widest possible audience: the internet community.

 

creative tools

 

>>the rise of the software synthesiser

 

Stand-alone digital synthesisers rely on expensive, custom DSP chips to generate audio. These chips are optimised to run DSP algorithms. Increases in home PC CPU power have made it possible to generate and process audio using software implementations of DSP algorithms. This has led to a plethora of virtual synthesisers, e.g. Native Instruments' Reaktor series [5] and Steinberg's ReBirth [6]. These tools allow your PC to act as a musical instrument, generating audio in real time. People's predisposition towards vintage analogue synth sounds has biased such tools towards those kinds of sounds: ReBirth is marketed as a simulation of a synth and two drum machines, all of which have been out of production for 10+ years! The more flexible of these tools, such as Reaktor, do allow you to synthesise sounds in a variety of ways. The problem is that the greater the flexibility of a piece of software, the more complex it is to use; often setting up and using a piece of software takes so much effort that the user feels very uncreative by the end of it. AudioServe is easy to use and can generate a wide variety of unique sounds.

 

>>inspiration through randomness

 

I believe a great source of creativity is randomness. AudioServe attempts to fully embrace this principle by allowing the user to make quality judgements over a series of random events. All the novelty generated in the system comes from randomness; the user merely directs the creative motion by preferring one random outcome over another.

 

>>why raw audio and not melodies?

 

AudioServe does not aim to generate music. It aims to generate sound. Various attempts have been made to generate music algorithmically (e.g. see [7, 8, 9]). Some have successfully simulated how human players or composers of reasonable competence might play or compose music. AudioServe is different in that it aims to generate interesting sounds that can stand in their own right or be used as part of a composition or soundtrack. Some of the sounds it generates could loosely be described as melodic. Some of them are certainly percussive and rhythmic. It is up to the user to decide what criteria they use to judge the sounds. It is also up to them to use the sounds creatively.

 

>>group collaboration

 

As a solo producer of electronic music and a reasonably accomplished improvisational jazz drummer, I have good experience of different approaches to making music. Group collaboration is often the most immediately fulfilling way to make music, with the people in the group feeding off each other's ideas. AudioServe attempts to cash in on some of this effect by encouraging just such an exchange of ideas, in a pleasingly abstract sense. People exchange ideas about what makes a good sound, and these ideas can persist and be developed further by other people who download and continue to evolve that sound. In a certain sense it is an interactive art installation.

 

>>a different kind of software

 

As mentioned above, real-time audio synthesis on the home PC is a relatively new thing, and the software being developed often mimics an instrument that is already available as a separate unit. AudioServe, as a network-aware, internet-embedded system, is unlike other software sound synthesisers: it embraces facilities that are available precisely because the software runs on a networked PC rather than in a stand-alone synthesiser.

 

Structure and style of this thesis

 

Due to the wide range of topics covered in this thesis, issues raised in the text tend to be discussed there and then, while the reader is already focused on that topic. Where this is not possible, the reader is redirected to the appropriate section. This means that, aside from the user feedback, the final discussion section is mostly a summary of these issues.

 

The text is organised into four main chapters, including this one. The implementation chapter explains the details of the project. The inspiration chapter explores related work and connects it to this project. The discussion chapter draws together points raised in the other chapters, comments on the user feedback and includes concluding remarks. The other sections are the bibliography, the glossary and the appendices.