Newsflash - this was first written in 2000 and based on a Java applet. In 2015, I reimplemented it as a JavaScript web application using the Web Audio API.
Run the new Web Audio API version!







As mentioned in the introduction, this project draws on a wide variety of influences. Fundamentally based on Dawkins' Biomorphs, it also draws ideas from Whitley's Island Model GA [16], Fourier curve fitting and Husbands et al.'s GasNets [13]. In this respect the networked population system is inspired by the Island Model GA; the genetic encoding is inspired by the GasNets; the synthesis engine is inspired by FM synthesis.


The blind watchmaker


Dawkins originally coined the phrase The Blind Watchmaker [17] in response to Paley's famous watchmaker argument [18]. Paley states:

"In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there; I might possibly answer, that, for anything I knew to the contrary, it had lain there forever: nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer I had before given, that for anything I knew, the watch might have always been there."

In chapter 3 of his book, Dawkins explains how small changes can accumulate to generate complex forms:

We have seen that living things are too improbable and too beautifully 'designed' to have come into existence by chance. How, then, did they come into existence? The answer, Darwin's answer, is by gradual, step- by-step transformations from simple beginnings, from primordial entities sufficiently simple to have come into existence by chance. Each successive change in the gradual evolutionary process was simple enough, relative to its predecessor, to have arisen by chance. But the whole sequence of cumulative steps constitutes anything but a chance process, when you consider the complexity of the final end-product relative to the original starting point. The cumulative process is directed by nonrandom survival.

This argument is of course fundamental to the application of evolutionary theory to engineering problems. To further support the argument, Dawkins implemented a computer program called Biomorph (an online Java version is available [19]). In his second Sonomorphs paper [14], Nelson describes the Biomorph system:

Each biomorph is a small graphic image drawn by a recursive subdivision algorithm that is driven by a numeric vector. Dawkins likens this vector to a genetic code. In each new generation, biomorphs are bred by causing small random mutations in the vector. A series of children are born from which he selects the parent of the next generation based on subjective visual criteria.

Using Biomorph it is possible to evolve complex naturalistic forms in just a few generations. Fig 8 shows some of these forms.
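The cumulative selection loop Nelson describes above can be sketched in a few lines. This is a minimal illustration, not Dawkins' actual program: the genome is a vector of nine numeric genes as in Biomorph, but the scoring function is a stand-in for the human eye, since in the real program a person picks the child they like best from each litter.

```javascript
// Biomorph-style cumulative selection: mutate, breed a litter, pick one.
function mutate(genome) {
  const child = genome.slice();
  const locus = Math.floor(Math.random() * child.length);
  child[locus] += Math.random() < 0.5 ? -1 : 1; // small step, as in Biomorph
  return child;
}

function breed(parent, litterSize) {
  const children = [];
  for (let i = 0; i < litterSize; i++) children.push(mutate(parent));
  return children;
}

// In Biomorph a human makes this choice; here a placeholder score stands in.
function pick(children, score) {
  return children.reduce((best, c) => (score(c) > score(best) ? c : best));
}

let parent = [0, 0, 0, 0, 0, 0, 0, 0, 0];
const score = g => -Math.abs(20 - g.reduce((a, b) => a + b, 0));
for (let gen = 0; gen < 100; gen++) {
  parent = pick(breed(parent, 8), score);
}
```

Each generation changes the genome only slightly, yet after a hundred rounds of selection the population has travelled a long way from its starting point, which is the point of Dawkins' argument.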


human preference based fitness functions


>>division of labour


Dawkins' Biomorph program was originally implemented as an illustration of the power of incremental evolution. The AudioServe program, along with its conceptual stablemates under discussion here, focuses more on the procedure used in Biomorph: human selection among artificially generated random variation. I have previously argued that artificial systems are highly effective at generating novelty and humans are highly effective at making quality judgements on this novelty [20]. Indeed, in a system that allows witnesses to evolve a facial composite of a criminal, Caldwell and Johnston [21] talk of an effective division of labour between the human who judges the likeness of the face and the computer that gradually alters the face. It seems that given a reasonably small selection of gradually changing phenotypes, human selection can be used effectively to evolve a fit population or individual.


>>evolving visual forms


Several researchers have implemented human selection systems to evolve visual forms. A good summary is given in [22]. In this text I provide a few examples and use them as a platform to explain the features of AudioServe.


Interactive evolution of line drawings:

In [15] Baker and Seltzer coin the phrase interactive evolution. They built an interesting system that generates line drawings. The genome is a highly representational one: it encodes a set of brush strokes. Parameters defining a brush stroke include symmetry type, point connection type, stroke type and so on. Representational genomes which directly encode features of the phenotype have the advantage that crossover and mutation have a transparent effect on the phenotype. The user mutates and breeds images until they get something that satisfies them. It is obvious how crossover in such a system could yield interesting results: the phenotypic effect of such an operation would be to combine the strokes from two images. The genome in AudioServe is representational in the sense that each gene encodes the details of a module. The difference is that AudioServe's developmental process adds attenuation to the parameters in the genome, something that does not occur in Baker and Seltzer's system.
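The transparency of crossover on a representational genome can be seen in a small sketch. The stroke fields below are illustrative, not Baker and Seltzer's actual encoding; the point is that because genes map one-to-one onto strokes, a child image visibly combines strokes from both parents.

```javascript
// One-point crossover on a stroke-list genome: the child takes the first
// `cut` strokes from parent A and the remainder from parent B.
function crossover(parentA, parentB, cut) {
  return parentA.slice(0, cut).concat(parentB.slice(cut));
}

// Hypothetical stroke genes, loosely after the parameters listed above
// (symmetry type, stroke type, and so on).
const a = [
  { stroke: "curve", symmetry: "radial", weight: 2 },
  { stroke: "line", symmetry: "none", weight: 1 },
];
const b = [
  { stroke: "dot", symmetry: "bilateral", weight: 3 },
  { stroke: "zigzag", symmetry: "none", weight: 2 },
];

const child = crossover(a, b, 1); // first stroke from a, rest from b
```

The phenotypic effect is immediately readable off the genotype: the child drawing contains the radial curve from the first parent and the zigzag from the second.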


The system also had a seed mode where the user could provide a line drawing and this would be translated into a genome that produces that phenotype (it is not clear how this translation was achieved). Seeding the system with Brennan's average face (a simple face generated from statistical analysis of many real faces), the researchers were able to generate a wealth of interesting faces varying from simple line drawings to what look like detailed charcoal drawings. Good drawings could be saved, building up a browsable library of seed images. The provision of a library of ready-evolved phenotypes is implemented, after a fashion, in AudioServe. The difference is that AudioServe's library is made available to the user in the form of randomly selected immigrants rather than as a complete archive.


Baker and Seltzer discuss how traditional computer art software can become impossibly hard to use beyond a certain level of complexity:

Traditional tools use a compact, object-oriented drawing representation that makes it easy for a user to apply many operations to the drawing. Unlike a bit-mapped image, this high-level representation allows the user easy manipulation of individual drawing features, such as the ability to delete or modify individual lines, points, or other objects. However, creation of drawings in this format requires very good eye-hand coordination, and, for anything even slightly complex, a great deal of effort and tedium. Using these tools to create drawings beyond a certain level of complexity can be all but impossible.

In other words, it can take a prohibitively large amount of effort to create highly complex images using such software. In answer to this, they discuss why interactive evolutionary systems may well become a new tool in the computer-assisted artist's toolbox:

The work presented here is intended to demonstrate the potential for using interactive evolution to augment and enhance the power of traditional computer-aided drawing tools and to expand the repertoire of the computer-assisted artist.

This is also relevant to sound: as computer music tools allow the user to create increasingly complex sounds, the length of time required to create such sounds increases. The AudioServe system can generate complex sounds in just a few generations of evolution. The difficulty of crafting original, abstract sounds using other digital audio tools, compared to the ease with which this is done in AudioServe, was mentioned by one of the users (see the user feedback section).


Artificial Painter - evolving patterns using a variety of fitness functions:

Hautop et al [23] implemented a system called Artificial Painter that used human and automated fitness functions, the latter both fixed and evolvable, to evolve neural net activity patterns which were presented as works of visual art. They discuss the features of fitness landscapes in systems with human preference and automatic or fixed fitness functions. They state that the landscape is static with a fixed fitness function and dynamic with a human selection scheme, since a user may adjust their selection criteria based on the novelty that is presented to them or as a result of other influences. In AudioServe I found that it is possible to work either way: you can evolve sounds with a set idea of what you are after (e.g. a gently modulated bass sound) or you can just let the program guide your choice. The Artificial Painter genome is a bit string, obscure in that it encodes both the neural net sensory processor and the environment the net is placed in. Different nets and different environments generate different net activity patterns. This research is interesting as it brings up several issues that the AudioServe system could be used to explore further, such as evolvable and human-automatic hybrid fitness functions. Rather than ending with a comparison of the different selection schemes that initially seemed to be the focus of the work, the researchers conclude by drawing somewhat makeshift links between their system and long-term human creative processes.


Sims - evolving algorithms that generate patterns:

In the classic example of such systems, Sims [24] implemented an interactive evolution process to generate variations of algorithms known to produce colourful images. The user chooses their favourites from the offered selection of patterns. The genome encodes parameters that are normalised to fit the task they must perform at their locus. This kind of genome, a number string encoding normalisable parameters, is related to the one used in AudioServe. The AudioServe genome is described further in the implementation section.
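Locus-dependent normalisation of the kind mentioned above can be sketched simply. The assumption here, which the text does not specify, is that raw genes are integers in 0..255 and that the parameter ranges below (chosen to suggest synthesis parameters) are purely illustrative; the idea is that the same raw value decodes differently depending on what it controls at its locus.

```javascript
// Each locus owns a parameter range; decoding maps the raw gene into it.
const RANGES = [
  { name: "frequency", min: 20, max: 2000 }, // Hz (illustrative)
  { name: "amplitude", min: 0, max: 1 },
  { name: "modDepth", min: 0, max: 500 },
];

function decode(genome) {
  return genome.map((raw, locus) => {
    const { name, min, max } = RANGES[locus % RANGES.length];
    // Normalise the 0..255 gene into the range this locus requires.
    return { name, value: min + (raw / 255) * (max - min) };
  });
}

const params = decode([255, 128, 0]);
```

Mutation operators can then work uniformly on raw integers while the decoding step keeps every parameter within a sensible range for its role in the phenotype.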


>>evolving sonic forms


There have been fewer attempts to implement systems that allow the user to evolve sonic forms. In his Sonomorphs paper [25] Nelson suggests why this may be so:

Music presents a difficulty that we do not find in Dawkins' model. With The Blind Watchmaker program we can compare and evaluate an entire generation with a single glance. If all of the sonomorphs in a population were presented at once the cacophony would make it impossible to distinguish let alone choose the stronger individuals. Presenting sonomorphs sequentially taxes the memory even in populations as small as the one described above.

Sonomorphs - evolving musical scores:

In [25] Nelson reported his Sonomorphs program, which employed a human selection based genetic algorithm to evolve musical scores. The user selects their favourites from a small population (9, 16 or 25 members) of scores for breeding, based on visual or auditory preferences. The members of the population are represented as piano rolls where the notes are in a grid with pitch on the x-axis and time on the y-axis. The next generation is made from the offspring of the selected parents. The genome is a bit string that directly describes the on-off state of positions in the piano roll grid. Nelson evolved interesting melodies and rhythmic patterns in this way. In a more recent report [14], Nelson describes the use of various more complex genetic representations of the scores, where the grid is generated from L-Systems, Cellular Automata, chaos algorithms and others. The sound output of all these systems is generated by MIDI instruments. The population model in this system is similar to the local part of the AudioServe model. As a result it loses out on some of the interesting features of AudioServe's model, mentioned in the causal spread section.
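Nelson's direct bit-string encoding is easy to sketch. The 4x4 grid size here is an assumption for brevity (his populations used larger scores); each bit is simply the on/off state of one cell of the piano roll, so mutation flips individual notes.

```javascript
// Reshape a flat bit string into a piano-roll grid, one row per time step.
function toGrid(bits, width) {
  const grid = [];
  for (let i = 0; i < bits.length; i += width) {
    grid.push(bits.slice(i, i + width));
  }
  return grid;
}

// Mutation on a direct encoding: flip one note on or off.
function flipBit(bits, i) {
  const mutant = bits.slice();
  mutant[i] = mutant[i] ? 0 : 1;
  return mutant;
}

const genome = [1, 0, 0, 0,
                0, 1, 0, 0,
                0, 0, 1, 0,
                0, 0, 0, 1]; // a simple stepwise scale pattern
const roll = toGrid(genome, 4);
```

Because the mapping is direct, every mutation has an audible, local effect on the score, which is what makes visual or auditory selection over such genomes tractable.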


The Sound Gallery - evolving DSP circuits:

In a system related to AudioServe both in spirit and in the kind of sounds it generated, Woolf evolved reconfigurable hardware DSP circuits that were used to filter a sound source [29]. The system was implemented in the form of an interactive art exhibit with four corner-placed speakers, each emitting a filtered version of the sound source. Four circuits were evolved in parallel, in real time, such that the filtering altered according to the activities of the visitors to the exhibit. The circuits serving the speakers that received the most attention were allowed to proliferate in the population. The GA employed an island population model, after [16]. This allowed separate populations of circuits to evolve for each speaker on islands, with some transfer of data from island to island. This kind of scheme is related to the one used in AudioServe, where each island or node is a client program. The difference is that the transfer of data from node to node is mediated by the server program. This difference allows the monitoring and storage of the data being transferred. The genetic encoding in The Sound Gallery was very simple, dictating the transfer functions for each cell in the circuit.
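The island migration step used in such models can be sketched as follows. This is a generic island-model migration after Whitley [16], not Woolf's actual code: genomes and fitness values are placeholders, and whereas AudioServe routes transfers through the server, here migration is direct island-to-island for brevity.

```javascript
// Copy the fittest individual from one island over a random individual
// on another island; the islands otherwise evolve independently.
function migrate(islands, fromIdx, toIdx) {
  const from = islands[fromIdx];
  const to = islands[toIdx];
  const migrant = from.reduce((a, b) => (a.fitness > b.fitness ? a : b));
  const slot = Math.floor(Math.random() * to.length);
  to[slot] = { ...migrant }; // the migrant displaces a resident
  return islands;
}

const islands = [
  [{ genome: [1, 2], fitness: 0.9 }, { genome: [3, 4], fitness: 0.2 }],
  [{ genome: [5, 6], fitness: 0.1 }, { genome: [7, 8], fitness: 0.3 }],
];
migrate(islands, 0, 1);
```

Routing the migrant through a server instead of calling `migrate` directly, as AudioServe does, costs nothing in the model but gains a central point where every transferred genome can be logged and stored.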


Audiomorph - evolving wiring systems for modular FM/AM circuits:

Audiomorph [12] is a Java applet that allows the user to evolve wiring systems for fixed architecture modular FM/AM circuits made from sine wave oscillators. The population scheme in this program is similar to the local part of the population scheme in AudioServe, in that a mutant is selected from one generation to seed the entire next generation. The genome encodes a set of wires between different modules in the circuit. Modulation patterns are built up that can generate complex waveforms. This program paved the way for the AudioServe system by proving the capabilities of the JSyn libraries to deal with complex circuits and by demonstrating the variety of sounds that could be generated by modular FM/AM circuits.
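A wiring genome of this kind can be sketched as below. The gene layout is an assumption (the text does not specify Audiomorph's actual encoding): each gene is taken to be a [source, target, type] triple, and decoding yields a modulation routing saying which oscillator modulates which, and whether it drives frequency (FM) or amplitude (AM).

```javascript
// Fixed architecture: a set number of sine oscillators to wire together.
const NUM_OSCILLATORS = 4;

// Decode integer triples into modulation wires; indices wrap so any
// integer gene yields a valid oscillator.
function decodeWires(genome) {
  return genome.map(([src, tgt, type]) => ({
    source: src % NUM_OSCILLATORS,
    target: tgt % NUM_OSCILLATORS,
    mode: type % 2 === 0 ? "FM" : "AM",
  }));
}

const wires = decodeWires([
  [0, 1, 0], // osc 0 frequency-modulates osc 1
  [1, 2, 1], // osc 1 amplitude-modulates osc 2
  [5, 3, 0], // out-of-range source wraps to osc 1
]);
```

In the applet itself the decoded wires would be realised as JSyn unit generator connections; mutating a single gene rewires one modulation path, which is how new modulation patterns are built up.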


Causal spread and hyperplane sampling


To put the other features of the system in context, namely the genetic encoding, the developmental process and the distributed population model, I will introduce the concept of causal spread, explored in previous work by the author [1], as well as discussing the features that make a good GA. Causal spread is effectively summarised by Wheeler and Clark in [26]:

We shall use the term causal spread to describe any situation in which an outcome is generated by a network of multiple, co-contributing causal factors that extends across readily identifiable systemic boundaries.

The conclusion in [1] was that embracing causal spread enhances the potential of adaptive systems through introduction of appropriate complexity into a system.


>>genetic encoding


Research by Gruau et al into neural net controllers [27] suggests that fixed architecture networks are less capable than variable architecture networks, that they take more human intervention, and that they tend to use more resources to solve problems (a variable size network can shrink to an optimal size, whereas the researcher must decide through trial and error what size to make a fixed architecture network). The genetic encoding scheme in AudioServe is directly inspired by that used by Husbands et al in [13]. That genome consists of a set of genes representing parameters for nodes in a variable architecture dynamic recurrent neural network. The integer string genome has a variable length, allowing for different numbers of nodes, and each integer is normalised as appropriate for the parameter it encodes. AudioServe also has a variable length integer string genome, where genes represent modules in a circuit, though the variable length feature has not yet been exploited. Variable architecture networks seem to embrace causal spread by introducing complexity into a system that adds to its creative potential.


>>developmental processes


The research of Gruau et al [27] suggests that using a developmental process to mediate the genotype to phenotype mapping is a good way to implement variable architecture networks and thus to increase causal spread. AudioServe implements a variable architecture network and a developmental process that makes its genotype to phenotype mapping indirect. In the GasNets genome [13] the nodes are instantiated all at once and they then connect to each other according to their genetic parameters. AudioServe's modules are instantiated and form their connections sequentially, so one node is biased to have more connections to it than the others. As mentioned in the implementation chapter, this is the ideal situation when the phenotype is the output pattern of a single node. Previous work with a fixed circuit architecture succeeded in generating varied sounds, but it always seemed that the sounds were obviously made up from sine waves, the modules used in the circuit. Using a variable architecture for AudioServe's circuits is in part inspired by a desire to generate sounds whose waveforms deny their roots in the combination of simpler component waveforms.
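The connection bias produced by sequential development can be sketched as below. The connection rule is an assumption for illustration (each new module connects to one randomly chosen existing module; AudioServe's exact rule is not given here), but it shows the effect described above: because module 0 exists for every later choice, it tends to accumulate the most incoming connections, which suits a phenotype read from a single output node.

```javascript
// Instantiate modules one at a time; each new module connects back to
// one module that already exists, so earlier modules gather connections.
function develop(numModules, pickExisting) {
  const inDegree = new Array(numModules).fill(0);
  for (let m = 1; m < numModules; m++) {
    const target = pickExisting(m); // choose among modules 0..m-1
    inDegree[target]++;
  }
  return inDegree;
}

// With a uniform random choice, module 0 has the highest expected in-degree.
const degrees = develop(6, m => Math.floor(Math.random() * m));
```

Contrast this with the GasNets scheme, where all nodes exist before any wiring happens and no node is structurally privileged in this way.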


>>distributed evolution and hyperplane sampling


In his Genetic Algorithm Tutorial [28], Whitley cites Holland's argument that a good GA effectively searches a space by sampling hyperplanes of that space. Whitley sees a bad GA as a hill climber exploring a localised region rather than a hyperplane sampler exploring varied regions of the search space in parallel. He suggests that the features making a good GA include the population size and the method by which unfit genomes are replaced in the population. Introducing complexity to the system through these features could be a good way to induce causal spread. Whitley puts forward the Island Model GA, where evolution occurs on several islands with controlled levels of migration between them. This model promotes the parallel essence of hyperplane sampling. In AudioServe the client programs act as transient islands which evolve several fit individuals and then expire. The islands explore the search space of possible circuits in very varied ways: the searches are driven by the preferences of the users. Since the populations on the islands are transient, the population on the server is the one that really undergoes long term evolution. This leads to a search scheme that has a healthy hyperplane sampling feel; evidence of this is the rich variety of sounds stored on the server. The AudioServe system has not yet been running long enough to comment on the problem of convergence, as reported in [29]. It is hoped that the desire people have to be unique will maintain the flow of novelty into the system.

AudioServe does not yet implement a method for removing genomes from the persistent server population. Perhaps an effective way to do this would be to monitor the fates of genomes sent out from the server to the clients. Genomes that tend to get picked up by the user and evolved further would be labelled as fit and allowed to persist in the server population; genomes that get sent out and largely ignored could be removed. The tracking could even be done in a more implicit way: statistical analysis of the genomes on the server might reveal related strains of genome, i.e. genomes that have been sent out, evolved and sent back again. Such genomes would be given a high fitness score.
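The explicit tracking scheme proposed here could be sketched as follows. Everything in this sketch is speculative, matching the proposal rather than any running code: names, record structure and the cull threshold are all illustrative.

```javascript
// Server-side bookkeeping for the proposed culling scheme: count how
// often each genome is sent out and how often a client evolves it further.
class ServerPopulation {
  constructor(ignoreLimit) {
    this.records = new Map(); // genome id -> { sentOut, evolved }
    this.ignoreLimit = ignoreLimit;
  }
  sendOut(id) {
    const r = this.records.get(id) || { sentOut: 0, evolved: 0 };
    r.sentOut++;
    this.records.set(id, r);
  }
  reportEvolved(id) {
    const r = this.records.get(id);
    if (r) r.evolved++;
  }
  cull() {
    for (const [id, r] of this.records) {
      // Sent out repeatedly but never picked up: treat as unfit and remove.
      if (r.sentOut >= this.ignoreLimit && r.evolved === 0) {
        this.records.delete(id);
      }
    }
  }
}

const pop = new ServerPopulation(3);
["a", "a", "a", "b"].forEach(id => pop.sendOut(id));
pop.reportEvolved("b"); // a client evolved genome "b" further
pop.cull(); // genome "a" went out three times, was always ignored
```

The implicit variant suggested above would replace `reportEvolved` with statistical comparison of incoming genomes against past emigrants, crediting strains that come back evolved.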