Newsflash: this was first written in 2000 and based on a Java applet. In 2015, I reimplemented it as a JavaScript web application using the Web Audio API.

 

User feedback and conclusion

 

PREAMBLE

 

In this final section, user feedback is presented and discussed, and then the conclusion draws together the themes, successes, failures and future directions of the project. Users were generally impressed by the capabilities of the system and provided constructive suggestions as to how it could be improved. The major requests were for a mating function and persistent local storage of sounds. The project achieved several of its initial goals, generated many more, and will certainly be developed further.

 

 

user feedback

 

SemiConductor, two people who make abstract, animation-based films, mentioned the ease with which they were able to generate complex sounds in AudioServe. They had previously come up against the problem mentioned in the discussion of Baker and Seltzer's line-drawing program, where it becomes difficult and time-consuming to build complex sounds using standard digital audio software. In general, users found they could gradually develop sounds to their liking, which is evidence that the fitness landscape is reasonably smooth. Most users did express a need for a way to store sounds locally once they had been evolved, and this will certainly be one of the first things to be implemented in the next version of the client software. Some users commented on the addictive qualities of the AudioServe system. A quote from one user [30] illustrates this:

…just started playing on in when i got home from werk - still half got me shirt on an hour later.

Different people used the software in different ways. Some spent a long time developing a single, highly fit sound; others developed several different sounds quickly, one after the other. This shows that the system is flexible. Most users requested a mating facility where two sounds could be combined, another point that will be addressed in future versions of the software. One way to mate circuits would be to make a child genome consisting of the genetic complement of the two parents added together; a sketch of this scheme is given below. The child would effectively contain two circuits, existing in different grids, kept separate from each other or possibly with limited cross modulation. The obvious problem with this is the growth of the genomes that would occur, which could lead to huge circuits making excessive demands on the CPU to generate their sound. A solution may lie in analysing where in the genome changes are being selected. (To recap on dormant genetic information: only one module is instantiated at any one grid reference, so if two modules fall on the same grid reference only the first one whose details are read from the genome is created, leaving unexpressed, or dormant, genes in the genome.) If it can be shown that a part of the genome is not dormant yet is being changed constantly, perhaps that part encodes modules that are of no use to the circuit, and such parts could be deleted. Combined with other ways to trim circuit size, this approach could work.
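As a rough illustration of the mating scheme proposed above, the sketch below builds a child genome by concatenating the genes of two parents, with the second parent's modules offset onto a separate grid so the two circuits remain distinct; the expression step also shows how genes landing on an occupied grid reference remain dormant. The class and method names (ModuleGene, Genome, mate, express) are hypothetical and are not part of the AudioServe code.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of the proposed mating scheme and of dormant genes.
class ModuleGene {
    int gridX, gridY;   // grid reference where the module is placed
    int moduleType;     // e.g. oscillator, filter, envelope
    ModuleGene(int x, int y, int type) { gridX = x; gridY = y; moduleType = type; }
}

class Genome {
    List<ModuleGene> genes = new ArrayList<>();

    // Mate by concatenation: the child holds both parents' genes, with parent
    // b's modules shifted onto a second grid so the two circuits stay separate.
    static Genome mate(Genome a, Genome b, int gridWidth) {
        Genome child = new Genome();
        for (ModuleGene g : a.genes) {
            child.genes.add(new ModuleGene(g.gridX, g.gridY, g.moduleType));
        }
        for (ModuleGene g : b.genes) {
            child.genes.add(new ModuleGene(g.gridX + gridWidth, g.gridY, g.moduleType));
        }
        return child;
    }

    // Expression: only the first gene read at each grid reference is
    // instantiated; later genes at the same reference remain dormant.
    Map<String, ModuleGene> express() {
        Map<String, ModuleGene> grid = new LinkedHashMap<>();
        for (ModuleGene g : genes) {
            grid.putIfAbsent(g.gridX + "," + g.gridY, g);
        }
        return grid;
    }
}
```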

 

I showed the system to several users experienced in the use of digital music production tools (myself included). These users were of a more technical persuasion and they were impressed by the quality and diversity of the sounds (I could tell this in part from the faces they were pulling). They also confirmed that the system was unique. Along with the other requests above, they asked for a more descriptive circuit diagram and the ability to edit the circuit via the diagram. Phil Burk has recently announced a visual editor for JSyn circuits [31]. It could well be possible to plug that system on top of AudioServe to allow users to adjust circuits through direct manipulation of the modules in the circuit as well as by random mutation. They also commented positively on the large amount of modulation apparent in the circuits, as shown in this quote from professional musician Tom Jenkinson:

…you can really tell that there are loads of oscillators modulating each-other - its well lush!

This is not surprising when you consider that normal synthesiser set-ups use at most 6 oscillators, whereas an AudioServe circuit can have many more. The multiple layers of modulation available increase this effect: one module can modulate many others. The question of MIDI control of the circuits was also raised. This would allow circuits that produce a distinct note to be run at different pitches, so they could play sequenced melodies. Given the non-standard structure of the circuits (they were evolved with no consideration of how their pitch might be changed), it is not clear exactly how to adjust the pitch of a circuit. Sampling the output of a circuit to a digital audio file such as a WAV and playing it back through a sampler would solve this problem; a sketch of this approach follows.
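As a minimal sketch of the sampler-based workaround, assuming the circuit's output has already been captured into a buffer of samples (for example from a WAV file), the code below repitches that buffer by resampling it at the equal-tempered ratio 2^(semitones/12), which is how a typical sampler responds to a MIDI note offset. The Repitch class is an illustrative assumption and not part of AudioServe or JSyn.

```java
// Hypothetical sketch: repitch a recorded circuit output as a sampler would,
// by reading through the buffer at a rate of 2^(semitones/12) with linear
// interpolation between neighbouring samples.
public class Repitch {
    static float[] repitch(float[] samples, int semitones) {
        double rate = Math.pow(2.0, semitones / 12.0);  // equal-tempered ratio
        int outLen = (int) (samples.length / rate);
        float[] out = new float[outLen];
        for (int i = 0; i < outLen; i++) {
            double pos = i * rate;                      // read position in the source
            int idx = (int) pos;
            double frac = pos - idx;
            float a = samples[Math.min(idx, samples.length - 1)];
            float b = samples[Math.min(idx + 1, samples.length - 1)];
            out[i] = (float) (a * (1.0 - frac) + b * frac);  // linear interpolation
        }
        return out;
    }
}
```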

 

Conclusion

 

In this project the author set out to achieve several goals. A stable, usable system has been made, accessible to anyone via the internet, that allows them to evolve interesting sounds in real time. A system that is unique among software sound synthesisers, in that it exploits the internet facilities available as a result of running on a multi-purpose PC, has been implemented. An extensible, modular platform for future development has been established. A genetic encoding scheme that can encode a wide variety of variable-architecture sound synthesis circuits has been implemented. A developmental process that induces causal spread, by including extra causal factors in the genotype-to-phenotype mapping, has been used effectively. A networked, distributed population model that performs transient local hill climbs as well as hyperplane sampling has been implemented.

 

The project has failed to achieve some of its initial goals and has suggested several future ones. These are as follows. Population dynamics could be investigated more fully. More detailed user selection tracking, and more advanced, automatic fitness functions based on this information, could be designed. Automatic evolution towards a template sound, based on Fourier analysis of sounds, could be implemented; a sketch of such a fitness function follows. Genome evolution can currently only occur by mutation, so mating and crossover could be a fruitful area for investigation. User-oriented features such as persistent local storage of genomes, visual circuit editing and digital recording of circuit output could be implemented. These issues provide much motivation for further development of the AudioServe system.
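As an illustration of the proposed Fourier-based fitness function, the sketch below compares the magnitude spectrum of a candidate circuit's output with that of a template sound and treats a smaller spectral distance as higher fitness. A naive DFT is used to keep the example self-contained; the class and method names are hypothetical and no such code exists in the current AudioServe system.

```java
// Hypothetical sketch of an automatic fitness measure: evolve circuits whose
// output spectrum is close to that of a template sound.
public class SpectralFitness {
    // Magnitude spectrum over the first half of the DFT bins (naive DFT).
    static double[] magnitudeSpectrum(float[] x) {
        int n = x.length;
        double[] mags = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0.0, im = 0.0;
            for (int t = 0; t < n; t++) {
                double phase = 2.0 * Math.PI * k * t / n;
                re += x[t] * Math.cos(phase);
                im -= x[t] * Math.sin(phase);
            }
            mags[k] = Math.sqrt(re * re + im * im);
        }
        return mags;
    }

    // Euclidean distance between the two spectra; a lower value means the
    // candidate is closer to the template and therefore fitter.
    static double spectralDistance(float[] candidate, float[] template) {
        double[] a = magnitudeSpectrum(candidate);
        double[] b = magnitudeSpectrum(template);
        double sum = 0.0;
        for (int k = 0; k < Math.min(a.length, b.length); k++) {
            double d = a[k] - b[k];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}
```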