Yee-King Adaptive - MSc Thesis



AudioServe is a system, based on principles from evolutionary and adaptive systems research, that allows users to evolve interesting and novel sounds in real time. The sounds are of the abstract variety: highly modulated waveforms ranging from gentle soundscapes to aggressive rhythmic noises. The system employs a genetic algorithm (GA) with a distributed population model in which transient, small, local island populations evolve and share sounds via a persistent central population.

The sounds are generated by interconnected modules in virtual frequency and amplitude modulation (FM/AM) circuits. FM/AM circuits generate complex waveforms by combining several simpler waveforms; specifically, one simple waveform modulates the frequency or amplitude of another simple waveform to produce a more complex output. Variable-size genomes encode parameters for the modules in these variable-architecture circuits, such as their connection-forming behaviour and module type. The circuits evolve by mutation of their genomes, so the sounds the circuits make are heard to change gradually through successive generations.

The system is presented to the user as a Java applet embedded in a web page. The distributed evolution is achieved by means of a web-accessible database server that receives, stores and transmits sounds to the Java applet client programs.

Steps have been taken to ensure a smooth fitness landscape in which the user feels they have some control over the evolution of their population: the genetic encoding is robust in that it always decodes to a valid circuit; the mutation function has an implicit conservative dynamic whereby locus values are unlikely to oscillate from one extreme to another, instead changing gradually; and the user can control the mutation rate, so they decide how far to travel in the search space with each round of mutations.
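The FM/AM principle described above can be sketched in a few lines. This is a minimal illustration, not the thesis code: the class and parameter names (`fmSample`, `modDepth`, etc.) are my own, and a real synthesis circuit would chain many such modules.

```java
// Minimal sketch of FM and AM synthesis: one sine "modulator" shapes
// the frequency or amplitude of a sine "carrier".
public class FmAmSketch {

    // Frequency modulation: the modulator perturbs the carrier's phase,
    // which enriches the output spectrum with sidebands.
    public static double fmSample(double t, double carrierHz,
                                  double modHz, double modDepth) {
        double modulator = Math.sin(2 * Math.PI * modHz * t);
        return Math.sin(2 * Math.PI * carrierHz * t + modDepth * modulator);
    }

    // Amplitude modulation: the modulator scales the carrier's amplitude.
    public static double amSample(double t, double carrierHz,
                                  double modHz, double modDepth) {
        double modulator = 1.0 + modDepth * Math.sin(2 * Math.PI * modHz * t);
        return modulator * Math.sin(2 * Math.PI * carrierHz * t);
    }
}
```

Sampling either function at the audio rate (e.g. `t = n / 44100.0`) yields a complex waveform from two simple ones, which is the basic operation the evolved circuits compose.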

Full text

Source code
