Yee-King Adaptive - publications



Visit my Google Scholar Profile
  • M. Yee-King, M. d'Inverno. Social machines for education driven by feedback. In Proceedings of the First International Workshop on the Multiagent Foundations of Social Computing, AAMAS-2014, Paris, France, May 6 2014.

    Download: Social machines for education driven by feedback PDF

  • M. Yee-King, M. d'Inverno. Pedagogical agents for social music learning in Crowd-based Socio-Cognitive Systems. In Proceedings of the First International Workshop on the Multiagent Foundations of Social Computing, AAMAS-2014, Paris, France, May 6 2014.

    Download: Pedagogical agents for social music learning in Crowd-based Socio-Cognitive Systems PDF

  • M. Yee-King, M. Krivenski, H. Brenton, A. Grimalt-Reynes, M. d’Inverno. Designing educational social machines for effective feedback. 8th International Conference on e-learning. Lisbon, Portugal, 15-18 July, 2014.

    Download: Designing educational social machines for effective feedback PDF

  • Matthew Yee-King, Roberto Confalonieri, Dave De Jonge, Katina Hazelden, Carles Sierra, Mark d'Inverno, Leila Amgoud, Nardine Osman. 'Multiuser museum interactives for shared cultural experiences: an agent-based approach'. Proceedings of the 2013 International Conference on Autonomous Agents and Multi-agent Systems.

    ACM digital library link

    Download: Multiuser museum interactives for shared cultural experiences: an agent-based approach PDF

  • Livecoding for SuperCollider and live alto flute: Matthew Yee-King and Finn Peters

    Computer Music Journal DVD, Winter 2011, Vol. 35, No. 4, Pages 119-137
  • An autonomous timbre matching improviser Matthew John Yee-King, ICMC 2011

    See it online
  • Progress Report on the EAVI BCI Toolkit for Music: Musical Applications of Algorithms for use with consumer brain computer interfaces Mick Grierson, Chris Kiefer, Matthew Yee-King, ICMC 2011

  • A Comparison of Parametric Optimization Techniques for Musical Instrument Tone Matching

    Yee-King, Matthew; Roth, Martin. 130th AES Convention, 2011.

    Parametric optimisation techniques are compared in their abilities to elicit parameter settings for sound synthesis algorithms which cause them to emit sounds as similar as possible to target sounds. A hill climber, a genetic algorithm, a neural net and a data driven approach are compared. The error metric used is the Euclidean distance in MFCC feature space. This metric is justified on the basis of its success in previous work. The genetic algorithm offers the best results with the FM and subtractive test synthesizers but the hill climber and data driven approach also offer strong performance. The concept of sound synthesis error surfaces, allowing the detailed description of sound synthesis space, is introduced. The error surface for an FM synthesizer is described and suggestions are made as to the resolution required to effectively represent these surfaces. This information is used to inform future plans for algorithm improvements.

    Download: the thesis chapter this paper was a short version of... A Comparison of Parametric Optimization Techniques for Musical Instrument Tone Matching PDF

    AES library link
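    The core of the comparison above, the Euclidean distance in MFCC feature space driving a parametric optimiser, can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the hypothetical `features` function stands in for rendering a patch and extracting real MFCCs, and the hill climber shown is just one of the optimisers the paper compares.

```python
import math
import random

def features(params):
    # Toy stand-in for an MFCC feature vector: the paper renders the
    # candidate sound and extracts real MFCCs; here a small vector is
    # derived from the two synth parameters directly.
    a, b = params
    return [math.sin(a) + b, math.cos(b) * a, a * b]

def mfcc_distance(f1, f2):
    # Euclidean distance in (stand-in) MFCC feature space, the error
    # metric used throughout the paper.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(f1, f2)))

def hill_climb(target_feats, steps=2000, step_size=0.05, seed=1):
    # Repeatedly perturb the current parameter setting and keep the
    # perturbation whenever it reduces the distance to the target.
    rng = random.Random(seed)
    current = [rng.uniform(0, 2), rng.uniform(0, 2)]
    best_err = mfcc_distance(features(current), target_feats)
    for _ in range(steps):
        candidate = [p + rng.gauss(0, step_size) for p in current]
        err = mfcc_distance(features(candidate), target_feats)
        if err < best_err:
            current, best_err = candidate, err
    return current, best_err

target = features([1.2, 0.7])  # pretend these came from a target sound
params, err = hill_climb(target)
```

    In the paper the same metric drives the genetic algorithm and the data driven approach as well; only the search strategy changes.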


  • SynthBot: a software synthesizer programmer

    Download by clicking the title above...

    Matthew Yee-King, Martin Roth, International Computer Music Conference 2008.

    This work presents a software synthesizer programmer, SynthBot, which is able to automatically find the settings necessary to produce a sound similar to a given target. As modern synthesizers become more capable and the underlying synthesis architectures more obscure, the task of programming them to produce a desired sound becomes more time consuming and complex. SynthBot is presented as an automated solution to this problem. A stochastic search algorithm, in this case a genetic algorithm, is used to find the parameters which produce the most similar sound to the target. Similarity is measured by the sum squared error between the Mel Frequency Cepstrum Coefficients (MFCCs) of the target and candidate sounds. The system is evaluated technically to establish its ability to effectively search the space of possible parameter settings. A pilot study is then described where musicians compete with SynthBot to see who is the most competent synthesizer programmer, where each competitor rates the other using their own metrics of sound similarity. The outcome of these tests suggests that the system is an effective "composer's assistant".
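    The genetic algorithm described in the abstract can be sketched as follows. This is a minimal illustration under toy assumptions, not SynthBot itself: the hypothetical `toy_mfcc` function replaces rendering a patch to audio and extracting real MFCCs, while the fitness, crossover and mutation steps follow the general scheme the abstract describes.

```python
import random

def toy_mfcc(params):
    # Stand-in for rendering a patch and extracting MFCCs; SynthBot
    # renders real audio, here each parameter just shapes one
    # coefficient directly.
    return [p * p - p for p in params]

def sum_squared_error(a, b):
    # The similarity measure from the paper: sum squared error
    # between target and candidate MFCCs.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def evolve(target_mfcc, n_params=4, pop_size=40, generations=80, seed=3):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]

    def fitness(ind):
        return -sum_squared_error(toy_mfcc(ind), target_mfcc)

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            mum, dad = rng.sample(parents, 2)
            cut = rng.randrange(1, n_params)      # one-point crossover
            child = mum[:cut] + dad[cut:]
            i = rng.randrange(n_params)           # single-gene mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

target = toy_mfcc([0.8, 0.2, 0.5, 0.9])  # features of a "target sound"
best = evolve(target)
err = sum_squared_error(toy_mfcc(best), target)
```

    Note that the optimiser only has to match the sound, not the original parameters: because `toy_mfcc` is not one-to-one, several different patches can yield the same (near-zero) error, which is exactly the situation with real synthesizers.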

  • The Evolving Drum Machine.

    Click the title to download.

    Matthew Yee-King. MusiCAL workshop, 9th European Conference on Artificial Life, September 2007.

    Evolving through a series of target kits, namely TR606 - TR707 - TR808 - TR909.


    The expectation of the listener from house and techno music seems to be that percussion sounds will maintain the same timbre for the duration of a piece of music. For the composers of such musics the synthesizing of drum sounds of a quality equal to those available from commercial drum machines or samples is difficult and seems unnecessary. A system is presented here which provides a unique method for the composition of rhythmic patterns with dynamic timbres. A genetic algorithm using a heterogeneous island population model is applied to the problem of percussion sound synthesizer design. Multiple percussion sounds are evolved simultaneously towards different targets where the targets are audio files specified by the user. The fitness function driving the evolution compares the evolving sounds to the target sounds in the frequency domain, awarding higher scores for closer matches. The system was tested using a simple step sequencer interface, as found in classic drum machines and a MIDI controlled version has also been implemented. The system provides the user (and listener) with a tangible sense of timbral transformation as the performance proceeds, where the timbres move ever closer to the target sounds. This represents an effective application of an artificial life technique to real time, algorithmically enhanced music composition.
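    The heterogeneous island population model described above, several targets evolved simultaneously with occasional migration between islands, can be sketched as below. This is a toy version under stated assumptions: the hypothetical `render` function stands in for synthesising a percussion sound and taking its spectrum, and the mutation-only islands are a simplification of the real GA.

```python
import random

def render(params):
    # Stand-in for synthesising a percussion sound and taking its
    # spectrum; the real system compares audio in the frequency domain.
    return [p * 2.0 for p in params]

def spectral_error(a, b):
    # Closer spectral matches score lower (i.e. higher fitness).
    return sum((x - y) ** 2 for x, y in zip(a, b))

def evolve_islands(targets, n_params=3, island_size=20, generations=60,
                   migrate_every=10, seed=5):
    # Heterogeneous island model: one island per target sound, with a
    # periodic ring migration of each island's best individual.
    rng = random.Random(seed)
    islands = [[[rng.random() for _ in range(n_params)]
                for _ in range(island_size)] for _ in targets]

    def err(ind, target):
        return spectral_error(render(ind), target)

    for gen in range(generations):
        for k, island in enumerate(islands):
            island.sort(key=lambda ind: err(ind, targets[k]))
            survivors = island[: island_size // 2]
            children = []
            while len(children) < island_size - len(survivors):
                parent = rng.choice(survivors)
                children.append([min(1.0, max(0.0, p + rng.gauss(0, 0.05)))
                                 for p in parent])
            islands[k] = survivors + children
        if gen % migrate_every == 0 and len(islands) > 1:
            bests = [min(isl, key=lambda ind: err(ind, targets[k]))
                     for k, isl in enumerate(islands)]
            for k in range(len(islands)):
                islands[(k + 1) % len(islands)][-1] = list(bests[k])
    return [min(isl, key=lambda ind: err(ind, targets[k]))
            for k, isl in enumerate(islands)]

# two target "drum sounds", e.g. successive kits to morph between
targets = [render([0.9, 0.1, 0.5]), render([0.2, 0.8, 0.3])]
bests = evolve_islands(targets)
```

    In the real system the user hears the evolving kit through a step sequencer, so the gradual convergence of each island towards its target audio file is audible as the timbral transformation the abstract describes.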

  • An Automated Music Improviser Using a Genetic Algorithm Driven Synthesis Engine

    Matthew Yee-King. Presented at the EvoMusart workshop, evo* 2007; published in Applications of Evolutionary Computing, Lecture Notes in Computer Science, volume 4448 (LNCS 4448), pages 567-577. Springer, April 2007.

    Genetic algorithm sound synthesizer sound example (featuring Finn Peters on sax, Tom Skinner on drums and CPU+my algorithm on synth sounds).

    The improviser on its own, playing against an Eric Dolphy solo


    This paper describes an automated computer improviser which attempts to follow and improvise against the frequencies and timbres found in an incoming audio stream. The improviser is controlled by an ever-changing set of sequences which are generated by analysing the incoming audio stream (which may be a feed from a live musician) for its physical and musical properties such as pitch and amplitude. Control data from these sequences is passed to the synthesis engine where it is used to configure sonic events. These sonic events are generated using sound synthesis algorithms designed by an unsupervised genetic algorithm, where the fitness function compares snapshots of the incoming audio to snapshots of the audio output of the evolving synthesizers in the spectral domain in order to drive the population to match the incoming sounds. The sound generating performance system and the sound designing evolutionary system operate in real time, in parallel, to produce an interactive stream of synthesised sound. An overview of related systems is provided, the system is described, and some preliminary results are presented.
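    The analysis stage described in the abstract, turning an incoming audio stream into control sequences of pitch and amplitude events, can be sketched as below. This is a rough toy under stated assumptions, not the paper's analyser: RMS stands in for the amplitude estimate and a zero-crossing count for the pitch tracker, and the 8 kHz sample rate is arbitrary.

```python
import math

SR = 8000  # assumed sample rate for this sketch

def analyse_frame(frame):
    # Very rough pitch and amplitude estimates standing in for the
    # paper's analysis: RMS for amplitude, positive-going
    # zero-crossing rate for pitch.
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a < 0 <= b)
    pitch_hz = crossings * SR / len(frame)  # one such crossing per cycle
    return pitch_hz, rms

def build_sequence(signal, frame_size=800):
    # Chop the stream into frames and turn each into a control event
    # (pitch, amplitude) that a synthesis engine could act on.
    events = []
    for i in range(0, len(signal) - frame_size + 1, frame_size):
        pitch, amp = analyse_frame(signal[i : i + frame_size])
        events.append({"pitch_hz": pitch, "amp": amp})
    return events

# a one-second 440 Hz test tone standing in for a live musician's feed
tone = [0.5 * math.sin(2 * math.pi * 440 * n / SR) for n in range(SR)]
events = build_sequence(tone)
```

    In the full system these events would configure sonic events in the synthesis engine, whose synthesizers are themselves being evolved in parallel to match the incoming timbres.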

  • Virtual and Physical Interfaces for Collaborative Evolution of Sound

    Sam Woolf and Matthew Yee-King, Contemporary Music Review, Volume 22, Number 3 / September 2003


    Interactive evolution with genetic algorithms can be used to facilitate the rapid development of interesting sonic forms. This paper describes two rather different and innovative systems that allow multiple users to evolve sound collaboratively. The Sound Gallery was conceived as an interactive installation artwork where the movements of a group of physically present participants are tracked over time and influence the evolution of sound-modifying hardware circuits. AudioServe is a tool that allows visitors to a web-based interface to evolve sounds by mutating virtual frequency and amplitude (FM/AM) modulation circuits left on the server by previous users. The two projects eventually became linked when the physical interface system designed for the Sound Gallery was connected to an adapted version of the audio-synthesis engine built for AudioServe. The two systems are described, the techniques used to create them are explained and some of the issues involved in collaborative sound evolution are discussed.
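    The interactive-evolution loop common to both systems can be sketched in miniature. This is a hypothetical illustration, not AudioServe's code: an FM patch is held as a small parameter dictionary, mutation produces candidate variants, and "fitness" is simply which variant the user chooses to keep.

```python
import math
import random

def mutate(patch, rng, sigma=0.1):
    # Produce a variant of an FM patch by jittering each parameter;
    # in interactive evolution the listener, not a metric, decides
    # which variants survive.
    return {k: max(0.0, v * (1 + rng.gauss(0, sigma)))
            for k, v in patch.items()}

def fm_sample(patch, t):
    # Classic two-operator FM: a carrier whose phase is modulated by
    # a modulator scaled by the modulation index.
    m = math.sin(2 * math.pi * patch["mod_hz"] * t)
    return math.sin(2 * math.pi * patch["car_hz"] * t + patch["index"] * m)

def interactive_generation(parent, choose, n_children=4, seed=7):
    # One round of interactive evolution: offer mutants and keep the
    # one the user (here, a callback) prefers.
    rng = random.Random(seed)
    children = [mutate(parent, rng) for _ in range(n_children)]
    return choose(children)

parent = {"car_hz": 220.0, "mod_hz": 110.0, "index": 2.0}
# stand-in "user" who prefers the brightest patch (highest index)
best = interactive_generation(parent,
                              lambda cs: max(cs, key=lambda p: p["index"]))
```

    AudioServe's collaborative twist is that the surviving patches are left on the server, so later visitors continue evolving circuits begun by earlier ones; in the Sound Gallery the `choose` step is replaced by tracked physical movement of the participants.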

