THE NESS PROJECT


PHYSICAL MODELLING SYNTHESIS, 2012
STEFAN BILBAO (CA)

The Next Generation Sound Synthesis (NESS) project, which ran from 2012 to 2016 at the University of Edinburgh, was a major effort devoted to extending the possibilities of synthetic sound generated on a computer. The algorithms used were all based on purely physical descriptions of musical instruments, both real and imaginary, of various types, including brass, strings and percussion; some were emulated entirely in 3D. Because such methods are essentially large-scale simulations, and therefore computationally expensive, they were run on parallel hardware, in particular graphics processing units (GPUs). Musicians were invited to Edinburgh to experiment with the sound synthesis algorithms and to generate fully spatialised multichannel pieces of music.
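At the core of this kind of physical modelling synthesis are time-stepping schemes, typically finite-difference methods, that update the state of a simulated instrument sample by sample. The Python sketch below is a rough, hypothetical illustration only, not code from the NESS project: it simulates a plucked string using the 1D wave equation with a crude damping factor, and all parameter values are illustrative. The project's own models are far larger, include stiffness, loss and nonlinearity, and run on GPUs.

```python
import numpy as np

# Minimal finite-difference simulation of a plucked string: the 1D wave
# equation with a crude damping factor. Illustrative only; not NESS code.
SR = 44100                 # audio sample rate (Hz)
dur = 1.0                  # output duration (s)
f0 = 110.0                 # fundamental frequency (Hz), illustrative
loss = 0.9995              # per-step damping factor, illustrative

c = 2.0 * f0               # wave speed for a string of unit length
k = 1.0 / SR               # time step (s)
N = int(1.0 / (c * k))     # grid intervals: as many as stability allows
h = 1.0 / N                # actual grid spacing
lam = c * k / h            # Courant number, <= 1 for stability

# Raised-cosine "pluck" as the initial displacement
x = np.linspace(0.0, 1.0, N + 1)
u = np.zeros(N + 1)        # displacement at the current time step
ctr, wid = 0.3, 0.1
m = np.abs(x - ctr) < wid
u[m] = 0.5 * (1.0 + np.cos(np.pi * (x[m] - ctr) / wid))
u_prev = u.copy()          # displacement one step earlier

out = np.zeros(int(dur * SR))
read = int(0.7 * N)        # output tap position along the string

for n in range(out.size):
    u_next = np.zeros_like(u)
    # Second-order update of the wave equation, fixed ends at both boundaries
    u_next[1:-1] = (loss * (2.0 * u[1:-1] - u_prev[1:-1])
                    + lam ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    out[n] = u_next[read]
    u_prev, u = u, u_next

out /= np.max(np.abs(out)) + 1e-12   # normalise before writing to a file
```

The pointwise update in the inner loop is the kind of operation that the full-scale simulations parallelise across GPU threads; the sketch keeps it in NumPy purely for readability.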

BIO

Stefan is currently a Reader in the Acoustics and Audio Group at the University of Edinburgh. He studied physics at Harvard (BA, 1992) and electrical engineering at Stanford (MSc 1996, PhD 2001), while working at the Center for Computer Research in Music and Acoustics (CCRMA), and spent two years at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris. He was previously a lecturer at the Sonic Arts Research Centre at Queen's University Belfast, and a postdoctoral research fellow at the Stanford Space Telecommunications and Radio Science Laboratory. He is the author of two monographs and more than 100 journal articles and conference papers.

ness.music.ed.ac.uk
