AVATAR Machine Learning Improvising Software

December 14th, 2020

At IUPUI’s Tavel Lab, Jason Palamara and I have been collaborating on self-driving musical software that listens to a vibraphone player and improvises alongside. The video shows a performance featured at the Summer Intensive in Contemporary Performance Practice (SICPP) on July 31, 2020. The Avatar program is a machine-learning-enabled “choice engine” that provides a dynamically sensitive duet while listening to a live vibraphone performance. The initial version is geared for use with vibraphone, with additional instruments to follow. Using this system, the musician improvises on the vibraphone while the software listens, closely following the performance. The package employs a Markov-chain model culled from Scott Deal’s improvisations. This mindfile database allows the software to generate novel content in Scott Deal’s style. While the Markov transition database provides note-to-note transitions, the AvatarPlayer makes use of this data in several ways: throughout a performance, it cycles through five playback behaviors (favor repetition, favor novelty, favor four notes, favor chords, and favor phrases), each of which draws on the database differently.
Music by Scott Deal, Software Design by Jason Palamara
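To illustrate the idea behind the note-to-note transition database, here is a minimal sketch of a first-order Markov model in Python. This is not the AvatarPlayer’s actual code; the function names, the toy corpus, and the sampling strategy are all illustrative assumptions about how such a “choice engine” could work.

```python
import random

def build_transitions(notes):
    """Count note-to-note transitions in a performance (list of MIDI pitches)."""
    table = {}
    for a, b in zip(notes, notes[1:]):
        table.setdefault(a, {})
        table[a][b] = table[a].get(b, 0) + 1
    return table

def next_note(table, current, rng=random):
    """Sample the next note given the current one, weighted by observed counts."""
    choices = table.get(current)
    if not choices:
        # Hypothetical fallback: jump to any note seen in the corpus
        return rng.choice(list(table))
    pitches = list(choices)
    weights = [choices[p] for p in pitches]
    return rng.choices(pitches, weights=weights, k=1)[0]

# Toy "mindfile": a short improvised phrase as MIDI pitches
corpus = [60, 62, 64, 62, 60, 67, 64, 62]
table = build_transitions(corpus)

# Generate an eight-note response seeded on the opening pitch
phrase = [60]
for _ in range(7):
    phrase.append(next_note(table, phrase[-1]))
```

A behavior such as “favor repetition” or “favor novelty” could then be modeled by reweighting these transition counts before sampling, which is one plausible reading of how the five playback behaviors use the same database differently.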


Goldstream Variations Solo Version, vibraphone, machine learning, and electronics

December 12th, 2020

Composed and performed by Scott Deal, Goldstream Variations (2012) creates an interconnected system of live music, electronics, and machine learning algorithms. The variations are scored for one to seven musicians on undetermined acoustic instruments, together with electronic/computer artists. The makeup of this grouping shapes the aural character of the performance space through the arrangement of performers and loudspeakers. Each page of the score constitutes one variation, performed in heterophonic fashion as an ensemble. The acoustic musicians’ performances are engaged by the various computer artists. The variations are designed for performance either in a single physical space or distributed telematically between multiple sites over high-bandwidth Internet. Machine learning is incorporated into the work via the *ml application developed by Ben Smith, which is “trained” with a library of motifs from the score. View this performance.
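As a rough illustration of training on a library of motifs, the sketch below matches a live fragment of pitches against stored motifs by edit distance. This is not Ben Smith’s *ml implementation; the functions, the distance metric, and the example motifs are hypothetical, meant only to show one way software could relate incoming material to a motif library.

```python
def edit_distance(a, b):
    """Levenshtein distance between two pitch sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def closest_motif(library, fragment):
    """Return the stored motif most similar to the live fragment."""
    return min(library, key=lambda motif: edit_distance(motif, fragment))

# Hypothetical motif library (MIDI pitch tuples) and a live input fragment
library = [(60, 62, 64), (67, 65, 64, 62), (72, 71, 72)]
best = closest_motif(library, (60, 62, 65))
```

In a real-time setting, a match like this could trigger an electronic response associated with the recognized motif.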