This project, a collaboration between urbanSTEW and ASU Physics Professor Dr. Vaiana, aims to create a multifaceted artwork that brings research on intrinsically disordered proteins to life. Makezine recently featured our work in progress; read more here: http://makezine.com/magazine/the-amyloid-project/
My main contribution to the project is the audio design and music composition. The audio exists in two different works: the main work is the installation pictured above, and the secondary work is a piece for dance, visuals, and sound.
IDP 1 audio from urbanSTEW on Vimeo.
The installation:
The sound design of the interactive sculpture has two different worlds, one ‘positive’ and one ‘negative’. A positive sound is largely musical, with some elements of user modulation. As more ‘positive’ connections are made throughout the sculpture (in this case, people grabbing the right combination of red or blue bars), a generative music score is revealed. A ‘negative’ connection reveals mechanical sounds of various kinds. Neither mode of interaction is preferred over the other; rather, the interaction encourages connections to be made across the structure.
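For a concrete (if simplified) picture of that logic, here is a minimal C++ sketch. The bar count, the “one red plus one blue bar” pairing rule, and all function names are my own illustrative assumptions, not the installation’s actual code, which lives in Pure Data and on the Arduino.

```cpp
// Minimal sketch of the positive/negative interaction logic described above.
// Bar colors, the pairing rule, and names are illustrative assumptions.
#include <algorithm>
#include <array>
#include <cstdio>

enum class BarColor { Red, Blue };

struct Bar {
    BarColor color;
    bool squeezed = false;
};

// Assume a "positive" connection is one squeezed red bar paired with one
// squeezed blue bar; the real rule set in the installation may differ.
int countPositiveConnections(const std::array<Bar, 8>& bars) {
    int red = 0, blue = 0;
    for (const auto& b : bars) {
        if (!b.squeezed) continue;
        (b.color == BarColor::Red ? red : blue)++;
    }
    return std::min(red, blue);  // each red+blue pair counts as one connection
}

void updateSoundWorld(const std::array<Bar, 8>& bars) {
    int positive = countPositiveConnections(bars);
    if (positive > 0)
        std::printf("reveal %d layer(s) of the generative score\n", positive);
    else
        std::printf("play mechanical 'negative' sounds\n");
}

int main() {
    std::array<Bar, 8> bars{};
    for (std::size_t i = 0; i < bars.size(); ++i)
        bars[i].color = (i % 2 == 0) ? BarColor::Red : BarColor::Blue;
    bars[0].squeezed = bars[1].squeezed = true;  // one red and one blue bar grabbed
    updateSoundWorld(bars);
}
```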
Technical information: The structure uses pressure sensors under each bar, wired in series with a pager motor that makes the bar vibrate when squeezed. This data is sent to an Arduino board, parsed, then sent to the main computer. The audio is designed in Pure Data; all sounds are synthesized from scratch. The musical score is created via a Markov table that sets the probability of a note or rhythm occurring before or after another note.
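To make the Markov-table idea concrete, here is a minimal C++ sketch of a first-order transition table choosing the next note. The notes and probabilities are placeholders; the actual tables live inside the Pure Data patch.

```cpp
// Sketch of a Markov table: for each note, a weighted list of notes that
// may follow it. Values here are made up for illustration.
#include <cstdio>
#include <map>
#include <random>
#include <utility>
#include <vector>

using Note = int;  // MIDI note number

std::map<Note, std::vector<std::pair<Note, double>>> transition = {
    {60, {{62, 0.5}, {64, 0.3}, {67, 0.2}}},
    {62, {{60, 0.4}, {64, 0.6}}},
    {64, {{60, 0.3}, {62, 0.3}, {67, 0.4}}},
    {67, {{60, 0.7}, {64, 0.3}}},
};

Note nextNote(Note current, std::mt19937& rng) {
    const auto& row = transition.at(current);
    std::vector<double> weights;
    for (const auto& entry : row) weights.push_back(entry.second);
    std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
    return row[pick(rng)].first;
}

int main() {
    std::mt19937 rng{std::random_device{}()};
    Note n = 60;
    for (int i = 0; i < 8; ++i) {   // generate a short phrase
        n = nextNote(n, rng);
        std::printf("%d ", n);
    }
    std::printf("\n");
}
```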
NPR story that features the audio.
Amyloid Structure Build from urbanSTEW on Vimeo.
The performance work:
The design for the performance work is taken from similar tools I designed for the installation, though used in entirely different ways. For this piece I used custom software modules developed for Pure Data called “externals”, which function much like plug-ins. These externals create harmonic and rhythmic sequences using a very specific process based on social interaction. For example, a pitch has friends, and it spends more time with some of these friends than with others. Get four friends together and you have a group (or, in music, a chord); depending on who is in the group, the social dynamic may change. Sometimes the group may just get some coffee and talk about current events; other times it may go streaking across the quad. It all depends on who is in the group. Each chord is built on this group dynamic. Depending on how the software is set, the harmonic result will be either very pleasant or a little “wild”.
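As a rough illustration of this “friends” process (not the actual externals), the C++ sketch below gives each pitch a weighted list of friends and gathers four of them into a chord. The table, weights, and function names are assumptions for illustration only.

```cpp
// Sketch: each pitch keeps a weighted list of pitches it "spends time with";
// a chord is built by repeatedly drawing friends of friends.
#include <cstdio>
#include <map>
#include <random>
#include <set>
#include <utility>
#include <vector>

using Pitch = int;  // pitch class, 0-11

// friendTable[p] holds (friend, time spent together) pairs. Placeholder data.
std::map<Pitch, std::vector<std::pair<Pitch, double>>> friendTable = {
    {0, {{4, 0.5}, {7, 0.3}, {11, 0.2}}},
    {4, {{0, 0.4}, {7, 0.4}, {9, 0.2}}},
    {7, {{0, 0.5}, {4, 0.3}, {2, 0.2}}},
};

Pitch drawFriend(Pitch p, std::mt19937& rng) {
    auto it = friendTable.find(p);
    if (it == friendTable.end() || it->second.empty()) return p;  // a loner pitch
    std::vector<double> weights;
    for (const auto& entry : it->second) weights.push_back(entry.second);
    std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
    return it->second[pick(rng)].first;
}

// Gather friends into a "group" (a chord); who joins shapes the result.
std::set<Pitch> buildChord(Pitch root, int size, std::mt19937& rng) {
    std::set<Pitch> chord{root};
    Pitch current = root;
    for (int tries = 0; (int)chord.size() < size && tries < 64; ++tries) {
        current = drawFriend(current, rng);
        chord.insert(current);
        if (friendTable.find(current) == friendTable.end())
            current = root;  // loners hand the choice back to the root
    }
    return chord;
}

int main() {
    std::mt19937 rng{std::random_device{}()};
    for (Pitch p : buildChord(0, 4, rng)) std::printf("%d ", p);
    std::printf("\n");
}
```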
The rhythmic portion is similar: it creates sequences that follow these same principles, but instead of pitches it uses rhythmic values. A rhythm or beat has different qualities such as syncopation, strong beat vs. weak beat, density, length, tempo, etc. All of these are considered when choosing “friends” for another note that will occur later in time.
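The sketch below suggests one way the same idea could extend to rhythm: each rhythmic cell carries a few of the qualities listed above, and candidates closer in that quality space are more likely to be chosen as “friends”. The specific qualities, distance measure, and weighting are illustrative assumptions, not the externals’ actual logic.

```cpp
// Sketch: choose the next rhythmic "friend" by similarity of qualities.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct RhythmCell {
    double syncopation;  // 0 = squarely on the beat, 1 = heavily syncopated
    double density;      // notes per beat
    double length;       // length in beats
};

// Cells that are closer in this quality space get a higher weight.
double closeness(const RhythmCell& a, const RhythmCell& b) {
    double d = std::hypot(a.syncopation - b.syncopation,
                          a.density - b.density,
                          a.length - b.length);
    return 1.0 / (1.0 + d);
}

// Pick the index of the next cell from a pool, weighted by closeness.
std::size_t pickNext(const RhythmCell& current,
                     const std::vector<RhythmCell>& pool, std::mt19937& rng) {
    std::vector<double> weights;
    for (const auto& cell : pool) weights.push_back(closeness(current, cell));
    std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
    return pick(rng);
}

int main() {
    std::mt19937 rng{std::random_device{}()};
    std::vector<RhythmCell> pool = {{0.1, 2.0, 1.0}, {0.6, 4.0, 0.5}, {0.9, 1.0, 2.0}};
    RhythmCell current{0.2, 2.0, 1.0};
    std::printf("next cell index: %zu\n", pickNext(current, pool, rng));
}
```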
Finally, the sound design is taken partially from elements of the research used to construct the work. Since much of Dr. Vaiana’s work is based on how certain proteins move, much of the sound involves movement from one state to another. For example, a single sound event could be a click, a tone, or a sample. That event may occur again at a random point or at a regular interval. It may choose to join other events as it sees them “getting along” together to form denser textures, or it may disappear altogether. The sounds range from noise-based to pitch-based, and from organized to unorganized.
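As a toy illustration of this behavior (not the piece’s actual sound engine), the C++ sketch below lets each event recur regularly or with jitter, occasionally recruit a companion to thicken the texture, or fall silent. All probabilities, thresholds, and names are made up.

```cpp
// Sketch: sound events recur, recruit companions, or disappear over time.
#include <cstdio>
#include <random>
#include <vector>

struct SoundEvent {
    const char* kind;  // "click", "tone", "sample", ...
    double interval;   // seconds between recurrences
    bool regular;      // true = fixed interval, false = randomized
    bool alive = true;
};

void step(std::vector<SoundEvent>& events, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<SoundEvent> joined;
    for (auto& e : events) {
        if (!e.alive) continue;
        double when = e.regular ? e.interval : e.interval * (0.5 + u(rng));
        std::printf("%s fires again in %.2fs\n", e.kind, when);
        if (u(rng) < 0.2)        // events that "get along" recruit company
            joined.push_back({e.kind, e.interval * 0.5, e.regular});
        else if (u(rng) < 0.1)   // or the event disappears altogether
            e.alive = false;
    }
    events.insert(events.end(), joined.begin(), joined.end());
}

int main() {
    std::mt19937 rng{std::random_device{}()};
    std::vector<SoundEvent> events = {{"click", 1.0, true}, {"tone", 2.5, false}};
    for (int i = 0; i < 4; ++i) step(events, rng);
}
```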
Much more development connecting the structure and sound design with the research and other data is underway and will be implemented in future versions. The custom software was programmed in C++ and will soon be made open source.