Final Exhibition/Presentational Scenarios
The final exhibition of Building Synthetic Agency, along with the future scenarios of the Neuroprosthetic Research System, will consist of a series of videos edited down from durational experiences, highlighting different levels of engagement, mechanical acceptance, and transformative moments with each device. While only a handful of scenarios have been designed to date, the current robustness of the system allows for rapid design and deployment of additional scenarios.
Language of Praise
Building Synthetic Agency is an entry point into a limitless poetic space of human agency amplified by technologies both material and immaterial. It functions as one node in the larger Neuroprosthetic Research System, which will be built out from the core functionality developed for this piece.
The major influence on this body of work is Rebecca Horn's early investigation of the body extended into space, notably her various fan pieces, which ranged in scale from head-sized to body-sized. These decidedly material engagements sit in contrast to the influence of Stelarc's technologically mediated yet similarly expanded body.
Janine Antoni's Slumber also serves as a model for long-term performance-installation work involving EEG data; in contrast, this work uses that data as a control mechanism. For that lineage we turn to pioneers of BCI artworks, which emerged from audio projects like Alvin Lucier's 1965 Music for Solo Performer, in which alpha waves triggered vibrations in percussion instruments, through to interactive works like Mariko Mori's Brainwave UFO.
Late in the quarter I was given access to a synthetic voice built from my own, which shifted the focus of my project from physical and haptic investigations of expanding the body schema to a text- and speech-based exploration of the system. Created using the Festival Speech Synthesis System, the voice was built from hours of audio recorded by Nicolás Varchausky last spring. An example of the voice can be heard here.
Building on the Brain-Computer Interface / Open Sound Control system I had been developing for the Neuroprosthetic Research System, I wrote new code to expand the system and dynamically generate Text-To-Speech (TTS) based on my own brainwave activity. A diagram of the work in progress can be seen here.
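The core of that expansion is a mapping from a brainwave reading to a phrase handed off for synthesis. A minimal sketch of that idea follows; the band name, thresholds, and phrases are all illustrative placeholders (the actual piece is built in SuperCollider and Processing, not Python):

```python
# Sketch: map an incoming EEG band-power value to a phrase for synthesis.
# Thresholds and phrases are hypothetical, not the mappings used in the piece.

PHRASES = {
    "calm": "the body settles into the machine",
    "active": "the machine reaches outward",
    "agitated": "the signal exceeds the frame",
}

def classify_alpha(alpha_power: float) -> str:
    """Bucket a normalized alpha-band power reading (0.0-1.0)."""
    if alpha_power < 0.33:
        return "agitated"
    if alpha_power < 0.66:
        return "active"
    return "calm"

def phrase_for(alpha_power: float) -> str:
    """Select the phrase to send to the TTS stage."""
    return PHRASES[classify_alpha(alpha_power)]
```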
During the first half of the quarter my primary programming issue was to decide in which environment to build the final project. I had been working in parallel with many strategies on my last few projects: Processing & Arduino, Supercollider with SCPyduino, Supercollider with raw serial, etc. I had partial prototypes in each and had used different configurations for each of my previous projects. As many are aware using SCPyduino/Firmata requires Arduino 16, which cannot be installed on most of the machines we use, (Java issue…) so I ended up using a combination of Supercollider & Processing which is a bit redundant but allowed for greater flexibility in prototyping or changing functionality.
The introduction of Festival added a new set of issues. The challenge was to create a system that dynamically sent text to Festival for synthesis based on brainwave activity, plus another set of rules to trigger the WAV files that Festival output. I used a combination of Festival, the command line, and Python to create a process that ran perpetually in the background.
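One common way to drive Festival from Python is to shell out to its text2wave utility, which renders a text file to a WAV file. The sketch below shows that pattern; the file paths are assumptions, and it requires Festival to be installed and on the PATH:

```python
# Sketch: hand a line of text to Festival's text2wave for synthesis.
# Assumes Festival is installed; paths here are illustrative.
import subprocess
import tempfile
from pathlib import Path

def build_text2wave_cmd(text_file: Path, wav_out: Path) -> list[str]:
    """Assemble the text2wave invocation (text2wave ships with Festival)."""
    return ["text2wave", str(text_file), "-o", str(wav_out)]

def synthesize(text: str, wav_out: Path) -> None:
    """Write the text to a temp file and render it to wav_out."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(text)
        txt_path = Path(f.name)
    subprocess.run(build_text2wave_cmd(txt_path, wav_out), check=True)
```

A background loop could then poll the brainwave classifier and call synthesize() whenever the state changes, while a separate rule set watches the output directory and triggers playback.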
After this point I was hounded by the following error in Processing, relating to the oscP5 library that allows Processing to work with OSC messages:
```
to a method in your program. please check your code for any
possible errors that might occur in the method where incoming
OscMessages are parsed e.g. check for casting errors, possible
nullpointers, array overflows … .
method in charge : oscEvent java.lang.reflect.InvocationTargetException
```
The system would run despite this essentially worthless error message (its vagueness provides no adequate debugging information in a complex project), with occasional odd behavior, usually confined to brief periods of high latency.
In the last few days of the quarter I rebuilt the entire Processing code line by line, using slightly different strategies for controlling activities and change states. I am happy to report that the final rebuild throws no errors whatsoever.
Electronics & Circuit Design
The primary challenge in the NRS devices centers on motor control as well as housings.
In the case of the fan harness, the use of a DC motor rather than a stepper motor is due to the desired scale of a wearable device. Motor control will be handled using a small potentiometer and pulse width modulation to create controlled gestures that are limited in range, never going past 180 degrees of motion, and creating distinct actions for the fan: slow opening and closing, rapid opening and closing, stuttering moves, etc. Time permitting, I will add a second motor for a twisting motion that will significantly increase the gestural potential of the device. The housing and its relationship to the harness need to be engineered so that the movement of the fan is isolated and doesn't move anything else. A similar requirement for the solenoids means finding a design that provides rigid containment of the solenoid action for haptic reporting.
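The range-limiting and gesture logic can be sketched in software before touching hardware. The snippet below clamps any requested fan angle to the 0–180 degree range and maps it to an 8-bit PWM duty value; the 8-bit range (Arduino analogWrite-style) and the gesture's angle steps are assumptions, not the piece's actual parameters:

```python
# Sketch: clamp fan angles to the mechanically safe range and map to PWM.
# 8-bit duty range and gesture angles are illustrative assumptions.

def fan_angle_to_pwm(angle_deg: float, pwm_max: int = 255) -> int:
    """Clamp to the 0-180 degree range the housing allows, then map
    to a PWM duty value."""
    clamped = max(0.0, min(180.0, angle_deg))
    return round(clamped / 180.0 * pwm_max)

def stutter_gesture(steps: int = 4) -> list[int]:
    """One of the 'distinct actions': a stuttering open, alternating
    partial advances with small retreats."""
    angles, pos = [], 0.0
    for _ in range(steps):
        pos += 60.0                 # advance
        angles.append(pos)
        pos -= 20.0                 # brief retreat
        angles.append(pos)
    return [fan_angle_to_pwm(a) for a in angles]
```

On the hardware side the same clamp would sit between the potentiometer reading and the PWM output, guaranteeing the fan never overdrives the housing.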
For the wall mounted rack & pinion system, a significant redesign at the outset of the quarter addressed many of the control and housing issues, but a rebuild is needed to examine how well these strategies work.
The Neuroprosthetic Research System functions as an experimental landscape in which to explore new aesthetic experiences of the body, or more generally the self, expanded into technologically mediated situations. I am most interested in locating a threshold for our neuroplasticity in remapping into abstract mechanisms.
Does that threshold have to do with the proximity of a prosthetic device to our natural body?
Is it tied up in situations that are closer to natural bodily function and expression? Is it a function of time and duration with the device?
At what point does a Neuroprosthetic device become integrated into our sense of self and what techniques support that process, or conversely hinder it?
473Movie on Flickr.
V$-drawing_machine_003 on Flickr.