The performer, surrounded by sound and images, interacts with them through an EEG (electroencephalogram) interface that measures the performer's brain activity. The sounds and images, already stored in the computer, are then modified by the incoming brain data via MAX/MSP-Jitter. In this way, the performer determines how those audiovisual combinations are revealed to the audience.
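The mapping from brain activity to media control can be sketched in code. The following is a minimal, hypothetical illustration in Python, not the piece's actual implementation (which runs as a MAX/MSP-Jitter patch): it estimates the power of an EEG frequency band with a naive DFT and normalizes it into a 0..1 control value of the kind that could drive a playback parameter. The function names `band_power` and `to_control`, the band limits, and the sampling rate are all assumptions for the sake of the example.

```python
import math

def band_power(samples, fs, lo, hi):
    # Hypothetical helper: sum DFT power over the bins falling in
    # [lo, hi] Hz. A real system would use an FFT or a filter bank.
    n = len(samples)
    power = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(samples[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n)
                      for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

def to_control(alpha, beta):
    # Map relative alpha activity to a 0..1 value that a host
    # environment (e.g. a MAX/MSP-Jitter patch) could receive as a
    # control message to modify stored sounds and images.
    total = alpha + beta
    return alpha / total if total > 0 else 0.0

# Example: a 10 Hz sine stands in for one second of alpha-dominant EEG
# sampled at 128 Hz (synthetic data, not a real recording).
fs = 128
eeg = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(eeg, fs, 8, 12)   # alpha band, ~8-12 Hz
beta = band_power(eeg, fs, 13, 30)   # beta band, ~13-30 Hz
control = to_control(alpha, beta)    # close to 1.0 for this signal
```

In a live setting, a value like `control` would be recomputed over a sliding window of EEG samples and forwarded continuously (for instance as an OSC or MIDI message) to the patch that transforms the stored media.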