"Music performance with 'imagery instrument' by real-time categorisation of brain activities" by Kiyoshi Furukawa (Tokyo University of the Arts, Japan), Tomasz M. Rutkowski (University of Tsukuba / RIKEN BSI, Japan), Takayuki Hamano (Japan Science and Technology Agency / RIKEN BSI / Tamagawa University, Japan), Hidefumi Ohmura (Japan Science and Technology Agency, Japan), Reiko Hoshi-Shiba (Tokyo University / RIKEN BSI, Japan), Hiroko Terasawa (University of Tsukuba / JST PRESTO, Japan), and Kazuo Okanoya (Tokyo University / Japan Science and Technology Agency, Japan) We introduce a new musical performance system based on the brain-computer interface (BCI) technology which transforms the brain-wave patterns of musical-chords imagination into structures of music, and presumably, of musical emotion. 'it's almost a song...' is a live performance piece demonstrating this new instrument. Many previous brain-wave-generated music converted EEG signals to sounds directly, that gives least control on musical context. However, in our system, we control and play music by imagining musical chords sequentially: we have developed a new music performance system which maps the categorized patterns of the human brain activity to a set of basic elements of music and musical chords. The piece is based on the structured chain of musical chords, an arrangement of the chords in time so that they relate to each other. Our performance is based on this chain, which is a fundamental music structure. We believe that the essence of music, such as emotion and expression, emerges out of this chain. 'it's almost a song...' is performed by a Brain Player (BP) with EEG system and a Clarinet player (CP) on the stage. The BP wears an EEG-measurement cap with electrodes, imagines a chord at a time to play one of three sound categories without physical movement, and produces sound from speakers using the system. The system extracts and categorizes individual brain activity patterns for each sound category using a machine-learning algorithm. The sound is generated in real-time by Max/MSP. The performers play music by repeating this sequence of tasks. The EEG data of the BP is also visualized on the screen in real-time. During a training period preceding each session, the performers practice to produce stable brain activity patterns for three sound categories. Performers for this presentation are Takayuki Hamano (brain player) and Nobuaki Motohama (clarinet), with sound operator, Kiyoshi Furukawa.