How a dancer with ALS used brainwaves to perform live!

#neurotechnology #bci #als #biomedicalengineering
Mariano Gobea Alcoba


The Engineering Architecture of Brain-Computer Interfaces in Kinetic Performance

The integration of Brain-Computer Interfaces (BCI) into performance arts represents a convergence of neurophysiology, signal processing, and real-time control systems. When applied to individuals suffering from Amyotrophic Lateral Sclerosis (ALS), the objective transcends artistic expression; it necessitates a robust, low-latency pipeline capable of mapping cortical activity to kinetic actuators or digital visual environments. This article examines the technical stack, signal acquisition challenges, and control theory required to execute such a system.

The Signal Acquisition Layer

The primary hurdle in high-fidelity BCI integration is the Signal-to-Noise Ratio (SNR) of Electroencephalography (EEG) data. In a performance environment—characterized by electrical noise from stage lighting, movement-induced artifacts (electromyography or EMG), and the inherent impedance fluctuations of dry electrode systems—the acquisition chain must be sophisticated.

High-density EEG systems typically operate at sampling frequencies between 250 Hz and 1000 Hz. To isolate the relevant oscillations, such as Mu (8–13 Hz) or Beta (13–30 Hz) rhythms associated with motor imagery, a multi-stage filtering pipeline is required:

```python
import numpy as np
from scipy.signal import butter, iirnotch, lfilter

def butter_bandpass(lowcut, highcut, fs, order=5):
    nyq = 0.5 * fs
    b, a = butter(order, [lowcut / nyq, highcut / nyq], btype='band')
    return b, a

def preprocess_eeg_signal(data, fs, mains_freq=50.0):
    # Band-pass to the Mu/Beta range (8-30 Hz); this also removes
    # the DC offset and slow electrode drift
    b, a = butter_bandpass(8.0, 30.0, fs, order=5)
    # lfilter is causal, so it is suitable for real-time streaming use
    filtered_data = lfilter(b, a, data)

    # Notch filter for 50 Hz / 60 Hz power-line interference; a sharp
    # notch (or comb filter) is critical in studio environments
    b_notch, a_notch = iirnotch(mains_freq, Q=30.0, fs=fs)
    return lfilter(b_notch, a_notch, filtered_data)
```

Feature Extraction and Latency Constraints

For live performance, the temporal latency between neural intent and system output must remain below 100 milliseconds to maintain the illusion of seamless synchronization. Feature extraction generally relies on the Power Spectral Density (PSD) calculated via Welch’s method or Fast Fourier Transform (FFT) over sliding windows.
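As a minimal sketch of this step (assuming a hypothetical 250 Hz sampling rate and a one-second sliding window, not the production system's actual parameters), band power can be estimated from the Welch PSD:

```python
import numpy as np
from scipy.signal import welch

def band_power(window, fs, low, high):
    """Integrate the Welch PSD estimate between `low` and `high` Hz."""
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), 256))
    mask = (freqs >= low) & (freqs <= high)
    return float(np.sum(psd[mask]))  # approximate band power

fs = 250
t = np.arange(fs) / fs                     # one 1-second window
window = np.sin(2 * np.pi * 10 * t)        # synthetic 10 Hz "mu" oscillation
mu = band_power(window, fs, 8.0, 13.0)     # large for this signal
rest = band_power(window, fs, 20.0, 30.0)  # beta band, near zero here
```

In a streaming deployment this function would be called on each hop of the sliding window, trading window length against the sub-100 ms latency budget.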

In the context of a performer with ALS, motor command signals are often attenuated or redistributed to non-primary motor cortices. Consequently, the system must utilize Common Spatial Patterns (CSP) to maximize the variance between task-specific states (e.g., "imagine movement" vs. "rest").

```c
// Pseudocode for a real-time feature-extraction buffer; the FFT helper
// routines are assumed to live elsewhere in the DSP layer.
typedef struct {
    float data[1024];  // ring buffer holding the most recent samples
    int head;          // write index for the next incoming sample
    float fs;          // sampling frequency, needed to map Hz to FFT bins
} SignalWindow;

float calculate_mu_power(const SignalWindow* window) {
    // Perform an FFT and integrate power in the 8-13 Hz band, then
    // normalize against total signal power to account for impedance shift
    float total_power = compute_fft_total_power(window->data, 1024, window->fs);
    float mu_power = compute_fft_band_power(window->data, 1024, window->fs,
                                            8.0f, 13.0f);
    return mu_power / total_power;
}
```
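The CSP step mentioned above can be sketched as a generalized eigendecomposition of the two class-conditional covariance matrices. This is a textbook CSP formulation, not the author's implementation, and the synthetic data and channel counts are purely illustrative:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: directions maximizing class-A variance
    # relative to the pooled variance of both classes
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)[::-1]  # most class-A-discriminative first
    return eigvecs[:, order].T         # rows are spatial filters

rng = np.random.default_rng(0)
# Synthetic 4-channel data: channel 0 dominates class A, channel 3 class B
a = rng.standard_normal((20, 4, 500)) * np.array([3.0, 1, 1, 1])[None, :, None]
b = rng.standard_normal((20, 4, 500)) * np.array([1.0, 1, 1, 3.0])[None, :, None]
W = csp_filters(a, b)
```

The first and last rows of `W` are the most discriminative filters; the log-variance of the spatially filtered signal is then the feature fed to the classifier.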

Mapping Cortical Intent to Digital Actuators

Once the neural features are extracted, they are mapped to an OSC (Open Sound Control) or MIDI stream. This mapping layer acts as a heuristic bridge. Because raw EEG data is volatile, applying direct linear mapping often results in "jittery" performance. Implementing a Kalman filter or a Simple Exponential Smoothing (SES) algorithm is essential to ensure that the kinetic output—whether it is a lighting sequence, a robotic movement, or a digital visual synthesis—appears intentional rather than stochastic.
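The Simple Exponential Smoothing variant can be sketched in a few lines; the smoothing constant `alpha` here is a hypothetical tuning value, not one taken from a production rig:

```python
class SmoothedControl:
    """Simple Exponential Smoothing of a noisy control feature."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha  # lower alpha = smoother but laggier output
        self.state = None

    def update(self, raw_value):
        if self.state is None:
            self.state = raw_value          # initialize on first sample
        else:
            # s_t = s_{t-1} + alpha * (x_t - s_{t-1})
            self.state += self.alpha * (raw_value - self.state)
        return self.state

ctrl = SmoothedControl(alpha=0.2)
jittery = [0.5, 0.9, 0.1, 0.8, 0.2, 0.7]
smooth = [ctrl.update(v) for v in jittery]  # much narrower output range
```

A Kalman filter adds a motion model on top of this, at the cost of extra tuning; for a one-dimensional intensity parameter, SES is often sufficient.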

The Control Feedback Loop

The performer experiences a closed-loop system. As the dancer observes the visual response to their neural state, they undergo neuroplastic modulation, effectively "learning" to control the BCI by altering their focus to achieve the desired visual output. This is a form of operant conditioning. The system must adapt to the performer's shifting baseline over the course of the performance; static thresholds will invariably fail as the performer fatigues.
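One way to sketch such an adaptive baseline (the drift rate and activation margin below are illustrative assumptions, not measured values) is to let the threshold track a slow running estimate of the feature:

```python
class AdaptiveThreshold:
    """Activation threshold that follows the performer's drifting baseline."""
    def __init__(self, rate=0.01, margin=1.5):
        self.rate = rate        # how quickly the baseline tracks the signal
        self.margin = margin    # activation multiple above baseline
        self.baseline = None

    def update(self, feature):
        if self.baseline is None:
            self.baseline = feature
        else:
            # Slow exponential update, so brief bursts barely move the baseline
            self.baseline += self.rate * (feature - self.baseline)
        return feature > self.margin * self.baseline

gate = AdaptiveThreshold()
# Baseline drifts upward as the performer fatigues; the threshold follows
for v in [1.0, 1.05, 1.1, 1.15]:
    gate.update(v)
active = gate.update(2.5)  # a deliberate burst still crosses the threshold
```

A refinement would freeze the baseline update while the gate is active, so sustained intentional activity does not inflate the performer's apparent resting state.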

Artifact Rejection: The Performance Hurdle

The most significant technical challenge remains the "non-neural" artifact. In a performance context, even minimal physical movement by the dancer creates EMG artifacts that swamp the EEG signal. Standard commercial BCI rigs often utilize Independent Component Analysis (ICA) to strip these artifacts in post-processing, but for live implementation, this is computationally expensive.

Modern implementations utilize a "trigger-based" gating system:

  1. Detection: Identify large amplitude spikes in the high-frequency domain (>40Hz).
  2. Masking: If an artifact is detected, the system holds the last known valid state for the duration of the interference (usually 100–300ms).
  3. Recovery: Re-initialize the signal buffer to avoid "smearing" the artifact into subsequent calculations.
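The three steps above can be sketched as a small gating state machine; the spike threshold and hold duration here are illustrative assumptions rather than values from a real deployment:

```python
class ArtifactGate:
    """Trigger-based gating: detect spikes, mask, then resume."""
    def __init__(self, spike_threshold=100.0, hold_samples=2):
        self.spike_threshold = spike_threshold  # raw amplitude limit
        self.hold_samples = hold_samples        # mask duration after a spike
        self.hold_remaining = 0
        self.last_valid = 0.0

    def process(self, amplitude, feature):
        if abs(amplitude) > self.spike_threshold:
            self.hold_remaining = self.hold_samples  # 1. detection
        if self.hold_remaining > 0:
            self.hold_remaining -= 1
            return self.last_valid                   # 2. masking: hold state
        self.last_valid = feature                    # 3. valid again
        return feature

gate = ArtifactGate(spike_threshold=100.0, hold_samples=2)
# One 500 uV spike masks the next two feature values
out = [gate.process(a, f) for a, f in
       [(10, 0.4), (500, 0.9), (10, 0.8), (10, 0.7), (10, 0.6)]]
# out == [0.4, 0.4, 0.4, 0.7, 0.6]
```

The buffer re-initialization of step 3 would additionally flush the FFT window when the mask lifts, so the spike never enters a spectral estimate.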

Implications for Human-Machine Augmentation

The technical architecture that allows a performer with ALS to appear live on stage repurposes the BCI itself: the core achievement is the transition from a diagnostic and assistive medical tool to an artistic instrument.

From an engineering perspective, this requires an abstraction layer that treats the brain as a primary input device. The complexity lies not in the hardware—which is increasingly commoditized—but in the signal integrity and the adaptive mapping algorithms that bridge the gap between neurological patterns and real-time output. Future iterations of these systems will likely integrate Transformer-based neural decoding, allowing for more nuanced recognition of complex intent beyond binary (on/off) or linear (amplitude-based) control.

Systems Integration Strategy

Successful implementation requires a distributed approach to ensure redundancy. The acquisition hardware should communicate via a low-latency protocol (e.g., UDP/IP) to a dedicated processing workstation. This workstation isolates the BCI logic from the rendering or motor control logic.
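As a minimal sketch of the transport between the mapping engine and the Performance Controller, an OSC float message can be hand-packed and sent over UDP. The address `/bci/mu` and the port are hypothetical names, not the production namespace:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to the 4-byte boundary OSC requires."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_float_message(address: str, value: float) -> bytes:
    # OSC message = padded address + padded type-tag string + big-endian float32
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

msg = osc_float_message("/bci/mu", 0.42)  # 16-byte datagram payload
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(msg, ("127.0.0.1", 9000))   # Performance Controller endpoint
```

In practice a library such as python-osc would replace the hand-rolled encoder, but UDP's fire-and-forget semantics are exactly what keeps this hop off the latency budget.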

```mermaid
graph LR
    A[Electrodes] --> B[Amplifier/ADC]
    B --> C[DSP Workstation]
    C --> D[Mapping Engine]
    D --> E[OSC Stream]
    E --> F[Performance Controller]
    F --> G[Visuals/Actuators]
```

The separation of the DSP workstation from the Performance Controller is a critical design decision. The DSP workstation handles the high-throughput, computationally intensive math (FFT, filtering, artifact rejection), while the Performance Controller handles the deterministic execution of artistic cues. This modularity allows for the "hot-swapping" of neural models without disrupting the artistic execution flow, an essential requirement for live, high-pressure environments.

As we continue to push the boundaries of neural interfaces, the focus must remain on the robustness of the data pipeline. Reliability in a clinical setting is measured by accuracy; reliability in an artistic setting is measured by the fluidity of the output. When these two metrics align, the result is a profound expansion of human agency through digital architecture.

For organizations looking to integrate advanced sensor fusion, real-time signal processing, or BCI architecture into their internal research or product stacks, please visit https://www.mgatc.com for consulting services.


Originally published in Spanish at www.mgatc.com/blog/brainwaves-dancer-als-performance/