Apple Research Indicates Potential Future AirPods Could Interpret Neural Signals

A recent study from Apple researchers introduces a technique that lets an AI model learn the temporal structure of brain electrical activity without any annotated data. Here’s the breakdown.

## PAirwise Relative Shift

In a new study titled “Learning the relative composition of EEG signals using pairwise relative shift pretraining,” Apple presents PARS, short for PAirwise Relative Shift.

Existing models depend heavily on human-annotated brain-activity data: labels indicating which segments correspond to the Wake, REM, Non-REM1, Non-REM2, and Non-REM3 sleep stages, where seizure events begin and end, and so on.

Apple’s approach was essentially to enable a model to autonomously predict the temporal distances between different segments of brain activity using raw, unlabeled data.

From the study:

> “Self-supervised learning (SSL) provides a compelling strategy for deriving electroencephalography (EEG) representations from unlabeled data, significantly diminishing the reliance on costly annotations for clinical purposes like sleep staging and seizure identification. While current EEG SSL techniques mainly apply masked reconstruction methods such as masked autoencoders (MAE) that capture local temporal patterns, the potential of position prediction pretraining to understand long-range dependencies in neural signals has been underutilized. We introduce PAirwise Relative Shift or PARS pretraining, an innovative pretext task that estimates relative temporal shifts between randomly selected EEG window pairs. Differing from reconstruction-based techniques that target local pattern recovery, PARS promotes encoders to grasp relative temporal composition and long-range dependencies present in neural signals. Through extensive assessment on a variety of EEG decoding tasks, we demonstrate that transformers pretrained with PARS consistently surpass existing pretraining methods in label-efficient and transfer learning contexts, establishing a new framework for self-supervised EEG representation learning.”

In simpler terms, the researchers identified that current methods chiefly instruct models to bridge small gaps in the signal. Consequently, they investigated if an AI could directly learn the broader composition of EEG signals from raw, unlabeled data.

It turns out, it can.

In the paper, they present a self-supervised learning method that predicts how small segments of an EEG signal relate to each other in time, which could improve performance across a range of EEG analysis tasks, from sleep staging to seizure detection.
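To make the pretext task concrete, here is a minimal sketch of how a PARS-style training pair could be sampled: two windows are drawn at random from a raw signal, and the self-supervised label is their relative temporal shift. The function name, window length, and stand-in signal are illustrative assumptions, not Apple’s actual implementation.

```python
import numpy as np

def sample_pars_pair(signal, window_len, rng):
    """Sample two random windows from a 1-D signal and return them
    together with their relative temporal shift (the pretext label)."""
    max_start = len(signal) - window_len
    start_a = rng.integers(0, max_start + 1)
    start_b = rng.integers(0, max_start + 1)
    window_a = signal[start_a:start_a + window_len]
    window_b = signal[start_b:start_b + window_len]
    # The self-supervised target: how far window B sits from window A in time.
    # An encoder would be trained to predict this shift from the two windows,
    # with no human annotations involved.
    relative_shift = start_b - start_a
    return window_a, window_b, relative_shift

rng = np.random.default_rng(0)
eeg = np.sin(np.linspace(0, 100, 5000))  # stand-in for one raw EEG channel
wa, wb, shift = sample_pars_pair(eeg, window_len=250, rng=rng)
print(wa.shape, wb.shape)  # two equal-length windows: (250,) (250,)
```

Because predicting the shift requires reasoning about where each window falls in the overall signal, the encoder is pushed toward the long-range temporal structure the paper emphasizes, rather than the local patterns that masked reconstruction rewards.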

The outcomes were encouraging, as the PARS-pretrained model either outperformed or equaled previous methods on three out of four distinct EEG benchmarks tested.

## But what does it have to do with AirPods?

The four datasets employed by the PARS-pretrained model were:

1. Wearable Sleep Staging (EESM17)
2. Abnormal EEG Detection (TUAB)
3. Seizure Detection (TUSZ)
4. Motor Imagery (PhysioNet-MI)

The first dataset, EESM17, stands for Ear-EEG Sleep Monitoring 2017, which consists of “overnight recordings from 9 subjects with a 12-channel wearable ear-EEG system and a 6-channel scalp-EEG system.”

Though the ear-EEG utilizes different electrodes than a standard scalp system, it can still independently capture numerous clinically significant brain signals, such as sleep stages and specific seizure-related patterns.

The use of the EESM17 dataset is notable because Apple has integrated several health sensors into its wearables in recent years. It’s easy to envision future AirPods equipped with EEG sensors, much as the AirPods Pro 3 recently gained a photoplethysmography (PPG) sensor for heart rate monitoring.

And here’s the twist: in 2023, Apple submitted a patent application for “a wearable electronic device for measuring biosignals of a user.”

The patent explicitly mentions ear-EEG devices as an alternative to a scalp system, while also outlining their limitations:

> “Brain activity can be monitored using electrodes affixed to the scalp of a user. The electrodes may, at times, be situated inside or around the outer ear of the user. Monitoring brain activity using electrodes positioned in or around the outer ear may be advantageous due to factors such as decreased device mobility and lesser visibility of the electrodes compared to other devices that necessitate electrodes to be placed on visible areas around the user’s scalp. However, to obtain precise measurements of brain activity using an ear-electroencephalography (EEG) device, the ear-EEG device might need to be tailored to the user’s ear (e.g., potentially customized for the user’s concha, ear canal, tragus, etc.), and may require customization for different users so that the electrodes on the ear-EEG device can maintain continuous contact with the user’s body. Given that an ear’s dimensions and shape vary from individual to individual, and that a single user’s ear dimensions and the size and shape of components such as the user’s ear canal, […]”