The Spikiss Project

Composing music from awake neuronal activity

Alain Destexhe and Luc Foubert

June 2016.

On this page, we explain how we translated human brain activity into music. In the song "Wake Beats", we used recordings of single neurons in human subjects, and in particular the spikes of identified excitatory or inhibitory neurons, to drive the music. Of course, this was done under certain arbitrary rules to make the music enjoyable! We explain below how we did this, step by step.

Recordings of single neurons

We provide here a general explanation, for the non-specialist, of the type of signals that we use in this project. If you are familiar with neural recording techniques, you can skip this section.

Neurons emit electrical impulses called "spikes", which can be recorded using microelectrodes, as shown in Figure 1. This example shows the signals from 4 microelectrodes over a few seconds. In these 4 simultaneous recordings, the spikes are clearly visible as vertical deflections (green, also indicated by red dots). Each of these spikes is generated by a single neuron, and each electrode sees a different neuron, so we see here the activity of 4 neurons in total.

Figure 1: Example of neuronal activity recorded by four micro-electrodes in the brain.

The 4 signals shown come from 4 different micro-electrodes, each of which detects the impulse activity (spikes) of a different neuron. The four simultaneously-recorded neurons are labeled "Cell 1" to "Cell 4". For each cell, the spikes appear as sharp, brief deflections (vertical green bars). The red dots above each signal indicate the time of each spike.

Electrophysiologists routinely record neurons with such microelectrodes. Instead of representing the full signal, as in Fig. 1, it is more compact to detect the spikes numerically (red dots) and display them as a series of dots, as in Fig. 2. This compact representation contains only the timing of the spikes of each cell, and is called a "raster". We will see several examples of such rasters below.

Figure 2: Raster representation of the spike times of four neurons.

In this compact representation, only the occurrence and timing of the spikes is shown. Each red dot represents one spike detected from Fig. 1. This representation of an ensemble of neurons as series of dots is called a raster (also "rasterplot") and is routinely used in neurophysiology.
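The step from Fig. 1 to Fig. 2 can be sketched in a few lines of code. Below is a minimal, illustrative spike detector: it marks a spike at each upward threshold crossing of the signal. The sampling step, threshold, and toy signal are assumptions for the example, not the actual parameters used in the recordings.

```python
import numpy as np

def detect_spikes(signal, dt, threshold):
    """Return spike times (in seconds) at each upward threshold
    crossing of the signal (a simple spike-detection sketch)."""
    above = signal > threshold
    # a spike is counted where the signal goes from below to above threshold
    crossings = np.where(above[1:] & ~above[:-1])[0] + 1
    return crossings * dt

# Toy signal: flat baseline with two clear spikes (assumed values)
dt = 0.001  # 1 ms sampling step
signal = np.zeros(1000)
signal[200] = 5.0   # spike at t = 0.2 s
signal[700] = 5.0   # spike at t = 0.7 s
spike_times = detect_spikes(signal, dt, threshold=2.0)
```

The list of spike times per cell is exactly the information a raster plot displays.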

How to convert neural recordings into music?

To convert neural recordings, and in particular spike recordings, into music, we proceed by examples. We take advantage of the fact that spikes are impulses, well defined in time, so we can use them to trigger a particular sound. When a neuron emits a spike, its sound is played, such that each neuron has its own sound, as shown in the examples below.

Example 1. As a first example, let us consider one single neuron (for example "Cell 2" above). One can select a "bass" sound, and play this sound each time the neuron emits a spike.

To listen to that example, click on "Bass Neuron".

Example 2. Let us now try a more complicated combination, where 4 different neurons are played simultaneously. To keep the identity of each neuron, each one plays a different percussion instrument in a drum kit. The raster of these 4 neurons is:

Figure 3: Raster of 4 neurons in an awake human subject, mapped on the C-major diatonic scale.

These neurons were chosen because they are particularly rhythmic. They are all inhibitory ("fast spiking") neurons.

To listen to that example, click on "Drumkit Neurons".

It is amazingly rhythmic, isn't it?

Example 3. We now take 5 different neurons, corresponding to the raster:

Figure 4: Raster of 5 neurons in an awake human subject (C-major diatonic scale).

As in Fig. 3, these neurons are inhibitory.

We now associate these neurons with a "steel drum" sound, where each neuron corresponds to one note on the steel drum. In other words, each time a neuron spikes, the note specific to that neuron is played once. Thus, each neuron has its own note, and the melody is created by 5 neurons playing together on a steel drum.
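One way to realize "one note per neuron" in code is to mix a short decaying tone at each spike time. The sketch below uses plain sine tones with an exponential decay as a stand-in for the steel-drum sound; the note frequencies (C4 to G4), spike times, and envelope are all assumptions for illustration.

```python
import numpy as np

FS = 44100  # audio sample rate (Hz)

# One fixed pitch per neuron: frequencies of C4..G4 (assumed mapping)
freq_of = {1: 261.63, 2: 293.66, 3: 329.63, 4: 349.23, 5: 392.00}

def render(spike_trains, dur=2.0, note_len=0.25):
    """Mix a short decaying sine tone at each spike time;
    each neuron always plays its own pitch."""
    out = np.zeros(int(dur * FS))
    t = np.arange(int(note_len * FS)) / FS
    for neuron, times in spike_trains.items():
        # percussive envelope: fast attack, exponential decay
        tone = np.sin(2 * np.pi * freq_of[neuron] * t) * np.exp(-t / 0.08)
        for start in times:
            i = int(start * FS)
            out[i:i + tone.size] += tone[: out.size - i]
    return out

# Hypothetical spike times (seconds) for two of the neurons
audio = render({1: [0.0, 1.2], 3: [0.5]})
```

Before writing `audio` to a WAV file one would normalize it, since simultaneous spikes can sum above full scale.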

To listen to that example, click on "Steeldrum Neurons".

Example 4. In this example, we show that sounds other than percussion are also possible. For example, one can take a slow sound (with a slow attack and a long decay time) and associate it with the same neurons as in Example 3. As before, each neuron has its own note.

To listen to that example, click on "Woo-woo Neurons".

Example 5. We can now combine some of the examples above. Combining Example 1 (bass) with Example 4 (woo-woo) gives a more elaborate combination.

To listen to that example, click on "Woo-woo and Bass".

Example 6. Going still further in complexity, let us now consider the activity of 14 excitatory neurons, corresponding to the following raster:

Figure 5: Raster of 14 neurons in an awake human subject, mapped on the C-major diatonic scale.

These are all excitatory ("regular spiking") neurons, selected from a larger set of recorded neurons.

Let us associate these neurons with an instrument, a synthetic bell. As above, to keep track of each neuron's identity, we assign a specific note to each neuron; that note is played once whenever the neuron fires a spike.
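With 14 neurons, one octave of a diatonic scale (7 notes) is not enough, so the neurons have to be spread over more than one octave. A plausible sketch of such a mapping, using MIDI note numbers (middle C = 60) and wrapping successive neurons into higher octaves, is shown below; the exact assignment used for the song is not specified, so this is only an assumed scheme.

```python
# C-major scale degrees within one octave, as semitone offsets from C
MAJOR = [0, 2, 4, 5, 7, 9, 11]

def neuron_to_midi(index, base=60):
    """Map neuron index 0, 1, 2, ... onto successive C-major scale
    degrees, wrapping into higher octaves (base=60 is middle C)."""
    octave, degree = divmod(index, len(MAJOR))
    return base + 12 * octave + MAJOR[degree]

# 14 neurons cover exactly two octaves of the C-major scale
notes = [neuron_to_midi(i) for i in range(14)]
```

Each neuron then keeps a unique, fixed pitch, which is what preserves its identity in the mix.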

To listen to that example, click on "Synthetic Bells".

Example 7. Finally, to illustrate a first combination of several instruments, we combined the examples above into a mix.

To listen to that example, click on "First mix". In this example, all neurons considered were simultaneously recorded by a system of 100 micro-electrodes. Also note that in all of the above, the activity was slowed down by about 30% compared to real-time.

A full song using neurons recorded in an awake subject

Now we are ready to build our first song. We have created a piece called "Wake Beats", whose first minute is taken from Example 7 above:

In the first part of the song (the first minute), the activity of the different neurons is strictly respected, as is the relative timing of the spikes of the different cells. In the second part (after the first minute), however, we followed a different strategy. We selected, from the neuronal activity, periods where the activity was particularly rhythmic and interesting (based on totally arbitrary criteria!). We isolated these periods, defining "loops" that can be played several times and in any order. This was done for the different sections, such as the bass and rhythmic parts, as well as for the melodic activity. In addition, we used more sophisticated sounds from analog and/or digital synthesizers. So in this second part, the relative timing of one cell against another was not always respected, but all of the music comes from neuronal activity.
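One simple, admittedly arbitrary way to pre-select candidate loop periods is to keep only time windows where the firing rate is high. The sketch below is an assumption about how such a selection could be automated; the actual loops for the song were chosen by ear, and the window length, rate threshold, and spike times here are made up.

```python
import numpy as np

def rhythmic_windows(spike_times, win=2.0, min_rate=5.0):
    """Return (start, end) windows whose firing rate exceeds
    min_rate spikes/s -- candidate periods for building loops."""
    spike_times = np.asarray(spike_times)
    end = float(spike_times.max())
    windows = []
    t = 0.0
    while t < end:
        n = int(np.sum((spike_times >= t) & (spike_times < t + win)))
        if n / win >= min_rate:
            windows.append((t, t + win))
        t += win
    return windows

# Toy spike train: dense in the first 2 s, sparse afterwards
windows = rhythmic_windows(list(np.linspace(0.0, 1.9, 12)) + [2.5, 3.5])
```

A musician would then audition the returned windows and keep only the ones that actually groove.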

Alain Destexhe and Luc Foubert, UNIC, CNRS, Gif sur Yvette, June 2016.

Soundclick link

Internet Archive link1, link2, link3

Soundcloud link (available soon...)

Copyright note:

We decided to distribute this music freely, under the protection of a Creative Commons "share alike non commercial" licence (see below). This means that you are welcome to share and edit the present work, on the condition that you give us proper acknowledgment and also distribute it freely (and give us a copy!). No commercial use please, unless we have an agreement. The official licence information is pasted below:

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. You are free to copy, distribute, display, and perform the work, as well as to make derivatives based on it, under the conditions that (1) the authors are acknowledged, (2) no commercial use is made, and (3) the same "Share Alike" licence is applied to any use of this work.