All posts by sekisushai

Coordinate system

Ambisonics is usually described in a spherical coordinate system. Ambitools uses the following coordinate system, shown in Fig. 1:

$x = r \cos(\theta) \cos(\delta), \quad y = r \sin(\theta) \cos(\delta), \quad z = r \sin(\delta).$


Figure 1. Spherical coordinate system in use. A point $P(x,y,z)$ is described by its radius $r$, azimuth $\theta$, and elevation $\delta$.

The azimuth angle $\theta \in [0^\circ, 360^\circ]$.
The elevation angle $\delta \in [-90^\circ, 90^\circ]$.
The rotation convention is counterclockwise (i.e., the trigonometric direction).

Thus, the main directions from the listener point of view are:

  • Front: $(\theta = 0^\circ, \delta=0^\circ)$
  • Left: $(\theta = 90^\circ, \delta=0^\circ)$
  • Rear: $(\theta = 180^\circ, \delta=0^\circ)$
  • Right: $(\theta = 270^\circ, \delta=0^\circ)$
  • Top: $(\theta = 0^\circ, \delta = 90^\circ)$
  • Bottom: $(\theta = 0^\circ, \delta = -90^\circ)$
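As an illustration, the conversion formulas and the main directions above can be checked with a short script (a minimal sketch; the helper name is ours, not part of Ambitools):

```python
import math

def spherical_to_cartesian(r, azimuth_deg, elevation_deg):
    """Convert radius r, azimuth theta and elevation delta (in degrees)
    to Cartesian coordinates, following the convention above."""
    theta = math.radians(azimuth_deg)
    delta = math.radians(elevation_deg)
    x = r * math.cos(theta) * math.cos(delta)
    y = r * math.sin(theta) * math.cos(delta)
    z = r * math.sin(delta)
    return x, y, z

# Main directions from the listener's point of view:
print(spherical_to_cartesian(1, 0, 0))    # Front: (1, 0, 0)
print(spherical_to_cartesian(1, 90, 0))   # Left:  ~(0, 1, 0)
print(spherical_to_cartesian(1, 0, 90))   # Top:   ~(0, 0, 1)
```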



  • Inputs: $N$
  • Outputs: $(M+1)^2$
  • Gain
  • Radius (for spherical waves)
  • Azimuth
  • Elevation
  • Plane/Spherical Wave toggle
  • Playback Speakers Radius (for spherical wave)


This plug-in encodes $N$ input signals into a 3D Ambisonic sound scene at order $M$ (which produces $(M+1)^2$ output signals). Each input signal represents a source, which is spatialized with the coordinate parameters Radius, Azimuth, and Elevation (see coordinate system). A Gain parameter adjusts the input gain of each source. An example of the GUI is shown in Fig. 1:

Figure 1. hoa_encoder compiled for Linux with JACK-Qt. In this case, $M=5$ and $N=3$. The VU-meters show the output signal level for each Ambisonic component, sorted by row for each Ambisonic order from $m=0$ up to $m=5$.
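In principle, plane-wave encoding multiplies the input signal by spherical-harmonic gains evaluated at the source direction. The following first-order (four-component) sketch illustrates the idea only; it is not the actual ambitools DSP, which runs at higher orders and with its own normalization convention:

```python
import math

def encode_first_order(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into the four first-order Ambisonic
    components (W, Y, Z, X in ACN order, SN3D-like gains).
    Illustrative sketch only, not the ambitools implementation."""
    theta = math.radians(azimuth_deg)
    delta = math.radians(elevation_deg)
    w = sample                                      # order m = 0
    y = sample * math.sin(theta) * math.cos(delta)  # order m = 1
    z = sample * math.sin(delta)
    x = sample * math.cos(theta) * math.cos(delta)
    return [w, y, z, x]

# A source at the front (azimuth 0, elevation 0) feeds only W and X:
print(encode_first_order(1.0, 0, 0))  # [1.0, 0.0, 0.0, 1.0]
```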

Additional information

Plane/Spherical Wave choice

If the check-box Spherical Wave is unchecked, the input is encoded as a plane wave. In this case, the Radius knob has no effect, because a plane wave carries no distance information. Consequently, the input gain does not change when the source moves, since a plane-wave source cannot come closer to or move farther from the origin.

If the check-box Spherical Wave is checked, the input is encoded as a spherical wave, and the Radius knob sets the distance from the source to the origin. Accordingly, the gain of the source increases as it moves closer and decreases as it moves farther away. The gain factor is $1/r$.

WARNING: If you bring the source too close to the origin, the gain can become extremely loud!

To encode a spherical wave with Ambisonics, stabilization filters are used in the DSP. They require the radius of the spherical playback system used at the decoding step, which is set with the Playback Speakers Radius entry. If you bring the source inside the playback system, the low frequencies of the higher-order Ambisonic components are boosted. This “bass-boost” effect increases as the source comes closer.

WARNING: If you bring the source too close to the origin, the low frequencies can be extremely boosted at higher orders!

To limit the aforementioned effects, the minimum possible Radius value is $0.5$ m.
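The distance-gain behaviour described above can be sketched as follows (an illustrative sketch; the function name and the clamping detail are ours, apart from the 0.5 m minimum stated above):

```python
def spherical_wave_gain(r, r_min=0.5):
    """Distance gain 1/r of a spherical-wave source, with the radius
    clamped to the 0.5 m minimum to avoid extreme gains near the origin."""
    return 1.0 / max(r, r_min)

print(spherical_wave_gain(2.0))  # 0.5: twice as far, half the gain
print(spherical_wave_gain(1.0))  # 1.0
print(spherical_wave_gain(0.1))  # 2.0: clamped at r_min = 0.5
```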



  • Inputs: $(M+1)^2$
  • Outputs: $(M+1)^2$
  • Gain
  • Azimuth
  • Elevation
  • On/Off
  • Crossfade duration


This plug-in performs highly selective directional filtering on the input Ambisonic scene: only one direction, set with the Azimuth and Elevation parameters, is retained in the output Ambisonic scene.

With this tool, one can listen in a particular direction and explore the Ambisonic scene. The direct analogy is a flashlight: imagine being in the dark, with sound coming only from the direction you point the flashlight at.

The effect is activated or bypassed with the On/Off toggle. To transition smoothly between the original input scene and the filtered one, a crossfade is applied when the effect is activated or deactivated. The duration of the transition (in seconds) is set with the Crossfade duration parameter. An example of the GUI is shown in Fig. 1:

Figure 1. hoa_beamforming_dirac_to_hoa compiled for Linux with JACK-Qt.
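The On/Off crossfade can be pictured as a linear mix between the original and the filtered scenes over the chosen duration (an illustrative sketch, not the actual ambitools implementation):

```python
def crossfade_gain(t, duration):
    """Linear ramp from 0 to 1 over `duration` seconds after activation."""
    return min(max(t / duration, 0.0), 1.0)

def mix(original, filtered, t, duration):
    """Crossfade each sample pair from the original scene to the filtered one."""
    a = crossfade_gain(t, duration)
    return [(1.0 - a) * o + a * f for o, f in zip(original, filtered)]

# Halfway through a 2-second crossfade, both scenes are mixed equally:
print(mix([1.0, 1.0], [0.0, 0.0], t=1.0, duration=2.0))  # [0.5, 0.5]
```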

Binaural rendering with Jconvolver


This tutorial describes how to render the Ambisonic signals through headphones via binaural convolution.

The principle is as follows: instead of sending the decoded signals to real loudspeakers, each signal is convolved with the Head-Related Impulse Responses (HRIR) between the corresponding loudspeaker and both ears, and the results are summed to obtain two signals: one for the left ear and one for the right ear.
This technique uses the flexibility of Ambisonics to render a three-dimensional surround soundscape through headphones.

Figure 1: The virtual source (red dot) is reproduced with Ambisonics on the spherical loudspeaker array. Each virtual loudspeaker (colored balls) radiates a driving signal. The binaural rendering is made by convolving, for each loudspeaker, the driving signal with the corresponding head-related impulse response between the loudspeaker and the ear (manikin in grey).

The result should be as if you were standing in a spherical loudspeaker array in place of the manikin in Fig. 1.
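The convolve-and-sum principle can be sketched in a few lines (illustrative, offline, pure Python; jconvolver performs the same operation with fast real-time convolution):

```python
def convolve(signal, ir):
    """Direct-form FIR convolution (illustrative; real-time engines
    such as jconvolver use fast partitioned convolution instead)."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def binaural_render(speaker_signals, hrirs_left, hrirs_right):
    """Convolve each loudspeaker driving signal with its left/right HRIR
    and sum everything into two headphone channels."""
    n = len(speaker_signals[0]) + len(hrirs_left[0]) - 1
    left, right = [0.0] * n, [0.0] * n
    for sig, hl, hr in zip(speaker_signals, hrirs_left, hrirs_right):
        for channel, ir in ((left, hl), (right, hr)):
            for k, v in enumerate(convolve(sig, ir)):
                channel[k] += v
    return left, right

# One loudspeaker, an impulse-like driving signal and toy 2-tap HRIRs:
left, right = binaural_render([[1.0, 0.0]], [[0.5, 0.5]], [[1.0, 0.0]])
print(left, right)  # [0.5, 0.5, 0.0] [1.0, 0.0, 0.0]
```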

In addition, head tracking could be used: the movement of your head would be tracked and used to rotate the Ambisonic sound scene accordingly. Thus, you would be like the grey manikin in Fig. 1, but free to rotate your head.

Head-Related Transfer Functions (HRTF) are the frequency-domain equivalent of the time-domain HRIR. Since HRTFs depend strongly on the head and torso geometry, each HRTF set is personal, and the binaural immersion relies heavily on it. This means that if you’re not using your own set of HRTFs, the result may not be as immersive as if you were really standing in a spherical array of loudspeakers.

HRTF set

For the moment, only one HRTF set is available with ambitools. This set is taken from reference [1] and is released under a Creative Commons CC BY-SA 3.0 license; the raw content is available here.

It corresponds to the HRTFs of a Neumann KU-100 dummy head measured in an anechoic environment with a Genelec 8260A loudspeaker, as shown in Fig. 2. The array used here is a Lebedev grid with 50 loudspeakers, which works for 3D Ambisonics up to order 5 [2].

Figure 2: The HRTF set from [1] is obtained for a Neumann KU-100 dummy head and a Genelec 8260A loudspeaker. Photo credit: Philipp Stade.

Using Jconvolver under Linux with Jack

Jconvolver runs with a configuration file and the associated filter impulse responses stored in WAV files. To run the real-time multichannel convolution, follow these steps:

  • The HRIR set is stored in ambitools in the folder FIR/hrir/hrir_lebedev50/. For each virtual loudspeaker there is a left-ear/right-ear pair of WAV files. For example, the pair for the 1st loudspeaker is named hrir_1_L.wav (left ear) and hrir_1_R.wav (right ear), and so on.
  • The configuration file is in the same folder: hrir_lebedev50.conf. Open it in a text editor and replace the line
    #/cd /put/here/your/absolute/path/to/ambitools/FIR/hrir/hrir_lebedev50/

    with the actual absolute path to the folder with the wav files.

  • Once the file is correctly set, save it and run the following command in a terminal from the hrir_lebedev50 directory:
    $ jconvolver hrir_lebedev50.conf

    When the convolution has started, you should see a JACK client with 50 inputs (the 50 loudspeaker signals) and 2 outputs (the left and right headphone channels).

  • Finally, connect the outputs of hoa_decoder_lebedev50 to the inputs of jconvolver. You can do this manually with your JACK connection manager or with the following command in a terminal:
    $ for i in `seq 0 49`; do jack_connect "hoa_decoder_lebedev50:out_$i" "jconvolver:In-$((i+1))"; done

    (Note that this command can be found in the jack_connect_help file in the Documentation folder of ambitools.) The outputs of jconvolver can then be connected to the outputs of your sound card and listened to through headphones. The resulting connections in Claudia (the KXStudio LADISH session manager) are shown in Fig. 3:

Figure 3: Connections for jconvolver in Claudia.

Since the experimental setup in [1] places the loudspeaker at a radius of 3.25 m from the center of the dummy head, this value should be entered in hoa_decoder_lebedev50 for near-field compensation.


[1] B. Bernschütz, “A spherical far field HRIR/HRTF compilation of the Neumann KU 100,” in Proceedings of the 40th Italian (AIA) Annual Conference on Acoustics and the 39th German Annual Conference on Acoustics (DAGA), 2013, p. 29.

[2] P. Lecomte, P.-A. Gauthier, C. Langrenne, A. Garcia, and A. Berry, “On the use of a Lebedev grid for Ambisonics,” in Audio Engineering Society Convention 139, 2015.

Ambitools won the Faust Awards 2016!


The Faust Open Source Software Competition aims at promoting innovative, high-quality free audio software developed with Faust, a functional programming language for real-time signal processing and sound synthesis. The competition is sponsored by GRAME, Centre National de Création Musicale.

The Faust Award 2016 was awarded by an international committee composed of:

  • Jean-Louis Giavitto (IRCAM, Paris, France),
  • Albert Graef (Johannes Gutenberg U., Mainz, Germany),
  • Pierre Jouvelot (Ecole des Mines, Paris, France),
  • Victor Lazzarini (Maynooth U., Maynooth, Ireland),
  • Romain Michon (Stanford U., Palo Alto, USA),
  • Yann Orlarey (GRAME, Lyon, France),
  • Dave Phillips (musician, journalist, and educator, USA),
  • Laurent Pottier (U. Jean Monnet, Saint-Etienne, France),
  • Julius Smith (Stanford U., Palo Alto, USA)

to Ambitools, a set of tools for real-time 3D sound field synthesis using higher order ambisonics (HOA).

Ambitools is developed by Pierre Lecomte, a PhD candidate at the Conservatoire National des Arts et Metiers and Sherbrooke University. The core of the sound processing is written in Faust. The tools include HOA encoders, decoders, binaural filters, HOA signal transformations, a spherical VU-meter, etc., and can be compiled to various plug-in formats under Windows, Mac OSX, and Linux.

The jury praised the quality and the usefulness of Ambitools: a really useful and technically advanced Faust app and an impressive technical achievement! Check the demo.

“Séance d’Hypnose” at Centre Pompidou

For the “Soirées Sonores #5” at the Centre Pompidou, Paris, I collaborated with the French sound designer/composer Théo Radakovitch.

The idea was to spatialize Théo’s piece “La Séance d’Hypnose” with Ambitools and a mobile Ambisonic playback system. Most of the piece was pre-spatialized and played back in a loop, while two monophonic tracks were left to be spatialized as live sound sources. Thus, while the listeners could sit and enjoy the piece, one of them could control the trajectories and spatialize two of the sources with a remote. See also here.

The event took place on Thursday May 16th 2018 inside the work “Respirare l’Ombra” at the Centre Pompidou. Here are some pictures:


A binaural version of the mix is also available below:

1st International Faust Conference

Ambitools was presented with demonstrations at the 1st International Faust Conference (IFC-18) in Mainz, Germany.

The conference aimed at “gathering developers and users of the Faust programming language to present current projects and discuss future directions for Faust and its community”.

In this context, I gave a talk on Ambitools. The video of the talk is visible here:

As well, I gave some demonstrations with my personal Ambisonic playback system.


German television and radio covered the IFC and talked a bit about Ambitools; see here and here (both in German).

Thanks a lot to the IFC committee and the Faust community for this conference, and thanks to Maximilian Schönherr for the interview on German national public radio.