This plug-in encodes $N$ input signals into a 3D Ambisonic sound scene at order $M$ (which produces $(M+1)^2$ output signals). Each input signal represents a source, which is spatialized with the coordinate parameters Radius, Azimuth, and Elevation (see coordinate system). A Gain parameter adjusts the input gain of the source. An example GUI is shown in Fig. 1:
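As an order-1 illustration of the encoding (the actual DSP in ambitools is written in Faust and supports higher orders; the ACN channel ordering and SN3D normalization below are assumptions of this sketch), a mono sample encoded at a given direction produces $(1+1)^2 = 4$ signals:

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode one mono sample as a first-order Ambisonic frame.

    Sketch only: channels follow ACN ordering (W, Y, Z, X) with SN3D
    normalization, which is an assumption here -- the conventions and
    higher-order terms actually used live in the Faust sources.
    Angles are in radians; azimuth is counterclockwise from the front.
    """
    w = sample                                            # omnidirectional
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    return [w, y, z, x]
```

For a source straight ahead (azimuth = elevation = 0), only the W and X channels are excited.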
Plane/Spherical Wave choice
If the check-box Spherical Wave is unchecked, the input is encoded as a plane wave. In this case, the Radius knob has no effect, because a plane wave carries no distance information. Consequently, the gain of the input does not change when the source moves, since the source cannot come closer to or move farther from the origin.
WARNING: If you bring the source too close to the origin, the gain can be extremely loud!
If the check-box Spherical Wave is checked, the input is encoded as a spherical wave and the Radius knob sets the distance from the source to the origin. Accordingly, the gain of the source increases as the source moves closer and decreases as it moves farther away. The gain factor is $1/r$.
WARNING: If you bring the source too close to the origin, the low frequencies can be extremely boosted at higher orders!
To encode a spherical wave with Ambisonics, stabilization filters are used in the DSP. They require knowledge of the radius of the spherical playback system used at the decoding step, which is set with the entry Playback Speakers Radius. If you bring the source inside the playback system, the low frequencies of the higher-order Ambisonic components are boosted. This “bass-boost” effect increases as the source comes closer.
To limit the aforementioned effects, the minimum possible Radius value is $0.5$ m.
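The $1/r$ distance gain and the $0.5$ m lower bound can be sketched as follows (illustration only; the plug-in additionally applies the order-dependent stabilization filters described above):

```python
MIN_RADIUS = 0.5  # metres, the plug-in's lower bound on Radius

def spherical_wave_gain(radius_m):
    """1/r distance gain for a spherical wave, with the Radius value
    clamped at the plug-in's minimum. Sketch of the gain law only."""
    r = max(radius_m, MIN_RADIUS)
    return 1.0 / r
```

Doubling the distance halves the gain; any Radius below $0.5$ m behaves as if it were $0.5$ m, capping the gain at $2$.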
This plug-in performs a highly selective directional filtering on the input Ambisonic scene. That is to say, only one direction, set with the parameters Azimuth and Elevation, is retained in the output Ambisonic scene.
With this tool, one can listen in a particular direction and explore the Ambisonic scene. The direct analogy is a flashlight: imagine being in the dark and hearing sound only from the direction in which you point the flashlight.
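The idea can be illustrated at order 1 with a virtual cardioid microphone pointed at the chosen direction (a sketch only, assuming ACN ordering and SN3D normalization; the plug-in's directional filtering is far more selective, which requires higher orders):

```python
import math

def virtual_cardioid(frame, azimuth, elevation):
    """Extract the signal arriving from (azimuth, elevation) out of a
    first-order frame [W, Y, Z, X] (ACN/SN3D assumed) with a virtual
    cardioid microphone: gain 1 toward the steered direction, 0 for
    the opposite direction. First-order illustration only."""
    w, y, z, x = frame
    ce = math.cos(elevation)
    return 0.5 * (w
                  + x * math.cos(azimuth) * ce
                  + y * math.sin(azimuth) * ce
                  + z * math.sin(elevation))
```

A frontal source (frame [1, 0, 0, 1]) is passed at full gain when the beam points forward and vanishes when the beam points backward.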
The effect is activated/bypassed with the toggle On/Off. In order to transition smoothly from the original input scene to the filtered one, a crossfade is applied when the effect is activated or deactivated. The duration of the transition (in seconds) is set with the Crossfade duration parameter. An example GUI is shown in Fig. 1:
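A linear crossfade between the unfiltered and filtered scenes could look like this (a minimal sketch; the fade law actually used by the plug-in is defined in its Faust source):

```python
def crossfade(dry, wet, t, duration):
    """Mix two multichannel frames: fully `dry` (unfiltered scene) at
    t = 0, fully `wet` (filtered scene) once t reaches `duration`
    seconds, linear in between. Sketch only."""
    a = min(max(t / duration, 0.0), 1.0)  # fade position in [0, 1]
    return [(1.0 - a) * d + a * w for d, w in zip(dry, wet)]
```

Deactivation runs the same fade in the opposite direction, from the filtered scene back to the original one.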
This plug-in performs a rotation of the Ambisonic sound scene around the z-axis. The azimuth rotation angle is counterclockwise and is set with a slider from $0^\circ$ to $360^\circ$ (see coordinate system). An example GUI is shown in Fig. 1.
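At order 1 (ACN channel ordering assumed for this sketch), rotating the scene about the z-axis leaves W and Z untouched and mixes X and Y like a 2-D rotation; higher orders need larger per-order rotation matrices, which the plug-in implements in Faust:

```python
import math

def rotate_z_first_order(frame, angle):
    """Rotate a first-order frame [W, Y, Z, X] (ACN ordering assumed)
    counterclockwise about the z-axis by `angle` radians. W and Z are
    invariant; X and Y transform as a 2-D rotation."""
    w, y, z, x = frame
    c, s = math.cos(angle), math.sin(angle)
    return [w, y * c + x * s, z, x * c - y * s]
```

For example, a source encoded straight ahead ends up at $90^\circ$ to the left after a $90^\circ$ counterclockwise rotation.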
This tutorial describes how to render the Ambisonic signals through headphones via binaural convolution.
The principle is as follows: instead of sending the decoded signals to real loudspeakers, the signals are convolved with the Head-Related Impulse Responses (HRIRs) between each loudspeaker and both ears, then summed to obtain two signals: one for the left ear and one for the right ear.
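The principle can be sketched in a few lines (illustration only; in practice the convolution runs in real time with jconvolver, as described below, and the direct-form loops here would be far too slow):

```python
def convolve(signal, kernel):
    """Direct-form FIR convolution (for illustration; real-time use
    relies on a partitioned convolver such as jconvolver)."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def binauralize(speaker_signals, hrirs_left, hrirs_right):
    """Convolve each virtual-loudspeaker signal with that speaker's
    left and right HRIR and sum, yielding one signal per ear.
    Assumes all HRIRs share the same length."""
    n = len(speaker_signals[0]) + len(hrirs_left[0]) - 1
    left, right = [0.0] * n, [0.0] * n
    for sig, hl, hr in zip(speaker_signals, hrirs_left, hrirs_right):
        for i, v in enumerate(convolve(sig, hl)):
            left[i] += v
        for i, v in enumerate(convolve(sig, hr)):
            right[i] += v
    return left, right
```

With single-tap (delta-like) HRIRs, each ear simply receives a scaled copy of the loudspeaker signal, which makes the summation easy to check by hand.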
This technique allows the flexibility of Ambisonics to be used to render a three-dimensional surround soundscape through headphones.
The result should be as if you were standing in a spherical loudspeaker array in place of the manikin in Fig. 1.
In addition, with head tracking, the movement of your head could be tracked and used to rotate the Ambisonic sound scene accordingly. You would then be like the grey manikin in Fig. 1, but free to rotate your head.
Head-Related Transfer Functions (HRTFs) are the frequency-domain equivalent of time-domain HRIRs. Since HRTFs depend strongly on the head and torso geometry, each HRTF set is personal, and the binaural immersion relies heavily on it. This means that if you are not using your own set of HRTFs, the result may not be as immersive as if you were really standing in a spherical array of loudspeakers.
For the moment, only one set of HRTFs is available with ambitools. This set is taken from reference  and is released under a Creative Commons CC BY-SA 3.0 license; the raw content is available at http://www.audiogroup.web.fh-koeln.de.
It corresponds to the HRTFs of a Neumann KU-100 dummy head in an anechoic environment with a Genelec 8260A loudspeaker, as shown in Fig. 2. The array used here is a Lebedev grid with 50 loudspeakers, which works for 3D Ambisonics up to order 5 .
Using Jconvolver under Linux with Jack
Jconvolver runs with a configuration file and the associated filter impulse responses stored in WAV files. To run the real-time multichannel convolution, follow these steps:
The HRIR set is stored in ambitools in the folder FIR/hrir/hrir_lebedev50/. For each virtual loudspeaker there is a pair of WAV files, one per ear. For example, the pair for the 1st loudspeaker is named hrir_1_L.wav (left ear) and hrir_1_R.wav (right ear), and so on.
The configuration file is in the same folder: hrir_lebedev50.conf. Open it in a text editor and replace the line
with the actual absolute path to the folder with the wav files.
Once the file is correctly set, save it and run the following command in a terminal from the hrir_lebedev50 directory:
$ jconvolver hrir_lebedev50.conf
Once the convolution has started, you should see a JACK client with 50 inputs (the 50 loudspeaker signals) and 2 outputs (the left and right headphone channels).
Finally, connect the outputs of the decoder hoa_decoder_lebedev50 to the inputs of jconvolver. You can do this manually with your JACK connection manager or with the following command in a terminal:
$ for i in `seq 0 49`; do jack_connect "hoa_decoder_lebedev50:out_$i" "jconvolver:In-$((i+1))"; done
(Note that this command can be found in the jack_connect_help file in the Documentation folder of ambitools.) The outputs of jconvolver can then be connected to the outputs of your sound card and listened to through headphones. The resulting connections in Claudia (the KXStudio LADISH session manager) are shown in Fig. 3:
Since the experimental setup in  involves a loudspeaker at a radius of 3.25 m from the center of the dummy head, this value should be entered in hoa_decoder_lebedev50 for near-field compensation.
B. Bernschütz, “A spherical far field HRIR/HRTF compilation of the Neumann KU 100,” in Proceedings of the 40th Italian (AIA) Annual Conference on Acoustics and the 39th German Annual Conference on Acoustics (DAGA), 2013, p. 29.
 P. Lecomte, P.-A. Gauthier, C. Langrenne, A. Garcia, and A. Berry, “On the use of a Lebedev grid for Ambisonics,” in Audio Engineering Society Convention 139, 2015.
The Faust Open Source Software Competition aims at promoting innovative high-quality free audio software developed with Faust, a functional programming language for real-time signal processing and sound synthesis. The competition is sponsored by GRAME, Centre National de Création Musicale.
The Faust Award 2016 was awarded by an international committee composed of:
Jean-Louis Giavitto (IRCAM, Paris, France),
Albert Graef (Johannes Gutenberg U., Mainz, Germany),
Pierre Jouvelot (Ecole des Mines, Paris, France),
Victor Lazzarini (Maynooth U., Maynooth, Ireland),
Romain Michon (Stanford U., Palo Alto, USA),
Yann Orlarey (GRAME, Lyon, France),
Dave Phillips (musician, journalist, and educator, USA),
Laurent Pottier (U. Jean Monnet, Saint-Etienne, France),
Julius Smith (Stanford U., Palo Alto, USA),
to Ambitools, a set of tools for real-time 3D sound field synthesis using higher order ambisonics (HOA).
Ambitools is developed by Pierre Lecomte, a PhD candidate at the Conservatoire National des Arts et Metiers and Sherbrooke University. The core of the sound processing is written in Faust. The tools include HOA encoders, decoders, binaural filters, HOA signal transformations, a spherical VU-meter, etc., and can be compiled to various plug-in formats under Windows, Mac OS X, and Linux.
The jury praised the quality and the usefulness of Ambitools: a really useful and technically advanced Faust app and an impressive technical achievement! Check the demo.
The idea was to spatialize Théo’s piece “La Séance d’Hypnose” with Ambitools and a mobile Ambisonic playback system. Most of the piece was pre-spatialized and played back in a loop, while two monophonic tracks were left to be spatialized as sound sources. Thus, while the listeners could sit and enjoy the piece, one of them could control the trajectories and spatialize two of the sources with a remote. See also here.
The event took place on Thursday, May 16th 2018, inside the work “Respirare l’Ombra” at the Centre Pompidou. Here are some pictures:
A binaural version of the mix is also available below: