Developer: Google
Type: Speech codec
Extension: .lyra
Latest release version: 1.3.2
Free: Yes (Apache-2.0)
Lyra is a lossy audio codec developed by Google that is designed for compressing speech at very low bitrates. Unlike most other audio formats, it compresses data using a machine learning-based algorithm.
The Lyra codec is designed to transmit speech in real time when bandwidth is severely restricted, such as over slow or unreliable network connections.[1] It runs at fixed bitrates of 3.2, 6, and 9 kbit/s and is intended to provide better quality than codecs that use traditional waveform-based algorithms at similar bitrates.[2] Compression is instead achieved with a machine learning algorithm that encodes the input via feature extraction, after which a generative model reconstructs an approximation of the original from those features. The model was trained on thousands of hours of speech recorded in over 70 languages so that it handles a wide range of speakers. Because generative models are more computationally demanding than traditional codecs, Lyra uses a simple model that processes different frequency ranges in parallel to obtain acceptable performance.[3] Its 20 ms frame size imposes 20 ms of latency. Google's reference implementation is available for Android and Linux.
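Because both the frame duration and the bitrates are fixed, the number of bits available per frame follows directly from them. The short Python snippet below makes this arithmetic explicit, assuming for illustration that each 20 ms frame is coded independently:

```python
# Bits available per frame, implied by Lyra's stated 20 ms frame
# duration and its fixed bitrates (illustrative arithmetic only).
FRAME_MS = 20
FRAMES_PER_SECOND = 1000 // FRAME_MS  # 50 frames per second

for bitrate_bps in (3200, 6000, 9000):
    bits_per_frame = bitrate_bps // FRAMES_PER_SECOND
    print(f"{bitrate_bps / 1000:.1f} kbit/s -> {bits_per_frame} bits per frame")
```

At 3.2 kbit/s this leaves only 64 bits (8 bytes) to describe each 20 ms of speech, which illustrates why the decoder must synthesize a plausible waveform from compact features rather than decode the waveform directly.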
Lyra's initial version performed significantly better than traditional codecs at similar bitrates.[4] Ian Buckley at MakeUseOf said, "It succeeds in creating almost eerie levels of audio reproduction with bitrates as low as 3 kbps." Google claims that it reproduces natural-sounding speech and that Lyra at 3 kbit/s beats Opus at 8 kbit/s. Tsahi Levent-Levi wrote that Satin, Microsoft's AI-based codec, outperforms it at higher bitrates.
In December 2017, Google researchers published a preprint on replacing the Codec 2 decoder with a WaveNet neural network. They found that the neural network could extrapolate characteristics of the voice that are not described in the Codec 2 bitstream, yielding better audio quality, and that conditioning on conventional codec features keeps the network computation simpler than in a purely waveform-based network. Lyra version 1 would reuse this overall framework of feature extraction, quantization, and neural synthesis.[5]
Lyra was first announced in February 2021, and in April Google released the source code of its reference implementation. The initial version had a fixed bitrate of 3 kbit/s and around 90 ms of latency. The encoder calculates a log mel spectrogram and performs vector quantization to store the spectrogram in a data stream. The decoder is a WaveNet neural network that takes the spectrogram and reconstructs the input audio.[2]
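The v1 encode path can be illustrated in plain Python with NumPy. The sketch below is a toy version of the two encoder stages just described, computing a log mel spectrogram and vector-quantizing it against a codebook. All parameter values (FFT size, hop length, number of mel bands) and the random codebook are illustrative assumptions rather than Lyra's actual configuration, and the WaveNet decoder that resynthesizes audio from the quantized features is omitted:

```python
import numpy as np

def log_mel_spectrogram(audio, sample_rate=16000, n_fft=512,
                        hop=320, n_mels=64):
    """Toy log mel spectrogram; parameters are illustrative, not Lyra's."""
    # Short-time Fourier transform with a Hann window.
    window = np.hanning(n_fft)
    starts = range(0, len(audio) - n_fft + 1, hop)
    frames = np.stack([audio[i:i + n_fft] * window for i in starts])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

    # Triangular mel filterbank (HTK-style mel scale).
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    mel_pts = mel_to_hz(np.linspace(0.0, hz_to_mel(sample_rate / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sample_rate).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fbank[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)

    return np.log(power @ fbank.T + 1e-6)  # one feature vector per frame

def quantize(features, codebook):
    """Replace each feature vector with the index of its nearest
    codebook entry; only these indices need to be transmitted."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

# Example: one second of noise stands in for speech; the codebook is random.
rng = np.random.default_rng(0)
audio = rng.normal(size=16000)
features = log_mel_spectrogram(audio)
indices = quantize(features, rng.normal(size=(256, features.shape[1])))
print(indices[:10])  # the quantized stream a generative decoder would consume
```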
A second version (v2/1.2.0), released in September 2022, improved sound quality, latency, and performance, and added support for multiple bitrates. V2 uses a "SoundStream" structure in which both the encoder and the decoder are neural networks, a kind of autoencoder. A residual vector quantizer turns the feature values into a form that can be transmitted.[6]
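Residual vector quantization can be sketched in a few lines: each stage quantizes the error left over by the previous stages, and the decoder reconstructs by summing the chosen entries. The codebooks below are random stand-ins for SoundStream's learned ones, with illustrative sizes:

```python
import numpy as np

def rvq_encode(vec, codebooks):
    """Residual vector quantization: each stage quantizes what the
    previous stages left over, so later stages refine earlier ones."""
    residual = vec.copy()
    indices = []
    for cb in codebooks:  # one codebook per stage
        idx = np.argmin(np.linalg.norm(cb - residual, axis=1))
        indices.append(idx)
        residual = residual - cb[idx]  # pass the remainder to the next stage
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruction is simply the sum of the selected entries."""
    return sum(cb[idx] for idx, cb in zip(indices, codebooks))

# 4 stages of 16 entries each over 8-dimensional features (toy sizes).
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(16, 8)) for _ in range(4)]
target = rng.normal(size=8)
idxs = rvq_encode(target, codebooks)
print("error:", np.linalg.norm(target - rvq_decode(idxs, codebooks)))
```

Because each stage only refines the previous ones, a SoundStream-style codec can drop the later stages to spend fewer bits per frame, which is one way a single model can offer several bitrates.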
Google's implementation is available on GitHub under the Apache License.[7] Written in C++, it is optimized for 64-bit ARM but also runs on x86, on either Android or Linux.
Google Duo uses Lyra to transmit speech audio in video calls when bandwidth is limited.