Bât. Breguet C4.26
3 rue Joliot Curie
91190 Gif-sur-Yvette, France
ULTRA-LOW BITRATE VIDEO CONFERENCING USING DEEP IMAGE ANIMATION (ICASSP 2021)
In this work, we propose a novel deep learning approach to ultra-low bitrate video compression for video conferencing applications. To address the shortcomings of current video compression paradigms when the available bandwidth is extremely limited, we adopt a model-based approach that employs deep neural networks to encode motion information as keypoint displacements and to reconstruct the video signal at the decoder side. The overall system is trained end-to-end by minimizing a reconstruction error on the decoder output. Objective and subjective quality evaluations demonstrate that, at the same visual quality, the proposed approach provides an average bitrate reduction of more than 80% compared to HEVC.
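The keypoint-displacement idea can be illustrated with a toy sketch. All names here are hypothetical: the real encoder and decoder are deep networks that animate a reference frame, whereas this sketch replaces them with a rigid shift so the bitrate saving (sparse keypoints vs. dense pixels) is visible.

```python
import numpy as np

def encode_frame(ref_kpts, frame_kpts):
    """Transmit only sparse keypoint displacements instead of dense pixels.

    ref_kpts, frame_kpts: (K, 2) arrays of (row, col) keypoint positions.
    """
    return frame_kpts - ref_kpts  # (K, 2) -- this is the whole payload

def decode_frame(ref_frame, displacements):
    """Toy decoder: shift the reference frame by the mean keypoint
    displacement. A real system uses a deep generator to warp and
    inpaint the reference frame from the displacements."""
    dy, dx = np.round(displacements.mean(axis=0)).astype(int)
    return np.roll(np.roll(ref_frame, dy, axis=0), dx, axis=1)

# Reference frame with a bright block, and a new frame shifted right by 2.
ref_frame = np.zeros((8, 8))
ref_frame[2:4, 2:4] = 1.0
frame = np.roll(ref_frame, 2, axis=1)

ref_kpts = np.array([[2.0, 2.0], [3.0, 3.0]])
frame_kpts = ref_kpts + np.array([0.0, 2.0])

payload = encode_frame(ref_kpts, frame_kpts)   # 4 numbers vs. 64 pixels
recon = decode_frame(ref_frame, payload)
```

Even in this toy, the transmitted payload (`K * 2` scalars per frame) is far smaller than the pixel grid, which is the source of the ultra-low bitrates.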
A HYBRID DEEP ANIMATION CODEC FOR LOW-BITRATE VIDEO CONFERENCING (ICIP 2022)
Deep generative models, and in particular facial animation schemes, can be used in video conferencing applications to compress a video efficiently through a sparse set of keypoints, without the need to transmit dense motion vectors. While these schemes bring significant coding gains over conventional video codecs at low bitrates, their performance saturates quickly as the available bandwidth increases. In this paper, we propose a layered, hybrid coding scheme to overcome this limitation. Specifically, we extend a codec based on facial animation by adding an auxiliary stream consisting of a very low bitrate version of the video, obtained with a conventional video codec (e.g., HEVC). The animated and auxiliary videos are combined through a novel fusion module. Our results show consistent average BD-Rate gains in excess of -30% on a large dataset of video conferencing sequences, extending the operational bitrate range of a facial animation codec.