
Neural Talking-Head Video Synthesis Meets Video Conferencing [Video]

A team from NVIDIA is working on a neural talking-head video synthesis model, which they demonstrate in a video conferencing use case. The model "learns" a person's appearance from a single source image and can then reconstruct the talking head at the receiving end from a compact motion representation instead of the full video signal. One benefit they report is visual quality comparable to the H.264 codec while using only about one-tenth of the bandwidth. The approach also supports free head rotation, which could improve the degree of immersion in a call. This idea could be leveraged in various ways, and it should prove useful in the future. Involved in the project were Ting-Chun Wang, Arun Mallya, and Ming-Yu Liu.
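The bandwidth saving comes from sending a compact per-frame representation (for example, facial keypoints) and reconstructing the pixels on the receiver's side. A rough back-of-envelope sketch of why that is so much cheaper than streaming encoded video, with all figures assumed purely for illustration (they are not numbers from the NVIDIA paper):

```python
# Back-of-envelope comparison: per-second payload of a keypoint-based
# representation vs. a conventional video codec. Every number below is
# an illustrative assumption, not a figure from the NVIDIA paper.

FPS = 30

# Assumed keypoint stream: 20 2D keypoints per frame, float32 coordinates.
keypoints = 20
bytes_per_keypoint = 2 * 4  # x and y, 4 bytes each
keypoint_bps = keypoints * bytes_per_keypoint * 8 * FPS  # bits per second

# Assumed H.264 bitrate for a 720p video call: roughly 1.5 Mbit/s.
h264_bps = 1_500_000

print(f"keypoint stream: {keypoint_bps / 1000:.1f} kbit/s")
print(f"H.264 stream: {h264_bps / 1000:.1f} kbit/s")
print(f"ratio: about {h264_bps / keypoint_bps:.0f}x less bandwidth")
```

Even with generous assumptions, the compact representation is orders of magnitude smaller than the pixel stream, which is where the reported bandwidth advantage comes from.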

YouTube: One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing

Photo credit: The feature image is a still frame from the presented video and is owned by Ting-Chun Wang / NVIDIA.


Christopher Isak (http://www.christopherisak.com)
Hi there, and thanks for reading my article! I'm Chris, the founder of TechAcute. I write about technology news and share experiences from my life in the enterprise world. Drop by on Twitter and say 'hi' sometime. ;)