HP Labs researchers are using a novel approach anchored in artificial intelligence, signal processing, and psychoacoustics to create spatial audio for virtual reality (VR) and other media, delivering a rich, immersive experience.
“At its simplest, spatial audio recreates the perception of localization using psychoacoustically motivated signal processing techniques,” explains Sunil Bharitkar, HP Distinguished Technologist and audio research lead in HP’s Artificial Intelligence and Emerging Compute Lab. “But today, due to its perceptual transparency, technology designed at HP allows us to go much further and produce the sense that we are actually present within a virtual 3D space.”
That’s especially true when listeners use VR headphones, which allow for fine control of a listener’s auditory environment. But it’s also possible to make the audio emanating from PCs, laptops, and even smart speakers appear to be coming from very specific distances and directions.
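The article does not describe HP's specific algorithms, but the basic psychoacoustic cues behind the directional effect it mentions are well established: the brain localizes a sound on the horizontal plane largely from the interaural time difference (ITD, the sound arriving slightly earlier at the near ear) and the interaural level difference (ILD, the near ear hearing it slightly louder). The sketch below is an illustrative, simplified spatializer using these two cues; the function name, Woodworth's spherical-head ITD approximation, and the sine-law panning gains are textbook choices for illustration, not HP's implementation.

```python
import numpy as np

def spatialize(mono, azimuth_deg, fs=48000, head_radius=0.0875, c=343.0):
    """Pan a mono signal to a given azimuth using interaural time and
    level differences (ITD/ILD), the two classic psychoacoustic cues
    for horizontal-plane localization.

    azimuth_deg: 0 = straight ahead, +90 = hard right.
    """
    az = np.radians(azimuth_deg)
    # Woodworth's spherical-head approximation of the ITD (seconds).
    itd = (head_radius / c) * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * fs))               # ITD in whole samples
    # Simple sine-law ILD: the near ear is louder than the far ear,
    # with the two gains chosen to preserve total energy.
    near_gain = np.sqrt(0.5 * (1 + np.sin(abs(az))))
    far_gain = np.sqrt(0.5 * (1 - np.sin(abs(az))))
    near = near_gain * mono
    far = far_gain * np.concatenate([np.zeros(delay), mono])[: len(mono)]
    # Positive azimuth = source on the right: right ear is the near ear.
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)     # (samples, 2) stereo

# A 440 Hz tone placed at +60 degrees arrives at the right ear
# earlier and louder than at the left ear.
tone = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
out = spatialize(tone, 60)
```

Production systems go well beyond these two cues, typically convolving the signal with measured head-related transfer functions (HRTFs) that also capture spectral filtering by the outer ear, which is what enables elevation and front/back discrimination.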
Spatial audio is the result of more than just having better technology to play it on. Our understanding of how humans listen and speak has also vastly improved, says Bharitkar. “And because we have a better understanding of psychoacoustics, we can combine that with deep-learning and signal processing techniques in a novel way that allows us to design generalizable models that scale to arbitrary listeners and consumers and provide a fundamentally new experience,” he says.
This approach is being pioneered at HP Labs, where Bharitkar leads a team charged with exploring how HP can research, develop, and deploy spatial audio across the range of products the company offers.