We’ve gotten the suck-y, boring dystopian future. Let’s get the cool futuristic sci-fi stuff, too, and use it in streams, eh? Face tracking in Notch, noise removal in VST3, and hardware-accelerated video transmission without capture cards. Uh… yes, please, all of that.

NVIDIA has been the belle of the ball this year, with insanely powerful GPUs and accompanying features. But they quietly made three announcements today you might easily have missed.

And all of this is a big deal for people working in streaming, digital art, and audiovisual performance.

Broadcast App which is very much not Zoom

It all starts with the NVIDIA Broadcast App. Like so many things in 2020 and in live video, the marketing is aimed at gamers, blah blah (appropriately, there's some dude in what I guess is a gaming chair?) – but this isn't actually just for gamers once you dive in.

https://www.nvidia.com/en-us/geforce/broadcasting/broadcast-app/

First, the bad news – that cool-looking app with all its dynamic face tracking and background removal and so on does require a higher-end NVIDIA GPU. Think NVIDIA GeForce RTX 2060, Quadro RTX 3000, TITAN RTX, or higher. (Note that what it doesn’t require is a beefy CPU, of course – and that stupidly cheap yet very speedy Ryzen from NVIDIA’s own rival AMD winds up being a great pairing. Who would have thought combining AMD CPUs and NVIDIA GPUs would be the thing in 2020?)

But the good news is, this is a broadcast-quality tool that’s now available for peanuts. So forget all the absolutely terrible background replacement in Zoom calls you’ve been enduring. NVIDIA are using the muscle of their silicon to do the unthinkable: use machine learning to basically replace expensive green screen-and-camera rigs.

Dorky video, but check the background replacement:

So yes, the hype around AI is all about replacing humans (which it does really badly, though that apparently doesn’t stop people from abusing it this way), or expanding surveillance (which is, you know, evil). But a more effective application is doing what these machine learning algorithms were first built to do: processing tons of pixels in a way that lets them see a little more the way we do.

And wait, suddenly you buy a new GPU, and you get a green screen setup from a simple camera input and … no green screen.
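If you want a mental model for what’s happening under the hood, here’s a minimal C++ sketch – emphatically not NVIDIA’s actual API, and the segmentation function is just a placeholder for the trained model running on the GPU. The point: the ML model hands you an alpha matte for every frame, and from there the compositing is the same math a chroma key has always done, minus the physical green backdrop.

```cpp
#include <cstdint>
#include <vector>

// One RGBA video frame, 4 bytes per pixel.
struct FrameRGBA {
    int width = 0, height = 0;
    std::vector<uint8_t> pixels;
};

// Placeholder for the ML segmentation model (the part NVIDIA runs on the GPU):
// returns one alpha value per pixel, 255 = person, 0 = background.
// Here it just marks everything as "person" so the sketch compiles and runs.
std::vector<uint8_t> run_segmentation_model(const FrameRGBA& camera) {
    return std::vector<uint8_t>(camera.width * camera.height, 255);
}

// Composite the camera frame over a replacement background using that matte –
// exactly what a chroma key does, but without a green screen to derive it from.
FrameRGBA composite(const FrameRGBA& camera, const FrameRGBA& background) {
    FrameRGBA out = background;
    const std::vector<uint8_t> matte = run_segmentation_model(camera);
    for (int i = 0; i < camera.width * camera.height; ++i) {
        const float a = matte[i] / 255.0f;
        for (int c = 0; c < 3; ++c) {  // blend R, G, B; leave alpha as-is
            const int idx = i * 4 + c;
            out.pixels[idx] = static_cast<uint8_t>(
                a * camera.pixels[idx] + (1.0f - a) * background.pixels[idx] + 0.5f);
        }
    }
    return out;
}
```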

It’s in your face

But there’s more. Since NVIDIA added an SDK, the folks at Notch are using it to do way cooler face tracking, letting you use the muscles in your face basically for puppetry – and then feed your face into your powerful motion graphics environment.

And I mean, holy s***.

Full details on Notch’s Medium site:

NVIDIA Broadcast Engine coming to Notch

Why does it look so great? Well:

  • More features, detected via machine learning – “contour, face shape, lips, eyes, and eyelids, using up to 126 key points” (see the sketch after this list for how points like these become puppet controls).
  • A real-time 3D morphable model with 6,000 polygons and six degrees of freedom for head movement (good, as I recently had to go to a physical therapist when COVID-19 initially murdered my spine)
  • Lower latency than CPU-based tracking, which added 2-3 frames; this is apparently sub-frame, so … wow!
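To give a sense of how “face muscles as puppetry” plays out once you have those key points, here’s a small C++ sketch of turning tracked landmarks into rig parameters. This is not Notch’s or NVIDIA’s code – the landmark indices and scale factors are invented for illustration, and the real SDK also hands you the 3D morphable model mentioned above – but conceptually, driving a puppet is mostly measuring distances between points and normalizing them.

```cpp
#include <cmath>
#include <vector>

struct Point2f { float x, y; };

// Normalized controls you might map onto a character rig or Notch parameters.
struct PuppetParams {
    float mouth_open;      // 0 = closed, 1 = wide open
    float left_eye_open;   // 0 = shut, 1 = open
    float right_eye_open;
};

static float dist(const Point2f& a, const Point2f& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

static float clamp01(float v) { return v < 0.f ? 0.f : (v > 1.f ? 1.f : v); }

// landmarks: one frame of tracked 2D key points in pixel coordinates.
// The indices below (face outline, lips, eyelids) are hypothetical – every
// tracker numbers its points differently – and the scale factors are
// eyeballed; you would tune both against the tracker you actually use.
PuppetParams derive_params(const std::vector<Point2f>& landmarks) {
    const float face_size = dist(landmarks[0], landmarks[16]);  // normalize by face width
    PuppetParams p;
    p.mouth_open     = clamp01(dist(landmarks[62], landmarks[66]) / face_size * 8.0f);
    p.left_eye_open  = clamp01(dist(landmarks[37], landmarks[41]) / face_size * 30.0f);
    p.right_eye_open = clamp01(dist(landmarks[43], landmarks[47]) / face_size * 30.0f);
    return p;
}
```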

It’s coming in two weeks, so… check shipping times on those NVIDIA GPUs, probably.

It also does noise removal

AI in the NVIDIA Broadcast Engine is also doing work on the audio side – here, with noise removal. So while everyone is obsessed with whether AI can write music (it can’t – also, why would you want that? – but mainly, it really can’t), AI is actually doing something that conventional DSP techniques have always handled poorly.

This is doubly interesting, as you get GPU-bound calculations in audio. That’s something that’s been talked about for a long time, and it has even surfaced in some experimental applications. But AI sound processing is the first time we’ve really seen a compelling reason to leave the CPU for the GPU.
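To make that CPU-versus-GPU point a little more concrete: a GPU model wants audio in fixed-size blocks and charges you a round trip for every transfer, so a plug-in wrapping one typically accumulates samples into a block, processes it, and accepts a block of added latency. Here’s a generic C++ sketch of that shape – not VoiceFX’s or NVIDIA’s actual code; GpuDenoiser is a hypothetical stand-in for whatever runs the model.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a GPU-backed denoiser. A real implementation would
// upload the block, run the neural network, and download the result; this one
// just copies audio through so the sketch runs.
class GpuDenoiser {
public:
    explicit GpuDenoiser(std::size_t block_size) : block_(block_size) {}
    std::size_t block_size() const { return block_; }
    void process_block(const float* in, float* out) { std::copy(in, in + block_, out); }
private:
    std::size_t block_;
};

// The shape of a host-facing audio effect (VST3, OBS filter, etc.) around it:
// the host hands us arbitrary buffer sizes, so we batch into GPU-sized blocks
// and pay one block of latency in return.
class DenoiseEffect {
public:
    void process(const float* in, float* out, std::size_t num_samples) {
        for (std::size_t i = 0; i < num_samples; ++i) {
            fifo_in_.push_back(in[i]);
            if (fifo_in_.size() == denoiser_.block_size()) {
                denoiser_.process_block(fifo_in_.data(), fifo_out_.data());
                fifo_in_.clear();
                read_pos_ = 0;
            }
            // Output trails input by one block – the latency cost of batching
            // work for the GPU.
            out[i] = read_pos_ < fifo_out_.size() ? fifo_out_[read_pos_++] : 0.0f;
        }
    }
private:
    GpuDenoiser denoiser_{480};  // e.g. 10 ms blocks at 48 kHz (made-up size)
    std::vector<float> fifo_in_;
    std::vector<float> fifo_out_ = std::vector<float>(denoiser_.block_size(), 0.0f);
    std::size_t read_pos_ = 0;
};
```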

NVIDIA Broadcast Noise Removal is in two places. XSplit have put it in their live streaming client (download):

XSplit Broadcaster integrates new NVIDIA Broadcast Engine Audio Effects SDK

A sign of things to come? NVIDIA inside an audio plug-in, in XSplit.

But it’s also in a VST3 filter with VoiceFX, which you can use anywhere – so in your DAW (if it has VST3 support), in Premiere Pro, and in OBS Studio.

StreamFX is just generally awesome and open source, and lets you make all kinds of graphics candy that runs inside OBS Studio:

https://github.com/Xaymar/obs-StreamFX

Check everything it can do. I didn’t have time to dig up where exactly they’ve put that promising-sounding VST3 filter, but I hope to be back soon with an installation guide, once they – and I – are ready.

NDI runs on a GPU, and all rejoice

NDI is a means of running real-time video between applications and across local networks. And now screen capture and encoding are GPU-accelerated.
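If you’ve never touched NDI, here’s roughly what the sending side looks like with the plain NDI SDK today – a minimal C++ sketch, with names as I remember them from the SDK headers, so check them against the version you actually install. The GPU acceleration NVIDIA is adding lives in the capture and encoding that produce and compress frames like this one; you won’t see it in the API calls themselves.

```cpp
#include <Processing.NDI.Lib.h>  // from the NDI SDK
#include <cstdint>
#include <vector>

int main() {
    if (!NDIlib_initialize()) return 1;

    // Create a named source that other apps on the local network can discover.
    NDIlib_send_create_t desc = {};
    desc.p_ndi_name = "My Stream (sketch)";
    NDIlib_send_instance_t sender = NDIlib_send_create(&desc);
    if (!sender) return 1;

    // One blank 1080p BGRA frame; in a real app this buffer would come from
    // your screen capture or render pipeline – the part being accelerated.
    std::vector<uint8_t> pixels(1920 * 1080 * 4, 0);
    NDIlib_video_frame_v2_t frame = {};
    frame.xres = 1920;
    frame.yres = 1080;
    frame.FourCC = NDIlib_FourCC_video_type_BGRA;
    frame.frame_format_type = NDIlib_frame_format_type_progressive;
    frame.frame_rate_N = 60000;  // 60 fps as a rational number
    frame.frame_rate_D = 1000;
    frame.p_data = pixels.data();
    frame.line_stride_in_bytes = 1920 * 4;

    NDIlib_send_send_video_v2(sender, &frame);  // ship it across the network

    NDIlib_send_destroy(sender);
    NDIlib_destroy();
    return 0;
}
```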

This matters for basically everything streaming-related – as it means OBS Studio finally captures your screen with GPU acceleration.

I’m personally excited by this, as in the past, one way OBS Studio worked with NVIDIA GPUs was not at all – offering an exciting hardware-accelerated blank screen. (Yes, yes, I know there are workarounds, but this was generally a headache.) So here’s another one for me to hole up and explore.

It’s all in development, so watch here:

https://www.ndi.tv/nvidia

Winter is coming

Erm, yes, summer for you lovely southern hemisphere folks.

Anyway, this is all in beta, so don’t expect it all to work perfectly right away. But NVIDIA have more details, and if you’re comfortable testing beta software, you can update to “the latest beta versions” (gulp) and – for XSplit / Notch / VoiceFX – download NVIDIA’s SDK redistributable. If you’ve been messing around with AI, you already know the NVIDIA SDK Resources page. For everyone else – uh, welcome to the club.

Full details:

https://www.nvidia.com/en-us/geforce/news/nvidia-broadcast-engine-integrations/

By the way, you can help NVIDIA correct AI bias by submitting greenscreen clips:

https://broadcast.nvidia.com/en-us/feedback?sdk=greenscreen