A.I.! Good gawd y’all – what is it good for? Absolutely … upscaling, actually. Some of machine learning’s powers may prove to be simple but transformative.

And in fact, the “enhance” feature we always imagined from sci-fi is becoming real. Just watch as a pioneering Lumière Brothers film is transformed until it looks like something shot with money from the Polish government and screened at a big arty film festival, not in 1896. It’s spooky.

https://youtu.be/3RYNThid23g

It’s the work of Denis Shiryaev. (If you speak Russian, you can also follow his Telegram channel.) Here’s the original source, which isn’t necessarily even a perfect archive:

It’s easy to see the possibilities here – this is a dream both for archivists and people wanting to economically and creatively push the boundaries of high-framerate and slow-motion footage. What’s remarkable is that there’s a workflow here you might use on your own computer.

And while there are legitimate fears of AI in black boxes controlled by states and large corporations, here the results are either open source or available commercially. There are two tools here.

Enlarging photos and videos is the work of a commercial tool, Topaz Gigapixel AI, which promises up to 600% upscaling “while preserving image quality.”

https://topazlabs.com/gigapixel-ai/

It’s US$99.99, which seems well worth it for the quality payoff. (Commercial licenses cost more, and there’s a free trial available.) Uniquely, the tool is also optimized for Intel Core processors with Iris Plus graphics, so you don’t need to fire up a dedicated GPU like an NVIDIA card. They don’t say much about how it works, other than that it’s a deep learning neural network.

We can guess, though. The trick is that machine learning trains on existing high-resolution images, so the model can mathematically predict the missing detail in lower-resolution ones. There’s been copious documentation of AI-powered upscaling, and why it works mathematically better than traditional interpolation algorithms. (This video is an example.) Many of those approaches used GANs (generative adversarial networks), though, and I think it’s a safe bet that Gigapixel is closer to this (also slightly implied by the language Gigapixel uses):

Deep learning based super resolution, without using a GAN [Towards data science]

Some more expert data scientists may be able to fill in details, but at least that article would get you started if you’re curious to roll your own custom solution. (Unless you’re handy with Intel optimization, it’s worth the hundred bucks – but for those of you who are advanced coders and data scientists, knock yourselves out.)
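To make the contrast concrete, here’s the baseline these networks are competing against – plain bilinear interpolation, which can only average neighboring pixels, never invent plausible detail. (A hypothetical minimal sketch in numpy, not code from any of the tools above.)

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Classic bilinear interpolation: each output pixel is a weighted
    average of its four nearest input pixels. No learning involved."""
    h, w = img.shape
    out_h, out_w = h * factor, w * factor
    # Map each output coordinate back to a (fractional) input coordinate.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend horizontally along the top and bottom rows, then vertically.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# A 2x2 checkerboard becomes a smooth 4x4 gradient - smeared, not sharpened.
img = np.array([[0.0, 1.0], [1.0, 0.0]])
up = bilinear_upscale(img, 2)
```

A learned super-resolution model replaces that fixed averaging with a function fit to thousands of high-res/low-res image pairs, which is why it can hallucinate edges and texture that averaging smears away.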

The quality of motion may be just as important, and that side of this example is free. To increase the framerate, they employ a technique developed by an academic-private partnership (Google, University of California Merced, and Shanghai’s Jiao Tong University):

Depth-Aware Video Frame Interpolation

Short version – you combine some good old-fashioned optical flow estimation with convolutional neural networks, then use a depth map so that big objects moving through the frame don’t totally screw up the processing.
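To give a flavor of the depth-aware part, here’s a toy numpy sketch of the key trick: when pixels from the two source frames compete for the same spot in the in-between frame, weight them by inverse depth so the closer object wins. (This is my illustrative simplification – the real DAIN pipeline also warps each frame along estimated optical flow before blending, which is omitted here.)

```python
import numpy as np

def depth_aware_blend(frame0, frame1, depth0, depth1):
    """Blend two (already flow-warped) frames into an in-between frame.
    Contributions are weighted by inverse depth, so pixels from nearer
    objects (smaller depth values) dominate in contested regions."""
    w0 = 1.0 / depth0
    w1 = 1.0 / depth1
    return (w0 * frame0 + w1 * frame1) / (w0 + w1)

# Equal depths: a plain average, as with naive frame blending.
f0 = np.full((2, 2), 1.0)
f1 = np.full((2, 2), 3.0)
mid = depth_aware_blend(f0, f1, np.ones((2, 2)), np.ones((2, 2)))

# frame0's object is much closer: the blend sticks to frame0
# instead of ghosting the two frames together.
near = depth_aware_blend(f0, f1, np.full((2, 2), 0.1), np.full((2, 2), 10.0))
```

That inverse-depth weighting is what keeps a foreground go-kart from being smeared into the background as it crosses the frame.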

Result – freakin’ awesome slow mo go karts, that’s what! Go, math!

This also illustrates that automation isn’t necessarily the enemy. Remember watching huge lists of low-wage animators scroll past at the end of movies? The rote part of that work – in-betweening – might well be something you’d want to automate in favor of more-skilled design. Watch this:

A lot of the public misperception of AI is that it will make the animated movie, because technology is “always getting better” (which rather confuses Moore’s Law and the human brain – not related). It may be more accurate to say that these processes will excel at pushing the boundaries of some of our tech (like CCD sensors, which eventually run into the laws of physics). And they may well automate processes that were rote work to begin with, like in-betweening frames of animation, which is a tedious task that was already getting pushed to cheap labor markets.

I don’t want to wade too far into that – animation isn’t my field, let alone labor practices. But suffice to say, even a quick Google search will turn up stories like this article on Filipino animators facing low wages and poor conditions. The bad news is, just as those workers collectivize, AI could automate their jobs away entirely. But it might also mean a Filipino animation company could use this software to compete on a level playing field with the companies that once hired them – only now doing the actual creative work.

Anyway, that’s only animation; you can’t outsource your crappy video and photos, so it’s a moot point there.

Another common misconception – perhaps one even shared by some sloppy programmers – is that processes improve the more computational resources you throw at them. That’s not necessarily true, and sometimes demonstrably not. In any event, the fact that these techniques work now, and in ways that are pleasing to the eye, means you don’t have to mess with ill-informed hypothetical futures.

I spotted this on the VJ Union Facebook group, where Sean Caruso suggests this workflow: since Topaz works only on sequences of still images, you can import those into After Effects and then use Twixtor Pro to double the framerate, too. Of course, coders and people handy with tools like ffmpeg won’t need the Adobe subscription. (ffmpeg not so much your thing? There’s a CDM story for that, with a useful comment thread, too.)
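For the ffmpeg route, the round trip is just two invocations – the filenames, directories, and 25 fps framerate below are placeholders for your own footage:

```shell
# Step 1: explode the video into numbered stills, since the upscaler
# operates on image sequences rather than video files.
ffmpeg -i source.mp4 frames/frame_%05d.png

# (Run the contents of frames/ through your upscaler of choice here.)

# Step 2: reassemble the upscaled stills into a video at the target framerate.
ffmpeg -framerate 25 -i upscaled/frame_%05d.png -c:v libx264 -pix_fmt yuv420p out.mp4
```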

Having blabbered on like this, I’m sure someone can now say something more intelligent or point out something I’ve missed – which I would welcome; fire away!

Now if you’ll excuse me, I want to escape to that 1896 train platform again. Ahhhh…