
Creating Something From Nothing - Blog

2020-06-11

One of the more frustrating aspects of restoring old films is not having access to the original film itself.

Many films exist only as a VHS copy that has been badly neglected, or worse. At times, these films lack the resolution to be watchable.

The modern world no longer accepts video sized for download onto the 480p screens of the past.

There is a lot of exciting research into approaches that could overcome this deficit.

Super Resolution is one machine learning approach, which relies on memory- and processing-intensive Generative Adversarial Networks. However, the power needed to transform even a single still of video makes it an expensive approach as well.

In the past, we tried a simpler approach - filmtrace. The results were mixed, at best. Whilst it did preserve some quality, the artefacts it introduced made it intolerable to most audiences.

A Filmtrace Still

However, it did lay the groundwork for what, we're happy to say, is going into the production pipeline today.

Compared to the incredibly naive way that filmtrace attempted to generate contours and infill them, what we're doing today is rocket science. However, the approach is one that has been used for various other purposes for quite some time, which means the underlying methods have been reasonably well optimised over the years.

The basic idea is the same - use quantization (a way of reducing the colours present) to generate a canvas that can be mathematically resized without a loss of quality.
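To give a rough feel for that idea, here is a minimal sketch in Python using Pillow and NumPy, not our actual production code: quantise a still down to a small palette, upscale each flat-colour region as its own layer, and re-threshold so the edges stay crisp. The palette size, scale factor, threshold, and filenames are illustrative assumptions only.

```python
# A minimal sketch of the quantise-then-resize idea (not the production pipeline).
# Palette size, scale factor and threshold below are illustrative assumptions.
from PIL import Image
import numpy as np

def upscale_via_quantisation(frame: Image.Image, colours: int = 16, scale: int = 4) -> Image.Image:
    # Reduce the frame to a small palette of flat colours.
    quantised = frame.convert("RGB").quantize(colors=colours)
    palette = quantised.getpalette()[: colours * 3]
    indices = np.array(quantised)  # per-pixel palette index

    out_size = (frame.width * scale, frame.height * scale)
    canvas = np.zeros((out_size[1], out_size[0], 3), dtype=np.uint8)

    # Treat each palette colour as its own layer: upscale its mask smoothly,
    # then re-threshold so the region keeps a crisp, contour-like edge.
    # (Overlaps are painted in palette order and uncovered pixels stay black
    # in this naive sketch.)
    for i in range(colours):
        mask = Image.fromarray(((indices == i) * 255).astype(np.uint8))
        big = mask.resize(out_size, Image.BICUBIC)
        region = np.array(big) >= 128
        canvas[region] = palette[i * 3 : i * 3 + 3]

    return Image.fromarray(canvas)

# Example usage (hypothetical filenames):
# upscale_via_quantisation(Image.open("still.png")).save("still_4x.png")
```

Re-thresholding the upscaled masks is what keeps the boundaries between colour regions sharp instead of blurry, at the cost of flattening each region into a single colour.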

However, the results are startlingly different.

A new upscaled colour still

On the left you can see the original image, at its original size. It is of terrible quality, with washed-out colours and encoding artefacts. That is deliberate, to see how well the approach handles the worst material.

The effect does lose something. It treats a coloured image almost like rotoscoping, which for our purpose - simply watching it - is fine.

However, when you hand the same process a black and white still, it is far more difficult to tell that anything at all has happened.

A new upscaled black and white still

There are some differences. There's a loss of overall sharpness, the brightness is slightly reduced, and there are some artefacts that are very hard to spot.

However, the overall impression is that the size has changed, but the subject matter hasn't - despite our algorithm having to "fill in the holes" in the data.

In effect, the algorithm has created new data from the data it examines during the process.

Creating something, from nothing.

