SIXTEENmm

1743 films and counting...

Filmscope Progress - Blog

2020-03-18

We introduced Filmscope back in December.

The basic idea of Filmscope is to recreate a badly-damaged film, with acceptable quality loss, to make it more watchable. A creative process, rather than a restorative one.

The first version wasn't fantastic. We ran it against Cruel, Cruel Love.

Cruel, Cruel Love

Watchable, but really not great. The style was extremely distracting.

The heavy lines introduced by the reshaping of the image were extremely painful to see. Goodbye, Charlie Chaplin's famous moustache: now it has eaten his face.

Filmscope 2.0

However, we didn't completely throw out the process.

The original process looked something like:

Where the filter for rendering frames was this tiny bit of ImageMagick:

```
convert "$line" \
  \( -clone 0 -blur 2x1 \) \
  \( -clone 0 -fill black -colorize 100 \) \
  \( -clone 0 -define convolve:scale='!' \
     -define morphology:compose=Lighten \
     -morphology Convolve 'Sobel:>' \
     -negate -evaluate pow 5 -negate -level 30x100% \) \
  -delete 0 -compose over -composite \
  -colorspace RGB -resize 200% -colorspace sRGB \
  "$tmp"/out"$(basename "$line")"
```

Complicated, but it basically attempts to highlight key areas whilst using a blur to encourage the mind to fill in the gaps between frames.

As we've already said, it wasn't completely successful.

We were stuck, until we ran across this maths paper.

The paper describes a way of using k-means clustering with a method for automatically choosing a sensible value of k. This opened the door to quickly and effectively reducing the overall artefacts in an image, whilst maintaining the overall detail.
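We can sketch the core idea in Python: quantise a frame's colours with k-means, growing k until adding another cluster stops paying for itself (a simple "elbow" heuristic — the paper's actual selection rule is likely more sophisticated, and `kmeans`/`simplify` here are illustrative names, not the real `simplify.py`):

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Plain Lloyd's algorithm over an (N, 3) array of colour values."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then recompute centres.
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    error = float(((pixels - centres[labels]) ** 2).sum())
    return centres, labels, error

def simplify(pixels, max_k=8, tol=0.2):
    """Snap pixels to cluster centres, stopping at the k where another
    cluster no longer cuts the error by more than `tol` (20%)."""
    prev_err, best = None, None
    for k in range(1, max_k + 1):
        centres, labels, err = kmeans(pixels, k)
        if prev_err is not None and (prev_err == 0 or
                                     (prev_err - err) / prev_err < tol):
            break
        prev_err, best = err, (centres, labels)
    centres, labels = best
    return centres[labels]

# Toy "frame": 200 noisy pixels drawn from two grey levels.
rng = np.random.default_rng(1)
frame = np.concatenate([rng.normal(50, 2, (100, 3)),
                        rng.normal(200, 2, (100, 3))])
flat = simplify(frame)
```

After `simplify`, the 200 distinct noisy colours collapse to a handful of flat tones — the "cartoon'd" image the blending step later merges back with the upscaled original.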

Our new process became:

Which translates mostly to this bit of code:

```
process_frame() {
  line="$1"
  tmp="$2"

  # Upscale to at least 720px wide; -magnify doubles the frame each
  # pass, so the second pass catches frames that started below 360px
  width="$(identify -ping -format "%w" "$line")"
  if [ "$width" -lt 720 ]; then
    mogrify -verbose -magnify "$line"
  fi
  width="$(identify -ping -format "%w" "$line")"
  if [ "$width" -lt 720 ]; then
    mogrify -verbose -magnify "$line"
  fi

  # Simplify
  python simplify.py "$line" "$tmp"/tmp"$(basename "$line")"
  # Merge upscaled and cartoon'd to make sure detail is preserved
  composite -verbose -blend 70 "$line" "$tmp"/tmp"$(basename "$line")" "$tmp"/out"$(basename "$line")"
  # Remove image tint and make the colours richer
  mogrify -verbose -type Grayscale -normalize "$tmp"/out"$(basename "$line")"

  # Remove stuff we don't need
  rm "$line"
  rm "$tmp"/tmp"$(basename "$line")"
}

```

We've thrown out the colourisation step, mostly because of time constraints.

This process may not seem to be doing much, but at 10-20 seconds per frame it can take an incredible amount of time to re-render a whole film, even with the steps we've taken to process as many frames in parallel as the host system can handle.
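The post doesn't show its driver, but the fan-out could look something like the following sketch, where a stand-in `mv` replaces the real 10-20 second `process_frame` call (which needs ImageMagick and simplify.py), and the file names are illustrative:

```shell
# Make a handful of stand-in frames in a scratch directory.
tmp="$(mktemp -d)"
for i in 01 02 03 04; do : > "$tmp/frame$i.png"; done

# Fan the frames out across cores with xargs; -P sets the job count
# (e.g. -P "$(nproc)" on Linux to match the host's core count).
find "$tmp" -name 'frame*.png' |
  xargs -n 1 -P 4 -I{} sh -c 'mv "$1" "$1.out"' _ {}
```

Each frame is an independent job, so this parallelises trivially; the host's core count is the only real limit.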

We've rendered exactly one film this way, The Great Train Robbery.

The Great Train Robbery

The result is fairly good. The quality increase has a few unwanted side-effects, such as making the watcher aware that they're looking at something from before the invention of camera-steadying techniques, but for the most part it is impressive.

Time is the killer that makes the process harder to run across most of our damaged films.

This copy of The Great Train Robbery is just over 14 minutes long, yet it took around a week to render.

With some rough estimates from other experiments that didn't quite make it to the final rendering stage, we expect a full hour-and-a-half movie to take somewhere in the range of 2-3 months to render, which makes renting hardware to speed this up rather expensive.
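For what it's worth, linearly scaling the Great Train Robbery figures alone gives a slightly lower estimate, so the 2-3 month range presumably also reflects the heavier frames seen in those other experiments:

```python
# Back-of-the-envelope scaling from the post's own numbers:
# ~14 minutes of film took ~7 days to render.
days_per_film_minute = 7 / 14          # ~0.5 days per minute of film
feature_days = 90 * days_per_film_minute
print(f"~{feature_days:.0f} days (~{feature_days / 30:.1f} months)")
# → ~45 days (~1.5 months)
```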

Future of Filmscope

We're taking two approaches at the same time, right now.

