Introducing Filmscope
2019-12-03
What is it?
In the past, we pioneered a new recovery technique called filmTrace. It allowed us to recreate old films in a more watchable form, though one less faithful to the creator's original vision.
filmTrace worked by slicing each frame into coloured sections, converting those sections to vectors, resizing them, and recompiling the result into a video file. (There's more detail to it than that, but that's the gist.)
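The first step above, slicing a frame into flat coloured sections, can be sketched with a simple colour quantization pass. This is only an illustrative stand-in, not filmTrace's actual code; a real tracer would then follow each section's boundary and store it as a resolution-independent vector path.

```python
import numpy as np

def quantize_colours(frame: np.ndarray, levels: int = 4) -> np.ndarray:
    """Posterise a frame into a small set of flat colour sections.

    Illustrative stand-in for filmTrace's first step: snap every
    8-bit channel value to the centre of one of `levels` buckets,
    so neighbouring pixels collapse into traceable regions.
    """
    step = 256 // levels
    return (frame // step) * step + step // 2

# A tiny 2x2 "frame" of 8-bit RGB pixels.
frame = np.array([[[250, 10, 10], [10, 250, 10]],
                  [[10, 10, 250], [128, 128, 128]]], dtype=np.uint8)
sections = quantize_colours(frame, levels=4)
```

With only four levels per channel, nearly-equal colours merge into the same section, which is exactly where the detail loss described below comes from.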
However, filmTrace had some major drawbacks:
- It creates a lot of artefacts. Videos created with it seem to have a spider-web-like effect tracing them.
- It is incredibly slow. Even with extensive multiprocessing and multithreading, short 20 minute films still took three to six weeks to render.
- Vectors mean detail loss. A lot of detail loss. That made the technique unsuitable for some scenes in some films - actors could disappear into the background if the contrast wasn't high enough.
This meant that though filmTrace was novel and looked fairly good, it wasn't "great".
Filmscope is the successor to filmTrace.
Rather than utilising vectors, it is much, much less "cutting edge". (filmTrace isn't a fundamentally flawed approach, but it needs a decade or so of research before it can become what we want it to be.)
Filmscope works on an incredibly simple image pipeline:
Colorize -> Filter -> Render
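The three stages can be sketched as composable functions. This is only an illustrative skeleton, assuming hypothetical stage names; the bodies here are trivial placeholders, not Filmscope's actual implementations:

```python
from functools import reduce
from typing import Callable, List

Frame = List[List[int]]  # one greyscale frame as rows of 0-255 pixels

def colorize(frame: Frame) -> Frame:
    # Placeholder: the real colouriser is a trained model; here we
    # just brighten pixels so the stage visibly does something.
    return [[min(p + 10, 255) for p in row] for row in frame]

def filter_stage(frame: Frame) -> Frame:
    # Placeholder for the smudge/outline filter described below.
    return frame

def render(frame: Frame) -> Frame:
    # Placeholder: a real renderer would encode frames to video.
    return frame

def pipeline(frame: Frame,
             stages: List[Callable[[Frame], Frame]]) -> Frame:
    # Each stage feeds the next: Colorize -> Filter -> Render.
    return reduce(lambda f, stage: stage(f), stages, frame)

out = pipeline([[0, 100], [200, 255]], [colorize, filter_stage, render])
```

The point of the sketch is the shape of the pipeline: each stage takes a frame and returns a frame, so stages can be developed and swapped independently.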
The colourization technique is a machine-learning one that you can already see as the main component of our SIXTEENmm Exclusives. We're hoping Filmscope will let us expand that category with films that are usually too badly damaged to be watched.
What does it look like?
The filtering stage is probably the key component to making the whole thing work.
Nothing speaks louder than pictures.
The filter applies a series of effects to create our final slightly smudged, slightly outlined look. We intentionally lose detail in an attempt to trick the viewer's mind into filling in the details for itself.
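One way to get a "slightly smudged, slightly outlined" look is to blur away fine detail and then darken strong edges. The actual Filmscope filter chain is unpublished; the sketch below is only a minimal demonstration of the idea, using a box blur and a gradient-based outline:

```python
import numpy as np

def smudge_and_outline(frame: np.ndarray,
                       edge_strength: float = 0.5) -> np.ndarray:
    """Illustrative take on the filter stage: blur away fine detail
    (the smudge), then darken strong edges (the outline).
    """
    f = frame.astype(float)
    # 3x3 box blur, computed by averaging nine shifted copies.
    padded = np.pad(f, 1, mode="edge")
    blurred = sum(padded[i:i + f.shape[0], j:j + f.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    # Gradient magnitude of the blurred image marks the edges.
    gy, gx = np.gradient(blurred)
    edges = np.hypot(gx, gy)
    out = blurred - edge_strength * edges
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.zeros((6, 6), dtype=np.uint8)
frame[:, 3:] = 200                 # a hard vertical edge
result = smudge_and_outline(frame)
```

Flat regions pass through almost untouched, while the hard edge is softened and picks up a darker outline along its length.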
In a sense, this can't be thought of as a "recovery" technique - we aren't recovering anything. In fact, we're intentionally damaging the film.
Yet, the result is an overall impression of improved quality.
When can I see more?
Filmscope is being applied to a couple of short films at the moment, but the process is slow. It builds on our colourization process, which tends to take 3-10 days per feature-length film.
A rough estimate, as things stand, suggests that Filmscope takes roughly twice as long as the colourization process to complete.
It is still significantly faster than filmTrace was, however.
filmTrace was open-sourced. You can get your own copy, and run it yourself.
Filmscope isn't open source, yet.
The key word there is yet.
The reason it hasn't yet been open-sourced is the reliance on our colourization technique.
The reason is technical, not political.
The colourization technique currently uses:
A patched Python
Two patched libraries (where upstream hasn't accepted vital bug fixes)
Some manual post-processing
As Filmscope can't simply be dropped in, we're working to clean up the code before we release it.