Brilliant but Difficult: Vinyl and Celluloid

There is a New Yorker cartoon showing two hipsters looking at a turntable captioned, “The two things that really drew me to vinyl were the expense and the inconvenience.” Indeed.

For all the vinyl fetishism these days – I blame you, Wes Anderson – there is no getting around the problems with records as an electromechanical audio playback medium. Beyond the classic analog issues of noise, distortion, and frequency response, there are wow and flutter, mistracking, groove damage, surface noise, crosstalk, and a host of other problems begging for a better way.

But invest enough time, money, and effort to reduce these problems and the results can be magical. That’s partly because the masters cut for vinyl are often far less compressed than their 16-bit CD counterparts, and the lack of sampling and quantizing has a purity that lets the emotion in the music come through. Listening to records passes the “just one more track” at 3:00 AM test. By comparison, digital – especially compressed digital music like Spotify’s – sounds good at first but grows fatiguing, making one want to stop listening.

A great example: listen to Talk Talk’s “Spirit of Eden” album on vinyl and on CD. Mark Hollis’ production technique required immense dynamic range because of the way he layered instruments together. On vinyl the guitars come crashing through with massive intensity on the crescendos; on CD the guitars just get a bit louder and distorted.

The same can be said for shooting film. As electro-optical mediums for capturing images go, film is not exactly pristine. Dirt, scratches, gate jitter, emulsion damage, a limited exposure range and, of course, not being able to see what has been captured until after development all feel archaic. Plus, the cost of shooting on film is astronomically higher than shooting digitally.

But like listening to records, there is a magic to the look of “analog” film. While it may have less dynamic range than digital, the way film gently rolls off highlights and shadows, folding them back into themselves, is beautiful. By comparison, digital simply clips when it runs out of bits in the whites or blacks. And how one treats the highlights and darkest parts of an image is the secret to making great pictures – ask any colorist.

There is a natural softness to film that is flattering when shooting people. And grain! Real grain. Not the stuff being schmeared on top of digital these days – real grain is beautiful. Film just naturally looks wonderful; digital is sharper and cleaner but needs to be made to look great. Which is why Oppenheimer, Poor Things, Killers of the Flower Moon, Maestro, Past Lives and The Wonderful Story of Henry Sugar were all shot on film.

Back to analog!

Digital Archiving


Archiving digital images and sound is an underexplored challenge. We all assume that our digital files will simply be available now and forever. Unfortunately, that’s far from true. Without a concerted preservation effort, today’s files will not be readable or recoverable in as little as 20 or 30 years.

Properly stored, 35mm motion picture film from 100 years ago can be easily projected or scanned, and the information retrieved with astonishing clarity. So-called safety film, which replaced explosive nitrate film in the 1920s, is “software and operating system agnostic” – just shine a light through it to recover the images. All that’s required to archive it is some shelf space in a temperature- and humidity-controlled storage room.

This is very much not the case with digital files. Chances are that 20 years from now a digital file will have degraded, become unreadable, or both. Even if the physical storage media can last a couple of decades – a questionable assumption for hard drives and SSDs – the ability to mount the file system and read the data will almost certainly be lost. Linear Tape-Open (LTO) cartridges, often touted as an archival medium, will not suffice either. The cartridges may be physically robust, but the tape drives are not fully backwards compatible: a drive typically reads only the current generation and one or two generations back. As the LTO format is updated every two to three years, a cartridge written more than a few generations ago cannot be played back on a current drive.

There is also no guarantee the operating systems under which digital files are stored will be viable decades from now. Anyone who thinks that a drive formatted under current OS X, Linux or Windows operating systems will be mountable in 30 years should try opening a file stored under Digital Equipment Corporation VMS, Silicon Graphics Irix or Bell Labs Inferno. Good luck.

Is there a solution? Well…

Studios and major media corporations such as DreamWorks and HBO hire archivists who constantly migrate their assets to the latest medium under the latest file systems. But for the rest of us, that’s just not practical.

A reasonable interim solution is to follow the “WCI rule of threes”: always have three copies of every valuable file. For long-term archiving that means keeping two physical copies, ideally in different physical locations, and one copy in the cloud. The cloud copy can live on a file-sharing platform like Dropbox or, for long-term deep storage, in a data-storage service like Amazon S3. This isn’t ideal: the two physical copies may degrade and, as discussed, become unmountable, and the cloud copy depends on a third-party vendor staying in business. But if the physical and cloud copies are checked once a year, it’s a decent compromise – at least for now.
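
The yearly check can be automated with checksums. Below is a minimal sketch, assuming a manifest of SHA-256 hashes was recorded when the archive was created; the function and variable names are illustrative, not part of any WCI tooling.

```python
import hashlib
import pathlib

def sha256_of(path: pathlib.Path, chunk: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks so large media files don't fill RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

def verify(manifest: dict[str, str], root: pathlib.Path) -> list[str]:
    """Return relative paths that are missing or whose contents changed."""
    bad = []
    for rel, expected in manifest.items():
        p = root / rel
        if not p.exists() or sha256_of(p) != expected:
            bad.append(rel)
    return bad
```

Run the check against each of the three copies once a year; any file it flags should be re-copied from a copy that still checks out.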


Shooting RED vs. Alexa

The 2018 Oscar-nominated films for best picture and best cinematography all had one thing in common: they were shot either on film or with Arri Alexa cameras. Nary a RED to be seen. So why consider shooting RED vs. Alexa? The quick answer: in situations where resolution matters more than colorimetry, RED is currently the only option. The long answer:

As storage costs keep dropping and workflow speeds keep increasing, capturing and working with 8K images has become straightforward. A comparison of the open-gate resolutions of the RED Weapon cameras (who chose that ridiculous name?) and the Arri Alexa cameras shows:

RED Weapon Monstro:   8,192 x 4,320
RED Weapon Helium:   8,192 x 4,320
Arri Alexa SXT:   3,168 x 1,778
Arri Alexa Mini:   3,424 x 2,202
Arri Alexa LF:   4,448 x 3,096
Arri Alexa 65:   6,560 x 3,100

Even including the Alexa 65, which is still a rare and expensive machine, the Alexa series can’t match RED in terms of capture resolution.

What demands capturing at 8K – or higher – resolution? Site-specific, ultra-high-resolution, multi-screen installations. E.g., the art films for the new National Museum in Qatar that WCI has been working on with the Doha Film Institute. The final delivery of these films must be as high as 24,576 x 4,320 pixels. A single RED camera obviously can’t achieve that, but because the body of the Helium-sensor RED cameras is relatively small, we were able to combine three of them on a single head to increase our capture resolution, as shown in the picture below.

Helium sensor RED camera

Movies and television shows can also benefit greatly from 8K capture. The ability to fine-tune framing in post is an essential modern tool, as is the ability to stabilize shots. Capturing at 8K and delivering at 4K allows both with no loss of resolution in the delivery. High resolution has obvious benefits for all VFX work, from simple green screen composites to complex CGI integration with live action. And working with files at higher than final delivery resolution adds flexibility to most workflows.

It’s also worth noting the difference between resolution and sharpness. Using 8K files doesn’t necessarily mean showing every pore on an actor’s skin. Quite the opposite, many DPs use diffusion filters or softer lenses, e.g., the lovely new Cooke S7 primes, when capturing at 8K. This can achieve a more flattering look while keeping all the advantages of high resolution.

So why not just shoot RED? Because all that resolution comes at a very steep price in colorimetry and signal-to-noise ratio. While the RED 8K cameras offer exceptional resolution in a small footprint, they appear to trade off these two attributes:

Colorimetry: skin tones shot on REDs look washed out, pasty and just, well, unattractive. That’s great for a movie like “Winter’s Bone” but doesn’t work if people need to look warm and beautiful. And while one can work really hard in a digital intermediate color correction session to make people shot on RED look better, Alexa files seem to naturally start from a place where skin tones and people look good. That said, there is still no comparison to how beautiful people look when captured on film.

Signal-to-noise: those little bits of metallic silver create something quite special in film grain – “Dunkirk” and “The Florida Project” are just two of many recent examples. By comparison, RED sensor noise is very digital and unattractive. I have yet to see an example where it supported the look of the images or the story.

Any discussion of RED vs. Alexa would be incomplete without acknowledging that neither is as beautiful as shooting on motion picture film. Unlike projection or post production, where digital is demonstrably better, as of today there is still no digital motion picture camera that equals the look of capturing on film. The gentle way film rolls off the brightest and darkest parts of an image, the magic of real film grain and the beauty of film colorimetry have still not been equaled digitally. But the cost difference between shooting on film and shooting digitally is staggering. In round numbers, at 24 f.p.s., 16 hours of 35mm 4-perf film stock and processing costs about $150K, vs. about $1K for 16 hours of RAID drive space for ArriRaw open gate footage. Yikes.
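
Those round numbers can be sanity-checked. At 24 f.p.s., 35mm 4-perf runs at 90 feet per minute (16 frames per foot). In the sketch below, the per-foot stock-and-processing rate and the ArriRaw frame size are illustrative assumptions, not quotes:

```python
HOURS = 16
FPS = 24

# Film side: 35mm 4-perf at 24 fps moves 90 ft/min (16 frames per foot)
feet = HOURS * 60 * 90                 # 86,400 ft
cost_per_foot = 1.75                   # assumed stock + processing, USD/ft
film_cost = feet * cost_per_foot       # ~ $150K

# Digital side: ArriRaw open gate, assumed ~9.8 MB per frame
frames = HOURS * 3600 * FPS            # 1,382,400 frames
raid_tb = frames * 9.8 / 1e6           # ~ 13.5 TB of RAID space

print(f"{feet:,} ft of film, roughly ${film_cost:,.0f}")
print(f"{raid_tb:.1f} TB of ArriRaw")
```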

In summary, as of April 2018 WCI recommends that for applications demanding the highest resolution, shoot RED. For all other projects, e.g., where people need to look beautiful, shoot Alexa. Looking forward, I’m hoping Arri continues to increase the resolution of their Alexa cameras and RED improves the colorimetry and noise performance of their cameras.

Working With John Sanborn

“MÉANDRES & MÉDIA, L’ŒUVRE DE JOHN SANBORN” was an exhibition in Paris (2016) and a book about the work of video and media artist John Sanborn, curated by Stephen Sarrazin. These are my recollections of working with John in the 1980s, as published in the book.


New York City in 1982 was a very different place than it is today. The City hadn’t fully recovered from almost going bankrupt in 1975. Things were still rough around the edges: subway cars were covered in graffiti, Times Square was most certainly not Disneyland and SoHo was more Scorsese’s “After Hours” than shopping mall.

Lower Manhattan “pioneers” were able to get large loft spaces for very little money. Musicians, writers, sculptors, painters, video artists and creative engineers lived and worked within spitting distance of each other. The arrival of MTV, MIDI, CDs, the explosion of cable TV and the club scene (remember Danceteria?) created an enormous demand for visual music. As a result, artists of all disciplines and generations were drawn together in a heady mix of creativity and community. Also, the City was awash in blow. That was the year I met John.

I had spent my college years at Rensselaer Polytechnic Institute getting artists and engineers to collaborate. We built hardware, created video art and generally tried not to kill each other – all with mixed success. Upon graduating I made a beeline for Manhattan and went to work for VCA Teletronics, the first independent post production facility. My day job was designing machines to electronically edit video, but my weekends were spent “testing” the equipment with video artists on personal projects. If memory serves, John approached me about working with him and Kit Fitzgerald on a piece they were creating for Adrian Belew called “Big Electric Cat.” That seemed like a logical extension of the work at RPI, and so our partnership began.

Unlike today, getting access to video equipment in the 1980s was a very big deal. If you’ve never known a world where you couldn’t edit cat videos on your iPhone you might have trouble picturing this, but video editing suites required thousands of square feet of floor space and cost millions of dollars to build and operate.  John was willing to spend his weekends working – we did have the keys to all the toys – and I was delighted to have a new collaborator.  Over the course of the next eight years we ended up working on numerous artistic and commercial projects together.

Three things made working with John special: 1) he didn’t struggle making creative decisions, which was especially important because we were layering images in linear editing suites that had no “undo” button; 2) although not an engineer, he grasped technical concepts very quickly; 3) most importantly, John understood that you could be serious about your work while maintaining a rollicking sense of humor. I can’t overstate how important that was and still is.

The process we used to create images was based on “discovering” visuals. We would take what were then state-of-the-art post production tools, e.g., analog switchers and digital video effects devices (DVEs), and feed them back into themselves. This digital feedback in an analog signal path produced unpredictable results. When we discovered things we liked, we recorded them onto videotape. These recorded images would then be layered on top of each other – often as much as fifty or sixty analog generations down. We created a more detailed explanation of this process for a segment called “The Video Artist” on “Night Flight,” a 1980s TV series on the USA cable network covering downtown Manhattan art and culture. You can view it by clicking the photo of us below.

Our process of improvising, capturing and layering imagery is analogous to jazz. By contrast, the rigid formalism of computer graphics imagery is more like classical music. And we liked jazz. That said, John and I were also fans of the avant-garde, which led us to our next piece, “ACT III,” set to the music of Philip Glass. Glass’ music and John’s energy were a good combination. ACT III managed to transcend the barriers of abstract video art and find widespread acceptance.

I’ll always be grateful to John for our next moment of cosmic synergy. One of my musical heroes was composer Robert Ashley. Unbeknownst to me, a few years earlier John had directed the pilot for “Perfect Lives (Private Parts),” Ashley’s brilliant opera for television. When John asked me if there was any music I’d like to work with for our next project, I immediately replied, “Robert Ashley.” “Funny you should say that…” said John. After languishing for several years, Britain’s Channel Four had just approved the funding to create all seven episodes with John directing. Would I like to work on it with him? Trick question?? Of all the work we did together, ACT III and Perfect Lives are my favorites. And Bob Ashley went from hero to friend and mentor until he passed away on March 3, 2014. Eternal thanks to John for that.

In addition to commercial projects we went on to collaborate on four more video art pieces: “Renaissance” (1984) for the Computer Museum in Boston; “Video Wallpaper” (1984), a 50-minute ambient background video for a distributor I’ve long since forgotten; “Luminaire” (1985) for Expo ’86 in Vancouver; and “Infinite Escher” (1990), an early analog high-definition work for Sony. We no longer had to steal weekend time to work on these, although we could only work the night shift. I have fond memories of watching John run around the facility shrieking jokes and doing shticks at 4:00 AM. And, fortunately, most everyone else there at that hour was amused as well.

Things are different today. John moved to Berkeley. I’ve stayed in lower Manhattan and watched as artists got displaced by hedge fund kids. Sitting down at our respective Mac workstations, we each have more computer power than filled all of Teletronics (and then some). And I surely don’t miss the two hours I had to spend aligning all of the analog videotape machines and signal processors to get ready for an evening’s work. There is much to be said for double-clicking and having a project come up just as you left it. But there was an energy that came from working “in the studio” in general, and with John specifically, that I do miss. Skype just isn’t the same.

John Sanborn and Dean Winkler
Sanborn (right) and Winkler (left) in Edit III at VCA Teletronics, NYC 1984

Preserving Nam June Paik’s Work

Nam June Paik (July 20, 1932 – January 29, 2006) was a Korean American artist who is rightly considered to be the father of modern video art. Nam June was a hero, mentor and friend who taught me many things. Perhaps the most important being that creating art and keeping a sense of humor about it go hand-in-hand. (A philosophy embraced by the composer Robert Ashley, who is also sorely missed.)

From 1981 to 1997 Nam June worked with video artist and social activist Paul Garrin. Paul took a gig as Nam June’s assistant and ended up being one of his most important collaborators; together they produced hundreds of works. We basically gave Nam June and Paul a key to Post Perfect to work on art projects – one of the charms of working the night shift was running into them. The picture below shows Nam June (left) and Paul (right) having a video jam session in Post Perfect’s linear edit suite number two.

Unfortunately, many of Nam June’s pieces are falling into disrepair. This is particularly true of the multi-monitor sculptures, or “robots” as he called them, as they were created with vintage analog consumer television gear. These are failing and must either be repaired (very hard) or replaced with digital displays that maintain the look of the originals (even harder). Paul presented a paper on this topic in Seoul last week – click on the picture below to read the text. Bottom line: time is running out to preserve these magnificent works. If you’d like to help the restoration effort, please contact Paul at pg(at)freethe.net

Nam June Paik and Paul Garrin
Nam June Paik (left) and Paul Garrin (right) at Post Perfect, circa 1992

The First Non-Linear Edit System

In 1969 SMPTE released standard 12M, a specification for applying a universal time code to video. Assigning a unique number to every video frame was critical to the development of electronic editing, as it enabled a list of all the edits in a program – the Edit Decision List, or EDL – to be compiled. CBS and Memorex formed a company called CMX Systems to build editing systems using time code and EDLs. If you’ve never known a world where you couldn’t edit cat videos on your iPhone, your reaction might be “meh.” But this was radical engineering at the time.
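
The frame-numbering idea reduces to mixed-radix arithmetic. A minimal sketch in Python (obviously not the language of the era) for non-drop-frame timecode; real EDL tools also handle drop-frame timecode and multiple frame rates:

```python
def timecode_to_frames(tc: str, fps: int = 30) -> int:
    """Convert a non-drop-frame SMPTE timecode "HH:MM:SS:FF"
    to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_timecode(frames: int, fps: int = 30) -> str:
    """Inverse of timecode_to_frames."""
    ss, ff = divmod(frames, fps)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(timecode_to_frames("01:00:00:00"))  # 108000 frames at 30 fps
print(frames_to_timecode(108000))         # 01:00:00:00
```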

In 1971 CMX released their first product: the CMX-600 light pen random access editor. Wildly ahead of its time, it stored monochrome video in analog format on Memorex computer drives and used a DEC PDP-11 mini-computer to control the system via a light pen interface. The system only held 30 minutes of low quality video, the disk drives took up a few hundred square feet of floor space and it cost the equivalent of three million bucks in 2015 dollars. But this revolutionary machine was the forerunner of all modern non-linear editing systems.

Of the first five systems CMX sold, three went to CBS, one to CFI in LA and one to Teletronics in NYC. I had the privilege of sharing an office with the CMX-600 disk farm at my first full-time job as an engineer at Teletronics. Typing this post on my Mac workstation, with its 82 terabytes of attached storage, I can’t say I miss the big multi-disk Memorex platters, each of which held only 5 minutes of video. But I do have a fondness for the CMX-600 and a great appreciation of the monumental effort it took to create it. And there’s much to be said for a heavily air-conditioned machine room.

Thanks to Robert Lund for the pictures below. Lundo left Bell Labs to join Teletronics as one of the first digital engineers in post production. He maintained the CMX-600, wrote custom software for it and had the temerity to hire me in 1981 to design and build hardware for him.

The CMX-600
The CMX-600, as shown in the original CMX product brochure (1971)
Editing on a CMX-600
The real “Mad Men” edit a commercial at Teletronics using the CMX-600 (circa 1976)

On Analog Computing

Joost Rekveld wrote a fascinating and detailed blog post about analog computing. Worth a look just for the vintage big-iron eye candy.  Click the image below to hop over to it (please do come back when you’re done).

Analog Computer

How Much Resolution is Enough?

If you attended or read about this year’s Consumer Electronics Show it was hard to escape the hype about the new high resolution televisions. 4K! 8K! Unbelievable amounts of K! But how much display resolution does one actually need?

The short answer: it all depends on viewing distance from the screen.

The long answer: resolution is one of several factors that need to be considered together when evaluating capture, post production and display technology. In much the way signal-to-noise ratio, frequency response and distortion are always looked at together when evaluating audio systems, resolution needs to be weighed along with dynamic range, contrast, brightness, color gamut and frame rate when evaluating image systems. My friend Mark Schubin gave an excellent (as always) presentation about this in November 2014, which is embedded below.

The research WCI did for the large-scale immersive films the Doha Film Institute is creating showed that the key driver of how much resolution is enough is viewing distance from the screen. To wit: if you’re thinking about replacing your 50-inch TV with a 4K model and your couch is 150 inches back from the screen, don’t bother. You won’t be able to perceive the difference in resolution. However, increasing the dynamic range of the image, as Dolby is proposing, along with the contrast and brightness of the display, will have a very, very noticeable effect and will yield greater benefits than increasing resolution.

On the other hand, at very close distances, e.g., less than one screen height away, resolution does become critical. Continuing the above example, if you like to watch your 50-inch TV from 40 inches away, a 4K set will indeed change your life, although probably not as much as a new pair of glasses.
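
That rule of thumb can be estimated from geometry, using the common ~1 arcminute figure for 20/20 visual acuity. A rough sketch; the function name and acuity figure are my assumptions, and real perception also depends on contrast and content:

```python
import math

def pixels_resolvable(diag_in: float, distance_in: float,
                      aspect=(16, 9), acuity_arcmin: float = 1.0) -> int:
    """Estimate how many horizontal pixels a viewer can resolve,
    using the ~1 arcminute rule of thumb for 20/20 acuity."""
    width_in = diag_in * aspect[0] / math.hypot(*aspect)
    # Horizontal angle the screen subtends, converted to arcminutes
    angle_arcmin = 2 * math.degrees(math.atan(width_in / (2 * distance_in))) * 60
    return int(angle_arcmin / acuity_arcmin)

# 50-inch TV viewed from 150 inches: fewer resolvable pixels than even HD,
# so a 4K panel adds no visible detail at this distance.
print(pixels_resolvable(50, 150))
# The same TV from 40 inches: well above 1920, so 4K detail becomes visible.
print(pixels_resolvable(50, 40))
```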

An unexpected corollary was that at very short viewing distances, display resolution is actually more important than source resolution. We compared 2K and 4K projectors viewed from half a screen height away, with both 2K and 4K source material. The perceived quality improvement going from a 2K projector to a 4K projector, with 2K source material on both, was greater than the improvement going from 2K to 4K source material on the same 4K projector.

Looking beyond display, there are obviously other reasons for capturing and posting at high resolution. Generally speaking, more resolution means more flexibility: the ability to zoom and reposition an image in post, to stabilize shots, to repurpose material, etc. But this is not free. Data space and processing time grow with the square of linear resolution – doubling the pixel count in each dimension quadruples the data. So if one is creating a webisode for YouTube, it is perhaps best not to shoot 100 hours of 6K RED Dragon files.
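
To see how quickly the data adds up, here is a rough uncompressed frame-size sketch. The 16-bit-per-channel RGB figure is an assumption chosen for round numbers; real camera RAW formats pack data differently:

```python
BYTES_PER_PIXEL = 3 * 2  # RGB, 16 bits per channel (illustrative)

def frame_mb(width: int, height: int) -> float:
    """Uncompressed size of one frame, in megabytes."""
    return width * height * BYTES_PER_PIXEL / 1e6

for name, w, h in [("2K", 2048, 1080), ("4K", 4096, 2160), ("8K", 8192, 4320)]:
    print(f"{name}: {frame_mb(w, h):.1f} MB/frame")
# Each doubling of linear resolution quadruples the per-frame data.
```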

Winkler Consulting Inc.