Shooting RED vs. Alexa                                                                       April 2018


The 2018 Oscar-nominated films for best picture and best cinematography all had one thing in common: they were shot on film or with Arri Alexa cameras. Nary a RED to be seen. So why consider shooting RED vs. Alexa? The quick answer: for situations where resolution is more important than colorimetry, shooting RED is currently the only option. The long answer:


As storage costs keep dropping and workflow speeds keep increasing, capturing and working with 8K images has become straightforward. A comparison of the open gate resolutions of the RED Weapon cameras (who chose that ridiculous name?) and the Arri Alexa cameras shows:


                                          RED Weapon Monstro:   8,192 x 4,320

                                           RED Weapon Helium:   8,192 x 4,320

                                                    Arri Alexa SXT:   3,168 x 1,778

                                                   Arri Alexa Mini:   3,424 x 2,202

                                                      Arri Alexa LF:   4,448 x 3,096

                                                     Arri Alexa 65:   6,560 x 3,100


Even including the Alexa 65, which is still a rare and expensive machine, the Alexa series can’t match RED in terms of capture resolution.


What demands capturing at 8K or higher resolution? Site-specific, ultra-high-resolution, multi-screen installations. E.g., the art films for the new National Museum in Qatar that WCI has been working on with the Doha Film Institute. The final delivery of these films must be as high as 24,576 x 4,320 pixels. A single RED camera obviously can’t achieve that. But because the body of the Helium-sensor RED cameras is relatively small, we were able to combine three of them on a single head to increase our capture resolution, as shown in the pic below.
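The rig math works out neatly. A quick sketch (Python; my own back-of-the-envelope arithmetic, which deliberately ignores any overlap or stitching loss between the three frames):

```python
import math

# One RED Helium sensor and the museum delivery spec, both from above.
SENSOR_W, SENSOR_H = 8192, 4320
TARGET_W, TARGET_H = 24_576, 4_320

# Cameras needed if the frames tile horizontally edge-to-edge.
cameras = math.ceil(TARGET_W / SENSOR_W)
combined_w = cameras * SENSOR_W

print(f"cameras needed: {cameras}")      # 3
print(f"combined width: {combined_w}")   # 24576 -- exactly the delivery width
```

A real stitch needs some overlap between adjacent frames, so in practice the usable combined width is a bit less than this idealized number.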



























Movies and television shows can also benefit greatly from 8K capture. The ability to fine-tune framing in post is an essential modern tool, as is being able to stabilize shots. Capturing at 8K and delivering at 4K allows both of those with no loss of resolution in the delivery. High resolution has obvious benefits for all VFX work, from simple green screen composites to complex CGI integration with live action. And working with files at a higher resolution than the final delivery resolution adds flexibility to most workflows.


It’s also worth noting the difference between resolution and sharpness. Using 8K files doesn’t necessarily mean showing every pore on an actor’s skin. Quite the opposite: many DPs use diffusion filters or softer lenses, e.g., the lovely new Cooke S7 primes, when capturing at 8K. This can achieve a more flattering look while keeping all the advantages of high resolution.


So why not just shoot RED? Because all that resolution comes at a very steep price in colorimetry and signal-to-noise ratio. While the RED 8K cameras offer exceptional resolution in a small footprint, they appear to trade off these two attributes:


Colorimetry: skin tones shot on REDs look washed out, pasty and just, well, unattractive. That's great for a movie like "Winter's Bone" but doesn't work if people need to look warm and beautiful.  And while one can work really hard in a digital intermediate color correction session to make people shot on RED look better, Alexa files seem to naturally start from a place where skin tones and people look good. That said, there is still no comparison to how beautiful people look when captured on film.


Signal-to-noise: Those little bits of metallic silver create something quite special in film grain. "Dunkirk" and "The Florida Project" are just two of many recent examples. By comparison, RED sensor noise is very digital and unattractive. I have yet to see an example where it supported the look of the images or the story.


Any discussion of RED vs. Alexa would be incomplete without acknowledging that neither of them is as beautiful as shooting on motion picture film. Unlike projection or post production where digital is demonstrably better, as of today there is still no digital motion picture camera that equals the look of capturing on film. The gentle way film rolls off the brightest and darkest parts of an image, the magic of real film grain and the beauty of film colorimetry have still not been equaled digitally. But the cost difference between shooting on film vs. digitally is staggering. In round numbers, at 24 f.p.s. 16 hours of 35mm 4-perf film stock and processing is about $150K vs. $1K for 16 hours of RAID drive space for ArriRaw open gate footage. Yikes.
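The digital side of that comparison can be sanity-checked with rough arithmetic. The sketch below assumes 12-bit packed ARRIRAW open gate at 3424 x 2202; that per-frame size is my own approximation, not an official Arri figure:

```python
# Rough storage estimate for 16 hours of ArriRaw open gate at 24 fps.
# Assumes 3424 x 2202 photosites at 12 bits each, packed; real ARRIRAW
# files differ somewhat, so treat the totals as order-of-magnitude.
W, H = 3424, 2202
BITS_PER_PHOTOSITE = 12
FPS = 24
HOURS = 16

bytes_per_frame = W * H * BITS_PER_PHOTOSITE // 8
frames = FPS * 3600 * HOURS
total_tb = bytes_per_frame * frames / 1e12

print(f"per frame: {bytes_per_frame / 1e6:.1f} MB")   # ~11.3 MB
print(f"total:     {total_tb:.1f} TB")                # ~15.6 TB
```

At the roughly $60 to $70 per terabyte that RAID storage cost in early 2018 (again, my estimate), ~16 TB lands right around the $1K figure above.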


In summary, as of April 2018 WCI recommends that for applications demanding the highest resolution, shoot RED. For all other projects, e.g., where people need to look beautiful, shoot Alexa. Looking forward, I’m hoping Arri continues to increase the resolution of their Alexa cameras and RED improves the colorimetry and noise performance of their cameras.


Working With John Sanborn                                                         November 2016


"MEANDRES & MEDIA, L'ŒUVRE DE JOHN SANBORN" was an exhibition in Paris (2016) and a book about the work of video and media artist John Sanborn, curated by Stephen Sarrazin. These are my recollections of working with John in the 1980s, as published in the book.



New York City in 1982 was a very different place than NYC today.  Lower Manhattan “pioneers” were able to get large loft spaces for very little money; MTV, MIDI, CDs, the explosion of cable TV and the club scene (remember Danceteria?)  were about to drive demand for visual music through the roof.  And there was a little bit of danger and an electric energy in the air. As a result, artists of all disciplines and generations were drawn together in a heady mix of creativity and community.  Also, the city was awash in blow. That was the year I met John.


I had spent my college years at Rensselaer Polytechnic Institute working on getting artists and engineers to collaborate. We built hardware, created video art and generally tried not to kill each other -- all with mixed success. Upon graduating I made a beeline for NYC and went to work for VCA Teletronics, the first independent post production facility.  My day job was designing machines to electronically edit video but my weekends were spent “testing” the equipment with video artists on personal projects.  If memory serves, John approached me about working with him and Kit Fitzgerald on a piece they were creating for Adrian Belew called “Big Electric Cat.” That seemed like a logical extension of the work at RPI, and so our partnership began.


Unlike today, getting access to video equipment back then was a big deal. If you’ve never known a world where you couldn’t edit cat videos on your iPhone you might have trouble picturing this, but video edit suites took up lots of space and cost millions of dollars to build and operate.  John was willing to spend his weekends working – we did have the keys to all the toys – and I was delighted to have a new collaborator.  Over the course of the next eight years we ended up working on numerous artistic and commercial projects together.


Three things made working with John special: 1.) he didn’t struggle making creative decisions. That was especially important as we were layering images in a linear editing suite without an “undo” button. 2.) although not an engineer he grasped technical concepts very quickly.  And most important, 3.) John understood that you could be serious about your work but still maintain a rollicking sense of humor while working.  I can’t overstate how important that was and still is.


The process we used to create images was based on “discovering” visuals. We would take what were then state-of-the-art post production tools, e.g., analog switchers and digital video effects devices (DVEs) and feed them back into themselves. This digital feedback in an analog signal path produced unpredictable results. When we discovered things we liked we recorded them onto videotape.  These recorded images would then be layered on top of each other – often as much as fifty or sixty analog generations down.  We created a more detailed explanation of this process for a segment called “The Video Artist” on “Night Flight,” a 1980s TV series on the USA cable network covering downtown Manhattan art and culture.  You can view it by clicking the photo of us below.


Our process of improvising, capturing and layering imagery is analogous to jazz. By contrast, the rigid formalism of computer graphics imagery is more like classical music. And we liked jazz. That said, John and I were also fans of the avant garde which led us to our next piece, “ACT III” set to the music of Philip Glass.  Glass’ music and John’s energy were a good combination.  ACT III managed to transcend the barriers of abstract video art and find widespread acceptance.


I’ll always be grateful to John for our next moment of cosmic synergy. One of my musical heroes was composer Robert Ashley.  Unbeknownst to me, a few years earlier John had directed the pilot for “Perfect Lives (Private Parts),” Ashley’s brilliant opera for television. When John asked me if there was any music I’d like to work with for our next project I immediately replied, “Robert Ashley.” “Funny you should say that…” said John. After languishing for several years, Britain’s Channel Four had just approved the funding to create all seven episodes with John directing. Would I like to work on it with him, he asked. Trick question?? Of all the work we did together, ACT III and Perfect Lives are my favorites. And Bob Ashley went from hero to my friend and mentor until he passed away on March 3, 2014. Eternal thanks to John for that.


In addition to commercial projects we went on to collaborate on four more video art pieces: “Renaissance” (1984) for the Computer Museum in Boston,  “Video Wallpaper” (1984) a 50 minute ambient background video for a distributor I’ve long since forgotten, “Luminaire” (1985) for Expo ’86 in Vancouver and “Infinite Escher” (1990) an early analog high definition work for Sony.  We no longer had to steal weekend time to work on these, although we could only work the night shift. I have fond memories of watching John run around the facility shrieking jokes and doing shticks at 4:00 AM.  And, fortunately, most everyone else there at that hour was amused as well.


Things are different today. John moved to Berkeley. I’ve stayed in lower Manhattan and watched as artists got displaced by hedge fund kids. Sitting down at our respective Mac workstations, we each have more computer power than filled all of Teletronics (and then some).  And I surely don’t miss the two hours I had to spend aligning all of the analog videotape machines and signal processors to get ready for an evening’s work.  Much to be said for double clicking and having a project come up just as you left it. But there was an energy that came from working “in the studio” in general and with John specifically that I do miss.  Skype just isn’t the same.

Sanborn (right) and Winkler (left) in Edit III at VCA Teletronics, NYC 1984

Preserving Nam June Paik's Work                                                   February 2016


Nam June Paik (July 20, 1932 - January 29, 2006) was a Korean American artist who is rightly considered to be the father of modern video art. Nam June was a hero, mentor and friend who taught me many things.  Perhaps the most important being that creating art and keeping a sense of humor about it go hand-in-hand. (A philosophy embraced by the composer Robert Ashley, who is also sorely missed.)


From 1981 - 1997 Nam June worked with video artist and social activist Paul Garrin. Paul took a gig as Nam June's assistant and ended up being one of his most important collaborators; together they produced hundreds of works. We basically gave Nam June and Paul a key to Post Perfect to work on art projects -- one of the charms of working the night shift was running into them. The picture below is of Nam June (left) and Paul (right) having a video jam session in Post Perfect's linear edit suite number two.


Unfortunately, many of Nam June's pieces are falling into disrepair. This is particularly true for the multi-monitor sculptures, or "robots" as he called them, as they were created with vintage analog consumer television gear. These are failing and must either be repaired (very hard) or replaced with digital displays that maintain the look of the originals (even harder). Paul presented a paper on this topic in Seoul last week -- click on the picture below to read the text of it. Bottom line: time is running out to preserve these magnificent works. If you'd like to help the restoration effort please contact Paul at: pg(at)








Nam June Paik and Paul Garrin at Post Perfect, circa 1992

The First Non-Linear Edit System                                                 September 2015


In 1969 SMPTE released standard 12M – a specification for applying a universal time code to video. Assigning a unique number to every video frame was critical to the development of electronic editing as it enabled a list of all the edits in a program to be compiled. Thus was born the Edit Decision List or EDL. CBS and Memorex formed a company called CMX Systems to build editing systems using time code and EDLs. If you’ve never known a world where you couldn’t edit cat videos on your iPhone your reaction might be “meh.” But this was some radical engineering at the time.
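Time code gives every frame an address of the form HH:MM:SS:FF, and an EDL is essentially a list of such addresses. A minimal sketch of the frame arithmetic (Python, non-drop-frame at an assumed integer 30 fps; drop-frame time code for 29.97 fps material needs compensation not shown here):

```python
# Convert a non-drop-frame SMPTE time code (HH:MM:SS:FF) to an absolute
# frame number and back. Fixed integer frame rate assumed.
FPS = 30

def tc_to_frames(tc: str) -> int:
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(n: int) -> str:
    f = n % FPS
    s = (n // FPS) % 60
    m = (n // (FPS * 60)) % 60
    h = n // (FPS * 3600)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

print(tc_to_frames("01:00:00:00"))   # 108000
print(frames_to_tc(108001))          # 01:00:00:01
```

An edit in an EDL is then just a pair of source in/out frame numbers mapped to a record in point, which is what made list-driven electronic editing possible.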


In 1971 CMX released their first product: the CMX-600 light pen random access editor. Wildly ahead of its time, it stored monochrome video in analog format on Memorex computer drives and used a DEC PDP-11 mini-computer to control the system via a light pen interface. The system only held 30 minutes of low quality video, the disk drives took up a few hundred square feet of floor space and it cost the equivalent of three million bucks in 2015 dollars. But this revolutionary machine was the forerunner of all modern non-linear editing systems.


Of the first five systems CMX sold, three went to CBS, one to CFI in LA and one to Teletronics in NYC. I had the privilege of sharing an office with the CMX-600 disk farm at my first full time job as an engineer at Teletronics. Typing this post on my Mac workstation, with its 82 Terabytes of attached storage, I can’t say I miss the big multi-disk Memorex platters, each of which held only 5 minutes of video. But I do have a fondness for the CMX-600 and a great appreciation of the monumental effort it took to create it. And there’s much to be said for a heavily air conditioned machine room.


Thanks to Robert Lund for the pictures below. Lundo left Bell Labs to join Teletronics as one of the first digital engineers in post production. He maintained the CMX-600, wrote custom software for it and had the temerity to hire me in 1981 to design and build hardware for him.




The CMX-600, as shown in the original CMX product brochure (1971)

The real "Mad Men" edit a commercial at Teletronics using the CMX-600 (circa 1976)

On Analog Computing                                                                          June 2015             

Joost Rekveld wrote a fascinating and detailed blog post about analog computing. Worth a look just for the vintage big-iron eye candy.  Click the image below to hop over to it (please do come back when you're done).

How Much Resolution is Enough?                                                  February 2015             

If you attended or read about this year’s Consumer Electronics Show it was hard to escape the hype about the new high resolution televisions. 4K! 8K! Unbelievable amounts of K! But how much display resolution does one actually need?


The short answer: it all depends on viewing distance from the screen.


The long answer: resolution is one of several factors that need to be considered together when evaluating capture, post production and display technology. In much the way signal-to-noise ratio, frequency response and distortion are always looked at together when evaluating audio systems, resolution needs to be considered along with many other factors when evaluating image systems, including dynamic range, contrast, brightness, color gamut and frame rate. My friend Mark Schubin gave an excellent (as always) presentation about this in November 2014, which is pasted below.


The research WCI did for the large scale immersive films the Doha Film Institute is creating showed that the key driver of how much resolution is enough is viewing distance from the screen. To wit, if you’re thinking about replacing your 50 inch TV with a 4K model and your couch is 150 inches back from the screen, don’t bother. You won’t be able to perceive the difference in resolution. However, increasing the dynamic range of the image (as Dolby is proposing) and increasing the contrast and brightness of the display will have a very, very noticeable effect and will yield greater benefits than increasing resolution.
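That rule of thumb can be sketched numerically. Assuming 20/20 vision resolves about one arcminute (the standard textbook approximation, not a WCI measurement), the angle one pixel subtends at the eye tells you whether more pixels are perceivable:

```python
# Does a pixel subtend more than ~1 arcminute (the classic 20/20 acuity
# limit) at a given viewing distance? If not, extra resolution is
# imperceptible. Assumes 16:9 screen geometry; sizes in inches.
import math

ACUITY_ARCMIN = 1.0  # standard approximation for 20/20 vision

def pixel_arcmin(diagonal_in: float, h_pixels: int, distance_in: float) -> float:
    width = diagonal_in * 16 / math.hypot(16, 9)   # 16:9 screen width
    pitch = width / h_pixels                        # size of one pixel
    return math.degrees(math.atan2(pitch, distance_in)) * 60

for pixels in (1920, 3840):
    a = pixel_arcmin(50, pixels, 150)   # 50" TV viewed from 150" away
    verdict = "resolvable" if a > ACUITY_ARCMIN else "not resolvable"
    print(f"{pixels} px wide: {a:.2f} arcmin ({verdict})")
```

Running the same function at a 40 inch viewing distance gives roughly 1.95 arcminutes for 1080p and 0.98 for 4K, which is why the extra resolution only pays off up close.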


On the other hand, at very close distances, e.g., less than one screen height away, resolution does become critical.  Continuing the above example, if you like to watch your 50 inch TV from 40 inches away a 4K set will indeed change your life, although probably not as much as a new pair of glasses.


An unexpected corollary was that at very short viewing distances display resolution is actually more important than source resolution. We compared 2K and 4K projectors viewed from ½ screen height with both 2K and 4K source material. The perceived quality improvement from replacing a 2K projector with a 4K projector (showing 2K source material on both) was greater than the improvement from then feeding that same 4K projector 4K source material instead of 2K.


Looking beyond display, there are, obviously, other reasons for capturing and posting at high resolution. Generally speaking, more resolution means more flexibility, e.g., the ability to zoom and reposition an image in post, the ability to add stabilization, the ability to repurpose material, etc. But this is not free. Data size and processing time grow with the square of linear resolution: doubling the pixel count in each dimension quadruples the data. So if one is creating a webisode for YouTube, perhaps it’s best not to shoot 100 hours of 6K Red Dragon files.












Video Facility Migration to an IT Based Infrastructure                     February 2015


Video facilities have always used specialized, expensive hardware to interconnect equipment. From the equalized coaxial cable runs required for NTSC or PAL, to multi-wire analog component, to parallel digital (which I seriously don’t miss – remember white video speckles?), to today’s serial digital coaxial interconnections, the professional video environment has always required dedicated interconnection schemes. These required specialized hardware and were very format specific, i.e., each signal path could only carry one type of signal to one place.


The evolution and ubiquity of computer networking is about to change this. As information technology continues to improve it’s able to handle more complex signals with higher bandwidth and strict latency requirements. Why build expensive, dedicated signal paths if “generic” Ethernet switches can handle video interconnections with format and routing flexibility?
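As a rough feasibility check (my own arithmetic; it ignores blanking, audio and IP packet overhead, so real-world figures run higher), compare uncompressed video bitrates against a common Ethernet link speed:

```python
# Uncompressed active-picture bitrate vs. Ethernet link capacity.
# Assumes 10-bit 4:2:2 sampling (20 bits per pixel); blanking, audio
# and packet overhead are ignored, so treat these as lower bounds.
FORMATS = {
    "1080p60": (1920, 1080, 60),
    "2160p60": (3840, 2160, 60),
}
BITS_PER_PIXEL = 20  # 10-bit 4:2:2
LINK_GBPS = 10       # one 10 Gigabit Ethernet port

rates = {}
for name, (w, h, fps) in FORMATS.items():
    gbps = w * h * fps * BITS_PER_PIXEL / 1e9
    rates[name] = gbps
    verdict = "fits" if gbps < LINK_GBPS else "does not fit"
    print(f"{name}: {gbps:.2f} Gb/s -> {verdict} in {LINK_GBPS} GbE")
```

Note that 2160p60 lands right at the edge of a 10 GbE link even before overhead, which suggests why uncompressed UHD over IP pushes facilities toward faster links or compression.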


Unfortunately, we’re still a few years away from this being practical. But it is the direction technology is heading in. The January SMPTE NYC Chapter meeting was devoted to this subject, a recording of which is pasted below.


Ironically, the world of video projection is heading – with good reason – in the other direction. All professional projectors now offer the option of coaxial serial digital input connections. And while there may be a next generation of display port technology that ultimately replaces it, for now we believe serial digital interconnection is the highest quality, most reliable method of feeding projectors. By contrast, if someone is setting up a projection system for you – particularly if it involves multiple projectors – and they want to use computer DVI interconnections, run away. Far, far away.