If you really wanted to make a 4K end-to-end movie, what would that entail?  

While my team and I have been working with 4K material for close to 4 years, we recently got a chance to perform the mastering of our first internationally delivered 4K feature film: The Girl with the Dragon Tattoo. 

In a way, it was really the summation of 4 and 1/2 years of planning that led to the 4K execution of this film, and I am confident that enough audiences will feel the benefits of 4K to warrant a rapid international expansion.  But as people all over the world begin to plan for 4K, this post discusses some of the technical and creative challenges we faced, in the hope of helping others cultivate sound workflows so that 4K technology itself doesn’t get in the way of the creative process.  

First and foremost, I must thank the director, David Fincher; director of photography, Jeff Cronenweth; post supervisor, Peter Mavromates; and assistant editor, Tyler Nelson for allowing my company and my team to collaborate with them on The Girl with the Dragon Tattoo. These individuals are truly masters of their craft, and I believe they are sincerely pushing art and technology in a direction that needs leadership as well as fearless augmentation.  Regardless of how you feel about the past few movies this team has made, if you are going to make a 4K movie going forward, simply doing what David does is a great place to start.

The Girl with the Dragon Tattoo (GDT) is an end-to-end file-based feature film that represents much of the greatest technology available to us at this time.  This includes cameras, codecs, color science, software, hardware, projection and distribution techniques that I am confident have never been used all together at this level and at this speed.  A good example of this is that when we were finishing The Social Network in September of 2010, GDT began shooting overseas in Sweden.  At that time, the newest RED camera, EPIC, had not been fully completed and was not ready for use as principal photography began on GDT.  By December, we started using the first EPICs on SONY’s THE AMAZING SPIDERMAN (3D), which planned to shoot nearly 100% in Los Angeles.  With EPIC being “battle-tested” on SPIDERMAN, development on the camera and its stability took place mostly in Los Angeles while GDT shot on the RED ONE MX camera.  Due to this scheduling, approximately 2/3rds of GDT was photographed using the RED ONE MX camera and 1/3rd captured on EPIC after camera builds matured.

There are scores of incredible moments in GDT that I believe audiences as well as filmmakers are going to be talking about for some time.  David’s talent has a way of rubbing off on people who admire his work, and this film is full of these moments. However, I wish to highlight some noteworthy components which fashioned a technical and creative blend that I believe all filmmakers should consider…or at least be aware of.


GDT is approximately 230,000 frames long.  Due to the amount of visual effects in this film and the timeline for editorial, vfx, conform and DI, the film was debayered to 10bit DPX files in 4K and 5K respectively in a 2:1 aspect ratio.  RED MX files came in at 4352×2176 and EPIC files came in at 5120×2560.  These files averaged out to approximately 45MB per frame.  For those of you doing the math, this comes to a little over 1GB per second of data.  It also means that much of the DI was done at 5K, not 4K.  That’s roughly 33% larger than 4K.  I was recently asked in an interview, “What are 3 things people should be concerned about when preparing for a 4K future?”  The answer is simple:

1. Playback

2. Playback

3. Playback
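Before getting into drive specifics, the raw arithmetic behind that answer is worth a quick sketch. The frame size, frame rate and frame count below are the approximate figures quoted above; treat the results as illustrative averages, not exact deliverable specs:

```python
# Back-of-the-envelope uncompressed 4K/5K DI math, using the approximate
# figures from this post. These are illustrative averages, not exact specs.

FRAME_MB = 45           # average uncompressed 10bit DPX frame
FPS = 24                # playback rate
TOTAL_FRAMES = 230_000  # approximate length of GDT

data_rate_mb_s = FRAME_MB * FPS                # sustained rate for real-time playback
film_tb = FRAME_MB * TOTAL_FRAMES / 1_000_000  # one uncompressed copy of the picture

print(f"playback rate: {data_rate_mb_s} MB/s (~{data_rate_mb_s / 1000:.2f} GB/s)")
print(f"single copy:  ~{film_tb:.1f} TB of uncompressed DPX frames")
```

Note that a project's total footprint is considerably larger than a single copy of the picture once elements, versions and sources are counted.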

Most feature releases with heavy visual effects pipelines are going to need to do everything uncompressed.  This is not the only choice people have, but it is the best choice when dealing with a 50%+ VFX ratio.  It’s also an ideal way to work when there are numerous parts of the process being worked on simultaneously.  Many people worry they will need a lot of space for 4K, but that isn’t necessarily the case.  With files that exceed 1GB per second, it’s not all about capacity.  Today’s market price for a single gigabyte of storage is around $0.20 USD.  So it is likely that many people already have enough storage to easily hold a 4K movie in its uncompressed state.  In the case of GDT, the uncompressed elements came to approximately 55 terabytes of total storage.  On the whole, that’s not that much storage, probably only around $25,000 worth of actual drives.  What many will need to consider instead is the speed at which these drives will need to play back reliably.  At 1GB per second, drives need to be configured in two ways:

1. PLAYBACK DRIVES need to be optimized for a minimum of 1.5 gigabytes per second sustained per stream of playback.  Drives will need to be RAID-protected (which slows them down) and need to be large enough that no more than 60% of them are full at any one time (or they slow down again).  Plus, when dealing with the scale and schedule of a 4K DI, drives need to be configured to play more than one stream or version of the film simultaneously.

2. SHUTTLE DRIVES need to be optimized for a minimum of 500 megabytes per second sustained transfer rates.  After a DI is complete, there are many agencies that need copies of the finished files, which need to be delivered in a timely manner.  FireWire and eSATA are not viable for these transfers because their bandwidth limit is far slower than considered acceptable (eSATA tops out around 300MB/s, or 3x real time).  Most of our transfer times on projects are scheduled to meet REAL TIME requirements.  However, with today’s drive technology, a series of small portable disks cannot currently achieve 1GB per second, so we have to settle for 500+ megabytes per second, which is as good as we can do right now.  This means transfer times are approximately 2x real time, which is slow, but manageable.  Light Iron and the GDT team worked with MAXX Digital, who helped optimize small shuttle SAS drives we call “shoeboxes” that enable us to move data at around 600 megabytes per second.  At nearly 2/3rds real time, these small shoeboxes were used to move data to and from Light Iron as well as various other vendors dealing with the film on a 1-reel-per-shoebox configuration.  This meant reels in their various stages could be managed in smaller, self-contained volumes, which made things a bit easier to track and manage without too much waiting time.
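To put those shuttle-drive numbers in perspective, here is a hedged sketch of per-reel transfer times. The frame size and drive speeds come from the figures above; the per-reel frame count is my assumption (230,000 frames split evenly across 9 reels):

```python
# Hypothetical per-reel transfer-time math. Frame size and drive speeds are
# from the post; the even 9-way reel split is an assumption for illustration.

FRAME_MB = 45
REAL_TIME_MB_S = FRAME_MB * 24   # ~1080 MB/s to move footage as fast as it plays
reel_frames = 230_000 // 9       # assumed average reel length
reel_gb = reel_frames * FRAME_MB / 1000

# eSATA ceiling, the 500 MB/s minimum spec, and the SAS "shoebox" rate
for speed_mb_s in (300, 500, 600):
    minutes = reel_gb * 1000 / speed_mb_s / 60
    ratio = REAL_TIME_MB_S / speed_mb_s      # how many times slower than real time
    print(f"{speed_mb_s} MB/s: ~{minutes:.0f} min per reel ({ratio:.1f}x real time)")
```

Under these assumptions a roughly 1.1TB reel moves in about half an hour on a 600 MB/s shoebox, versus over an hour on eSATA.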


Putting this into practice, as we were getting down to the final push, it was common to have 2 reels of the film actively playing back, 1 reel of an output being QC’d and another reel being transferred.  This means our collective network was peaking around 4 gigabytes per second.  Light Iron post producer Katie Fellion prepared the facility ahead of time by implementing techniques with CTO Chris Peariso so that 4 gigabytes per second would be achievable.  My advice to the community is to perform benchmark tests well ahead of time so that grading, QC and transferring are not caught in a network “tug-of-war,” and each of these steps can be executed without inhibiting the work in the room next door.
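That peak demand can be planned for by simply summing the concurrent streams. The stream list and per-stream rates below are my assumptions (roughly real-time uncompressed playback per stream plus a shoebox transfer), meant only to illustrate the kind of budget a facility has to hit:

```python
# Sketch of aggregate-bandwidth budgeting for a final DI push. The stream
# list and rates are assumptions for illustration; a real plan would use
# measured numbers from benchmark tests.

concurrent_streams_gb_s = {
    "reel playback (color bay 1)": 1.1,   # ~real-time uncompressed 4K/5K
    "reel playback (color bay 2)": 1.1,
    "output QC playback":          1.1,
    "shuttle drive transfer":      0.6,   # SAS shoebox copy
}

peak_gb_s = sum(concurrent_streams_gb_s.values())
print(f"peak network demand: ~{peak_gb_s:.1f} GB/s")  # what the network must sustain
```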

When most films are in DI, it is common to complete each stage and move the film from point A to B to C.  With GDT, we were aware ahead of time that the delivery might not allow for such time.  So as the film was being finished, we created an assembly line, much like a car assembly line, in which finished DCI P3 reels would be output, converted through 32-vertex cubes for film record, QC’d and transferred all at the same time.  With a 9-reel film, a common step-by-step example of this data assembly line looked something like this, all happening at once:

• Reel 5 in the color assist bay with Monique Eissing being prepared for the final color pass (Theater 3 using Quantel Pablo #2)

• Reel 4 undergoing the final color pass in the premiere color bay with the client by Ian Vertovec (Theater 1 using Quantel Pablo #1)

• Reel 3 being converted from DCI P3 to film log (using a 12-core MacPro)

• Reel 2 being transferred to a shuttle shoebox drive (using a 12-core MacPro)

• Reel 1 being QC’d for the conversion from P3 to film log (Theater 2 using DVS Clipster)

As the world continues to become more and more comfortable with 4K, post production teams will need to increase not only the capacity of their networks, but more importantly the bandwidth shared amongst users.  Blackmagic makes a great free speed test tool that helped us evaluate system performance and address potential bottlenecks in the process.  As 4K becomes more and more routine, I recommend building an assembly-line plan and benchmarking all of your I/O speeds in each phase of the process.  This will help you find cracks in the system and address them instead of investing in drives that you don’t need.
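In the same spirit as the disk speed test mentioned above, even a tiny script can give a first-order benchmark of a volume's sequential write speed. This is a minimal sketch, not a full QC tool: a real evaluation would also test reads, much larger files and concurrent streams:

```python
# Minimal sequential-write benchmark sketch. This measures one write stream
# to one path; treat the result as a starting point, not a full evaluation.
import os
import time

def write_benchmark(path, total_mb=64, chunk_mb=8):
    """Return sustained sequential-write speed in MB/s for the given path."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())   # force data to disk before stopping the clock
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

print(f"~{write_benchmark('speedtest.bin'):.0f} MB/s sequential write")
```

Run it against each volume in the pipeline (playback RAID, shuttle drive, render storage) to see where the cracks are before the schedule finds them for you.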




On The Social Network, David utilized a technique that allowed him ample choices for reframing and stabilization by capturing 4K and 5K images with a 10% look-around pad that was pre-framed in camera.  In the case of GDT, because EPIC was used, the look-around image could be increased to roughly a 20% pad.  In the past, when shooting film, it was common for people to frame differently from what the viewfinder or gate was photographing on the original negative (hence one of the needs for shooting framing charts).  With HD video cameras, there is not enough resolution to accomplish this, and cameras typically displayed what they were recording with no look-around area or padding.  The result was more of a “WYSIWYG” (what you see is what you get) in terms of limited framing and limited resolution.  With EPIC, I believe David’s technique of a 20% look-around is something filmmakers should consider on all projects.  The ability to take advantage of ample look-around space becomes a key component in reframing and stabilization (techniques being adopted by more filmmakers and more departments), but this padding also allows for a much better transition to varying aspect ratios in different deliveries.  For example, GDT was photographed in 5K 2:1 and the theatrical release aperture is 2.40:1.  But with the 20% padding, the same plate was used without the 2.40:1 matte, which made it tall enough to be used in the 1.78:1 versions that are required for different broadcast deliverables.  This means the film did not have to go through the typical 1.78:1 blow-up in order to fit 16:9 correctly.  The result is a visible improvement in image quality for broadcast deliverables, which is increased further by shooting 5K for 4K.
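The extraction math works out cleanly. In the sketch below, the 5K sensor dimensions and the aspect ratios come from this post, while the 4096-wide extraction target is my assumption for illustration:

```python
# Worked example of the 20% look-around / center-extraction math.
# Sensor size and aspect ratios are from the post; the 4096-wide 4K
# extraction is an assumed target for illustration.

CAPTURE_W, CAPTURE_H = 5120, 2560   # EPIC recording 5K at 2:1

extract_w = 4096                    # assumed 4K center extraction width
pad = 1 - extract_w / CAPTURE_W     # look-around remaining around the frame
print(f"look-around pad: {pad:.0%}")

for name, ar in (("theatrical", 2.40), ("broadcast", 16 / 9)):
    h = round(extract_w / ar)
    fits = h <= CAPTURE_H           # tall enough to extract without a blow-up?
    print(f"{name} {ar:.2f}:1 -> {extract_w}x{h} (fits in capture: {fits})")
```

Under these assumptions, both the 2.40:1 theatrical extraction and the taller 16:9 broadcast extraction fit inside the 5K 2:1 capture, which is exactly why no blow-up was needed.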

On top of this, by doing a center extraction, there turned out to be (what I consider) an accidental benefit: by shooting 5K with a 4K center extraction (or “5K for 4K”), images shot on the EPIC undergo a subtle texture change.  The texture becomes a bit smoother because the Bayer-pattern pixels are not scaled down from 5K to 4K; they are instead cropped to 4K.  While still clearly boasting a 4K feel, the subtle difference between a scale and a crop presented what I consider an aesthetic benefit that came somewhat unexpectedly.

Below is the framing chart for the EPIC on GDT that was created by AE Tyler Nelson.  Because EPIC’s native resolution is so significant, one can create a custom extraction that suits the project.  There are numerous technical benefits, such as enhanced look-around, lens focal lengths re-lining up, older or wider lenses not vignetting, etc.  But the slight difference in image texture due to the crop is the one that I think many people will like.  There are many ways to get the image texture of any camera to be different based on optics, but when that option isn’t available or ideal, consider this technique, which is responsible for the pixel texture you will see in 4K and 2K projections of the film.



Sometime during the long nights of delivering the film, I was passing my server room on my way to deliver a shoebox drive to the vault.  At one point I stopped, and something hit me I hadn’t realized before.  I was staring at all the blinking lights in the doorway of the server room (as most people know, blinking lights represent drive read and write access: the more blinking, the more disk activity), and it struck me how small and powerful the overall arsenal of tools has become in order to produce the type of content we’re producing these days.  In other words, what struck me as remarkable is that the hardware infrastructure required to move, manage and manipulate all that 4K data was manifested right in front of me in those tiny blinking lights.  Like a bulldog, this small array of technology was all it takes to push movie after movie out, 2D, 3D and 4K, version after version after version.  I was recently at a post production facility auction and watched tens of millions of dollars of 5-10-year-old infrastructure go for a fraction of the price.  After seeing powerful and popular equipment literally given away at the auction, it was clear that the tools that lost all of their value were tools that performed a single task.  This small array of computers is exactly where the name “Light Iron” comes from: the blending of both light and big-iron systems together to stay nimble, remain efficient and manage the simplest and most complex tasks respectively.

With little exaggeration, the GDT DI required the use of just a few main components pictured below: 2x Quantel Pablos, 2x DVS Clipsters, 2x 12-core Mac Pros and a few dozen terabytes of storage optimized for multi-stream 4K playback.  This is not the only way to do a 4K DI, but my advice to people exploring 4K DI is to make investments in systems that perform dozens of tasks and lower the reliance on tools that are powerful but specific to a single job.  Most super-computer systems people can buy today are capable of numerous tasks and cost less than single-task systems did 5-10 years ago.  Our infrastructure is a good example of one way to get the job done, which is why we started with a single set of this gear combination and continued to duplicate the tools as the company grew.



About midway through the DI of GDT, I went into the theater to talk to Ian, and he told me to sit down as he wanted to show me something.  He then pulled up some sections of the film which had only recently received their first color pass.  He told me to watch the scene play and pay close attention to the skin tones.  “There’s that term again,” I thought, “skin tones…”  Skin tones is a phrase I hear thrown around all over the place (sort of like workflow, which I’m also sick of hearing), but I find it has become the latest flagship criticism for what makes a poor digital camera image.  In the past there have been numerous (what I call) “flagship criticisms” of digital cameras, such as incorrect frame rate, low resolution, shutter type, deep depth of field, weak dynamic range, limited sensor technology, and so on.  Today’s flavor of digital criticism just happens to be skin tone, and tomorrow it will be something else.  You watch the goal posts move…

Anyway, Ian has always been able to get good skin tones out of numerous cameras, but this was something different.  Much of what one can get in good skin tones does, in fact, start with the camera.  Greater bit depth and more resolution are certainly going to help, but it also comes down to the exact range in which the skin is initially exposed.  This critical range, perhaps just at or over key, enables an image with massive bit depth to undergo significant and more precise separation.  This pushing and pulling of the image at the perfect exposure in the exact area of skin allows a DI artist to really find a way to reveal what is in the skin.  Ian said to me, “Magazines have convinced a lot of people that good skin tone is about concealing detail…sometimes to the point of a blatant blur.  But beauty in faces shouldn’t be about concealing, rather revealing.”  Ian went on, “When I work on a film, I challenge concealment (in a sense) by attempting to reveal everything in detail.”  Ian doesn’t mean he wants to show off wrinkles or scars, but the more you see of someone’s true face, the more their face can be read, and thus the more realistic, or perhaps better, they look.

So Ian then played a few scenes that demonstrate this very well.  In reel 8 (about 2 hours into GDT) there are some good examples of this technique in a few extreme closeups.  These shots are truly full-screen faces on their sides, and Ian spent a lot of time massaging this sequence to pull as much color and separation as he could.  Ian said to me, “When you look at your hand, you will see the real nuances of what makes up true skin tone.  Human skin has yellow, red, green, blue, brown and subtle colors in-between.  I worked hard to isolate this outer beauty, brought out as much of these subtleties as I could, and let the millions of colors in their faces reveal exactly who they are.”

Doing this isn’t as simple as an actor having good skin.  In DI, we need to have a color pipeline, a color tool and color talent designed to identify, manage and preserve this level of color separation.  For more than a decade, people have been using film emulations and lookup tables to act as transforms of digital images into film.  But what we have been able to observe is that if you look at things through a film LUT, you are blending some colors together.  So the dimension of some digital subtleties may go into a film LUT and come out the other end as a single color.  This doesn’t mean you couldn’t get good skin tones until now, but we believe it does mean today’s level of precision has improved over film and that the bar, once again, is raised.  This is one of the best reasons to let files behave natively instead of filtering them through a film LUT transform.  GDT is one example of Ian’s work that followed this design, and complex, controlled and revealing skin tones are a direct result.


GDT is a film that is truly taking advantage of the times we live in.  While GDT is not at all the first 4K film, it is the first 4K film to be seen by mass audiences, thanks to very recent developments from a number of agencies and technologies worldwide.  The exact numbers are difficult to quantify, partially because they are changing so much and partially because they are managed by numerous companies.  But the second half of 2011 showed a tremendous leap in the long-promised 4K digital cinema rollout, and GDT happens to be perfectly timed to take advantage of the progress.  Some estimations suggest 60% of screens are screening GDT digitally worldwide.  That could mean people on average have a more likely chance of seeing it digitally than on a print, even in smaller cities.  There have been publications predicting that the 2012 summer rollout may reach its target of 75% digital conversion in North America by the middle of next year.  This is some of the best news for cinema in general and is going to do for theaters what HD did for television in 2004.

SONY produced GDT and, as a technology company, made the right choice in preserving this film for the future.  When a project is green-lit, not everyone is thinking about 4K and the way digital films will look in the future.  That’s why shooting 4K is still a relatively new concept.  And it is clear from watching the industry that some people are realizing the impact of 4K in capture and others are clearly not.  But once people commit to shooting in 4K, the next phase is to convince people to do the VFX in 4K, which is difficult and often the main barrier to 4K finishing.  The next phase is to master everything in 4K, which is rare but happening more and more.  The last and most difficult phase is to take the entire package and distribute it in 4K on both DCPs and 4K prints.  
Alongside the filmmakers, SONY seemed to clearly recognize the importance of this need, and has been pushing 4K on a lot of its Columbia films slated for 2012 release, including The Amazing Spiderman. SONY Professional has reportedly sold approximately 17,000 4K projectors, which are being installed this year and next.  Of 45,000 screens in North America, it is possible that over 1/3rd of them will be 4K by the time SONY is done with the installations in 2012.

What all of that means to me is that the creativity behind the crafting of these masterpieces can finally be seen by audiences without excuses.  Each of us as filmmakers, whatever our department, needs to consider the impact that 4K digital cinema has on what we do for a living.  For many, 4K has become a criticism because of how it impacts the process.  But I believe a serious impact like 4K gives us the opportunity to measure ourselves and therefore find motivation in how we change instead of asking whether we should change.  4K should change how we use makeup.  4K should change how we dress a set.  4K should change how we perform, direct, shoot, edit, affect and manipulate, because we are no longer able to hide behind the imperfections of an exhibition format long overdue for extinction.  If there is a criticism that “digital shows all,” then I am totally for it.  If the next iteration of your craft reveals more, then learn how to use it.  I am a firm believer that it is unwise to change something for the sake of change.  But I strongly believe in radical change when the change is unquestionably superior.  Digital 4K capture, 4K effecting and 4K distribution is the 1-2-3 punch that movies have desperately needed ever since digital tools stepped into the ring.  Fifteen years since the early experimentation of digital intermediate, we finally have the tools in place to do something that motion picture film was never able to do before:

Until now, audiences were given copies of copies of copies in order to see a movie.  A four-point variation in color balance per channel and per reel was considered acceptable.  Color balance, softening, distortion over time, high-speed printing and lower-cost release stocks all contributed to making the mass distribution of great films a second-rate version of the source at best.

Today, for the first time ever on this scale, thanks to more than 5 years of infrastructure and development of end-to-end 4K, mass audiences will see pictures from The Girl with the Dragon Tattoo looking as good as they did to the filmmakers who created them.

It’s about time.