Source: Post Magazine

Jeff Sengpiehl recently joined Light Iron as the VP of Engineering, responsible for the design, management, and security of technical systems and software across the company’s six U.S. locations.  Previously, Jeff was Chief Engineer at Chainsaw and Broadcast Operations Manager for ABC Television.  Here, Sengpiehl speaks with Post about the studio’s high-end storage needs.

Describe the various ways Light Iron needs storage.

We have various tiers of storage based on speed, sharing needs, and price points.  For our finishing services, we need ultra-high speed storage that is shared via an ultra-high speed network.  For offline, we need high-speed storage that is shared via a high-speed network.  These both come with high price points.

For dailies, we now need decent-speed storage that is shared with maybe one or two stations, which costs a decent amount.

For archiving, we need to be able to access all of the above at a fast speed, so that we can free that drive’s space for new material quickly, or move media from SAN to NAS.  It becomes a storage price problem, and it costs a lot because it’s crossing storage platforms.
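
As a back-of-the-envelope illustration of why these tiers sit on different networks and at different price points, here is a quick throughput calculation.  The formats and rates are assumptions chosen for illustration (uncompressed 4K DPX for finishing, ProRes Proxy for offline), not Light Iron’s actual specifications:

```python
# Rough sustained-throughput estimates for the storage tiers described
# above. All formats and rates are illustrative assumptions.

def mb_per_sec(width, height, bytes_per_pixel, fps):
    """Sustained data rate in MB/s for one uncompressed image sequence."""
    return width * height * bytes_per_pixel * fps / 1e6

# Finishing: uncompressed 4K 10-bit RGB DPX (10-bit packs into 4 bytes/pixel)
finishing = mb_per_sec(4096, 2160, 4, 24)

# Offline: compressed proxies, e.g. ProRes Proxy 1080p at roughly 45 Mb/s
offline = 45 / 8  # megabits/s -> MB/s

print(f"Finishing (4K DPX, one stream):     ~{finishing:,.0f} MB/s")
print(f"Offline (ProRes Proxy, one stream): ~{offline:,.1f} MB/s")
# The gap of two orders of magnitude between tiers is why they live on
# different networks and at very different price points.
```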

So you use both NAS and SAN?

Yes.  Both can be dialed in for various speeds and quality-of-service levels to meet production needs.

Given Light Iron works on both feature and episodic projects, how do your storage needs compare or contrast across both formats?

Episodic work is now happening at the frame sizes and codecs that used to be only the domain of features.  The difference is really the life cycle of the media.  Features exist in totality longer, while episodic shows are a checkerboard of cycling “featurettes”.

For episodic streaming projects, are there differing storage requirements?

Yes, episodic streaming generally means larger frame sizes for delivery than traditional broadcast episodic, so it’s not possible to hold storage needs at HD levels by resizing material to 1080p.  Its entire life cycle lives at UHD minimum.
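
To make that concrete, here is a rough per-episode storage comparison, assuming approximate ProRes 422 HQ mezzanine bit rates.  The figures are illustrative assumptions, not delivery specs from the interview:

```python
# Rough per-episode storage footprint for HD vs. UHD mezzanine files.
# Bit rates are approximate ProRes 422 HQ figures at ~24 fps, assumed
# for illustration only.

GB = 1e9
episode_minutes = 45

rates_mbps = {
    "HD 1080p":  176,   # ~ProRes 422 HQ, 1920x1080
    "UHD 2160p": 707,   # ~ProRes 422 HQ, 3840x2160 (about 4x the pixels)
}

for fmt, mbps in rates_mbps.items():
    # Mb/s -> MB/s, times running time in seconds, converted to GB
    total_gb = mbps / 8 * episode_minutes * 60 * 1e6 / GB
    print(f"{fmt}: ~{total_gb:,.0f} GB per {episode_minutes}-minute episode")
```

The UHD episode comes out roughly four times larger than its HD equivalent, and that multiplier follows the media through every stage of its life cycle.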

How does cloud storage fit into the mix?

Cloud storage is a very inexpensive, easy-to-share, but slow-to-access tier.  It is the most flexible storage available, but the pipes to that storage make it inadequate as even a near-line tier at this moment in technology and networking.  Cloud storage is excellent for disaster recovery, business continuity, and leveraging certain cloud processing and AI services.  But it’s a giant bathtub you don’t own, and you can’t drain or fill it without the pipes to service your facility.
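
As one example of the disaster-recovery use mentioned here, below is a minimal sketch of pushing an archive copy to cloud object storage with boto3 and Amazon S3.  The bucket name and paths are hypothetical; the interview does not specify Light Iron’s cloud setup:

```python
# Minimal sketch of a disaster-recovery push to cloud object storage
# using boto3 and Amazon S3. Bucket name and local path are hypothetical.
import pathlib

import boto3

s3 = boto3.client("s3")
BUCKET = "example-dr-archive"  # hypothetical DR bucket

def push_to_dr(local_dir: str) -> None:
    """Upload every file under local_dir to the DR bucket, keyed by relative path."""
    root = pathlib.Path(local_dir)
    for path in root.rglob("*"):
        if path.is_file():
            key = str(path.relative_to(root))
            s3.upload_file(str(path), BUCKET, key)
            print(f"uploaded {key}")

# push_to_dr("/mnt/nas/show_archive")  # hypothetical NAS mount point
```

Even with a sketch this simple, the bottleneck in practice is the facility’s pipes, not the code: the upload only moves as fast as the network between the building and the cloud.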

How do you evaluate storage technologies and vendors?

Carefully!  There’s hard-won experience by trial and error, hopefully without affecting continued work.  On occasion you’ll get a greenfield build, with time and space to test something new and different.  Sometimes the specifications from the manufacturer sound like the best thing ever, but in grinding, day after day, disk-blasting use, they just don’t cut it.  The ultra-high speed tiers in dedicated architectures are the most difficult to try out.  Either you can’t stop long enough to plug them in, or you put them in, they work beautifully, and then you discover you’re completely dependent and you can’t live without them.

How might storage evolve for production and post in the future?

As the price of SSD and Flash storage drops, and size and reliability grow, the footprint, power use, and cooling needs drop off dramatically.  Petabytes in the space of a briefcase may not be too far off.
