
Review: Blackmagic Pocket 6K

The new Super35 6K Pocket Cinema Camera from Blackmagic was delivered into a few lucky people's hands last week, and my friends at Stray Angel Films were among them. They gave me a week with it, shooting some material for my artist friend Chase Lock around his new gallery show at Olympia Club in Santa Monica, and I've put together a sort of review/overview for your edutainment. I haven't used the Pocket 4K before, so these are wholly "first impressions" with minimal comparison, as the cameras are functionally identical anyway. There's a shared user manual you can check out here, and you can see the footage in the video embedded at the end of this writeup.



To start, the camera is quite easy to use. The button layout and menus are intuitive, and I never found myself "searching" for features. At times the touch screen on the back could be "too simple," where I'd out-think myself ("Where'd that go again? Oh, there's another page!"), but once that happens the first time you don't forget. While recording (or not) you can change ISO, White Balance, Shutter Speed, F-Stop, trigger the one-shot AF and AE, or even take a 21MP DNG still image. There are redundancies, so if you're more of a touch screen person than a button-pusher, you've largely got those options. There's also a "Slate" feature you can access by swiping left or right on the touchscreen, where you've got lens and filter data, Reel/Scene/Take, a "good take" toggle, shot type, and Int/Ext and Day/Night toggles. There's also an overall "Project" metadata section where you can put in Project Name, Director, Cam Op, and Camera Designation (A, B, etc.). That was fun to find.

When I pulled the camera out of the box, it was already set to 6K BRAW, 8:1 compression, 24fps, a 180° shutter, and ISO 400. Exactly where I was going to put it. I threw in a 256GB CFast card and my Sigma 18-35mm and was ready to roll. A nice feature I noticed on the Pocket 6K was a sort of "dummy-proofing" of the card format feature, where you have to hold a button down through a 3-second countdown. Said feature is easy to find: simply tap the UI where it shows your card/time remaining. You can format to exFAT or OS X Extended, depending on how you roll, and can use either a CFast card or an SD card. If you go the SD route, you'll want one as fast as you can afford, or the camera may not be able to keep up with the data rate of your footage. The safe bet is to stick with CFast or get a USB-C SSD to record to (which honestly might be cheaper). The Samsung T5 is a popular choice, but I believe most drives will work, such as my beloved GDrive Mobile SSD.



You can record to BRAW or ProRes, but some formats are restricted to one or the other. Essentially, if you're shooting in the 6K neighborhood you're in BRAW; if you're shooting 4K or under you're in ProRes land. From my experimentation, the file sizes are basically the same, so you might as well shoot BRAW. Surprisingly, my PC had absolutely zero issues editing the 6K raw files. Not a single glitch, hiccup, or freeze. However, you do have to edit said raw files in Resolve (which you get for free when you buy a Blackmagic camera). It is possible to edit in Premiere, but that requires a third-party paid plugin. Which sucks. Depending on your situation, ProRes might be the move. For some of my Filmtools reviews, I just shot 4K ProRes because I a) wanted to edit in Premiere and b) didn't need the color-correction latitude that raw provides.

As for coloring the BRAW files, I found them incredibly easy to work with and very flexible. There's a surprising amount of data in those raw files, enough that I just had to make sure the histogram wasn't pinned in either direction and I was good to go. The sky never blew out and the shadows never went black. Even at night, I was shocked at how well the camera handled (which I'll get to in a moment).

Working with the camera, I just had a simple half-cage with a top handle, complete with 15mm rod support, and was using the Steadicam AIR for support (which will be reviewed in a later article). I didn't find myself wanting for much else, but an external monitor or eyepiece would have been nice: the screen on the back is highly reflective and can't be repositioned, which makes checking focus or framing up your shot difficult if you're not directly behind the camera. Pulling focus was easy enough by hand, but in a more professional shooting situation I would have hooked up my MicroRemote follow focus.



As for the rails, I used them as a grip, to get some gentle purchase on the focus ring while moving, and to hold the battery plate hidden back there.

While I was shooting I didn't find myself asking any "dumb questions" like "where is Feature X" or "how do you…", which I believe speaks to the intuitive nature of the touchscreen. However, I did find myself asking "why does the AF suck so much!?" I would hit the AF button, and the hunt would begin. And then it would catch focus aaannnddd lose it again. And then give up. The camera doesn't have too many downsides that I could find, but the autofocus is truly abysmal. Oh well. Auto exposure seemed to do its job, but I didn't use it.

While I opted to use a cinema battery (D-Tap to 12V) which lasted me all day, I did run a test to see if the reported poor battery life of the Pocket 4K had carried over. It has.

I turned on the camera and started recording to a freshly formatted 256GB CFast card, got the promised 43 minutes of 8:1 raw stuffed in there, and the camera died 7 minutes later. Assuming you'll have the camera on but not rolling for longer than that, I highly recommend either investing in tons of LP-E6 batteries or simply getting a nice big 150Wh cinema battery with a D-Tap port. I had mine on a plate attached to the camera (giving it a bit of extra heft, which was a nice addition to the resulting footage), but you could easily chuck it in a backpack or side bag if you wanted. Go crazy.
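For planning purposes, that works out to roughly 50 usable minutes per LP-E6 (43 of recording plus the 7 it hung on afterward). A quick napkin-math sketch, using my test numbers rather than any official spec, for how many batteries a day would eat if the camera stays powered:

    import math

    MINUTES_PER_LPE6 = 43 + 7  # recording time + leftover standby, from my test

    def batteries_needed(shoot_hours):
        """Whole LP-E6 batteries burned if the camera is on all day."""
        return math.ceil(shoot_hours * 60 / MINUTES_PER_LPE6)

    print(batteries_needed(8))  # 10, hence the big cinema battery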

Another issue that will require a proverbial tissue is the lack of an IR cut filter in the body. While testing the Atlas Orion EF Anamorphics, we noticed a horrendous amount of IR interference when using our ND 1.8. To show how this affects the 6K (and every Blackmagic camera not named "Ursa Mini Pro"), we made a video running through a few strengths of ND/IRND/WSNDs on the Pocket 6K, Arri Amira, and C100mkII. The aforementioned tissue you'll need is either an IRND of some kind (different ones perform better or worse; that's a test for a different day) or a dedicated IR cut filter that you'll combine with whatever ND you're using.
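Incidentally, if ND numbers like 1.8 don't immediately read as stops to you, the conversion is simple: optical density is the base-10 log of the light reduction, and every 0.3 of density is one stop. A quick converter:

    def nd_to_stops(density):
        """Each 0.3 of optical density cuts one stop (a 2x reduction)."""
        return density / 0.3

    def transmission(density):
        """Fraction of light the filter passes: 10^-density."""
        return 10 ** -density

    print(nd_to_stops(1.8))   # 6.0 stops
    print(transmission(1.8))  # ~0.016, about 1/64th of the light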

 

Aside from the infrared pollution (or perhaps including it), I found the images and colors I got from the Pocket 6K to be very pleasing, although perhaps not entirely "accurate." I say this with as little emphasis as possible, as I didn't find it "bad" at all and rather enjoy the "stock" look of the Pocket 6K, but you may need to do a bit of color correction if you hit a situation where color accuracy is paramount; in my case, the paintings I was filming. Creatively (insofar as the video is concerned) it didn't necessarily matter, but the artist noticed immediately, and his point of "it should look like the paintings look" outweighed my counter-point of "but it's pretty already!" Lesson learned. It's also just a matter of taking care when you color your images; I was rushing. As for other imaging issues, I did notice a touch of rolling shutter when panning around quickly, but nothing to write home about. There can also be a bit of moire in certain situations that an OLPF might take care of, but I didn't notice it often. The one time I'm thinking of was on a canvas that was side-lit.

The really impressive feature of the 6K was the low-light performance. Something everyone begs for but that rarely gets delivered. At night we went into Downtown Los Angeles and filmed some extra stuff for the promo, and at ISO 3200 I was astounded at how clean and legible the picture was. There's some noise, but honestly it's not fixed-pattern (so it looks "filmic"), and if you're not looking for it you really don't see it. The camera goes up to ISO 25,600, but I didn't find the need to go above 3200. Perhaps if I were filming in ProRes, but in raw I had plenty of information.


The main thing here that I think people should key in on is that this camera isn't a wholly different beast from the 4K, but it is a direct upgrade. The Super35 sensor is essentially a film industry standard, where M43 is not, and the EF mount is just as ubiquitous. A Speedbooster on the 4K may get you an extra stop of exposure, but I can't say this camera needs it, and using one means you've invested in EF glass anyway. Plus, you're putting more glass between your lens and your sensor, which could degrade your image depending on its quality. By going for the S35/EF standard, Blackmagic has brought the Pocket Cinema line into a weird place where it's not really a "cinema" camera per se, but it sure smells like one. As an example, there are very few budget cameras out there that have an anamorphic mode. The 6K does. With the aforementioned Atlas Orion lenses, you've got a very attractive package for very little relative cost if you're looking to shoot your next project anamorphic. At $8,000 per lens, the Orions are vastly cheaper than their brethren and can also be re-mounted to fit PL or even E/M43 (even though your camera probably wouldn't have a 4:3 mode to use them with). That being said, if you already own a Pocket 4K, I don't know that you need to rush out and upgrade right now. The 4K is still a fantastic little camera. However, if you're in the market for either/or, there's no contest: just get the 6K.

If you just want to go the usual route of spherical lenses and want to know the general cost of ownership (which is essentially the same as the 4K), I calculated it out to be around $4,500:

                  BMPCC6K    BMPCC4K
Camera Body       2500       1330
Sigma 18-35mm     700        700
Speedbooster      n/a        650
Cage w/ Rails     464        464
77mm IRND         121        121
XLR Mini Cable    24         24
DTAP Cable        35         35
Samsung T5        164        164
Battery Plate     155        155
Steadicam AIR     400        400
TOTAL             4563       4043
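If you already own some of this gear, or want to swap pieces, the totals are easy to recompute; here's a quick sketch with the numbers above:

    kit = {                       # (BMPCC6K, BMPCC4K) prices in USD
        "Camera Body":    (2500, 1330),
        "Sigma 18-35mm":  (700, 700),
        "Speedbooster":   (0, 650),   # the 6K's EF mount doesn't need one
        "Cage w/ Rails":  (464, 464),
        "77mm IRND":      (121, 121),
        "XLR Mini Cable": (24, 24),
        "DTAP Cable":     (35, 35),
        "Samsung T5":     (164, 164),
        "Battery Plate":  (155, 155),
        "Steadicam AIR":  (400, 400),
    }

    total_6k = sum(p6k for p6k, _ in kit.values())
    total_4k = sum(p4k for _, p4k in kit.values())
    print(f"BMPCC6K: ${total_6k}, BMPCC4K: ${total_4k}")  # $4563, $4043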

So while the Pocket 6K isn't necessarily a full-fledged cinema camera in terms of its chances on a backlot, and it has a few rough spots, it does create a really impressive image that rivals cameras 10 times its cost. I'm inclined to do some tests to see how it handles against the main contenders and where it falls apart.


Review: BenQ SW270C 27″ Photographer Monitor

The BenQ SW270C is a color-accurate monitor aimed at photographers and web-based filmmakers (sRGB types). BenQ sent it to me to review, so here's that, with some extra colorist stuff thrown in. In my quest for color accuracy, I've updated parts of my rig pretty regularly, and since I don't quite have the cash for a Flanders/Eizo or similar yet, I've had to do my best with what I've got. Monitors like the SW270C sort of bridge that gap, so I was thrilled when they let me keep it. Ethics statement aside, here are the specs:

Main Specifications

  • 27” 2560×1440
  • IPS, LED backlight
  • 300 nits, 1000:1
  • 5ms / 60Hz

Color Gamut

  • 99% AdobeRGB
  • 97% P3
  • 100% sRGB
  • 10-bit**

Color Modes

  • Adobe RGB / sRGB / B&W / Rec. 709 / DCI-P3 / Display P3 / M-book / HDR / DICOM / Calibration 1 / Calibration 2 / Calibration 3 / Custom 1 / Custom 2
  • HDR10
  • 5000K / 6500K / 9300K / User Mode
  • Gamma 1.6 – 2.6 & sRGB

Connections

  • HDMI (v2.0) x2
  • DisplayPort (v1.4)
  • USB 3.1 Downstream x 2, Mini USB x 1
  • USB 3.1 Upstream x 1
  • USB-C (PD60W, DP Alt mode, Data)

Extras

  • Headphone Jack
  • Anti-Glare
  • 16-bit LUTs
  • SD/SDHC/SDXC/MMC Reader


To start, the monitor comes in a VERY comprehensive and protective box with a calibration report for your specific monitor in a nice little sleeve, which you can check out at the bottom of the article.

The monitor itself has a thin bezel and looks almost identical to the PD2700U, which I like. It looks simple but has TONS of connections in the back, so it doubles as a USB 3.1 hub. Speaking of USB, you can power it and use it as a monitor over a single USB-C cable, like the Samsung I reviewed earlier. It also comes with the "G2 Hotkey Puck," which lets you scroll through the monitor's menus quickly and program 3 hotkeys to, for instance, swap your inputs.

If I had to describe the difference between 4K and 1440p (this monitor's resolution), I'd say it's "sharp vs. smooth." Images on the SW270C look clean, almost vivid in comparison to my other monitor(s), and I can't see any aliasing or distortion of any kind. The calibration report says it's 99-100% uniform across the panel, and I have no reason to disagree. On the PD2700U I seem to perceive more detail, but as it's 4K, that makes sense.

Games and movies look fantastic on it, which should come as no surprise, and I do seem to see more colors when outputting from Resolve via my Decklink than on my main monitor, which is great. The DP-out from my video card looks equally snappy. It's also capable of storing 16-bit LUTs in the monitor itself, which is generally a feature reserved for more expensive panels.

I should probably do a color-accuracy article, but basically a hardware LUT allows the monitor itself to do the correction instead of Windows/macOS or your video card. Good stuff.

At the beginning of the year, BenQ sent me their 4K PD2700U to review which was a huge upgrade from my 1080p Dell Ultrasharp at the time, but for whatever reason I trusted that the Rec709 and sRGB modes (which I’d use most often) were “as-advertised”, covering 100% of each spectrum. After calibration, the panel looked great so I had no complaints. It wasn’t a monitor designed for color-accurate work, but it seemed good enough with those features.

When I received the SW270C, with "Photographer Monitor" printed on the box, I finally had the good sense to pay attention to the report DisplayCAL kicked out when re-calibrating all three of my monitors (which includes the aforementioned Dell), and this is where I finally, somewhat embarrassingly, learned that "covers X% of Y gamut" isn't the same as "displays X% of Y gamut" in every scenario. To wit:

  • BenQ SW270C:
    99.55% sRGB
    86.4% P3
    96.65% Adobe-RGB
    ACCURACY_dE76_avg 0.131518
    ACCURACY_dE76_max 3.250698
    ACCURACY_dE76_rms 0.419139
    GAMUT_volume 1.43584093207
  • BenQ PD2700U:
    97.4% sRGB
    76.5% P3
    71.2% Adobe-RGB
    ACCURACY_dE76_avg 0.220723
    ACCURACY_dE76_max 2.498796
    ACCURACY_dE76_rms 0.486758
    GAMUT_volume 1.08339033946
  • Dell U2414H:
    84% sRGB
    62.7% P3
    60.4% Adobe-RGB
    ACCURACY_dE76_avg 0.148501
    ACCURACY_dE76_max 1.632329
    ACCURACY_dE76_rms 0.335529
    GAMUT_volume 0.885710444917

As we can see, the SW270C is actually accurate. Accurate enough for my use at this stage in my career as a freelancer who exclusively delivers to the web, anyway. As far as the Delta-E and RMS stuff… I'll admit I had to look that one up.
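The short version, for anyone else in that boat: dE76 is the straight-line (Euclidean) distance between two colors in CIELAB space, the color requested versus the color the probe actually measured, and an average under 1.0 is generally considered imperceptible. Here's a minimal sketch of how those avg/max/rms figures come about, using made-up patch measurements rather than DisplayCAL's actual internals:

    import math

    def delta_e76(lab_ref, lab_meas):
        """CIE76 color difference: Euclidean distance in L*a*b* space."""
        return math.sqrt(sum((r - m) ** 2 for r, m in zip(lab_ref, lab_meas)))

    # Hypothetical (requested, measured) L*a*b* pairs from a calibration pass.
    patches = [
        ((50.0, 20.0, -10.0), (50.1, 20.2, -10.1)),
        ((75.0, -5.0, 30.0), (74.8, -5.1, 30.4)),
        ((25.0, 0.0, 0.0), (25.0, 0.3, -0.2)),
    ]

    errors = [delta_e76(ref, meas) for ref, meas in patches]
    avg = sum(errors) / len(errors)
    rms = math.sqrt(sum(e * e for e in errors) / len(errors))
    print(f"avg {avg:.3f}  max {max(errors):.3f}  rms {rms:.3f}")

By that yardstick, both BenQ panels are doing very well on the patches DisplayCAL measured.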

Now, the slight variation in coverage-vs-advertised could easily be a screw-up on my part, or maybe I have to have a certain setting toggled to unlock some extra percentage points, but since I don't know what I don't know, I'll take 99.6%. For clarity, I adjusted my displays based on what DisplayCAL told me to do, so I was on the "Custom" color modes, having only adjusted the Red and Blue channels by one percentage point on the SW270C. Out of the box, I'd be more than comfortable using the various color modes that come standard on the display.

It's difficult to describe the visual difference, and a screenshot or photograph obviously wouldn't do it justice, but the best I can do is say that there's more "depth" in the image, almost like saturation. The jump from the Dell to the SW270C is pretty aggressive; between the PD2700U and the SW270C it's slightly more subtle, but still noticeable.

Using skin tones as our barometer, it's easier to see or even notice the variations in people's skin. For instance, splotchy skin seems more uniform on the Dell, which could lead you to not correct it out, leaving it to be discovered by the audience or (perish the thought) the client. I was going to show some examples and talk about something I've recently graded on this monitor, but I've been under a bunch of NDAs recently, so basically, I can't. Oh well.

If you’re the average photographer/filmmaker looking for a great, accurate monitor you’ve found one. Potential problems arise with your system’s calibration to that monitor, and that’s why you need something like an i1 Display Pro* and DisplayCAL (by my recommendation) to make sure you’re getting the best out of your system. DisplayCAL also allows you to calibrate your Decklink/Resolve specifically to your display via some IP address connection trickery that’s above my pay-grade.

As in any business, confidence is key, and having confidence that your image looks exactly how it's supposed to is no different. For $800 at the time of this writing, this monitor seems like a solid investment on your path to color perfection.

 

**ON 10-BIT and FRC

Now, something that I've learned through all this is that there's 10-bit and then there's "10-bit," which is 8-bit with some magic dithering-type behavior called Frame Rate Control to perceptively show more colors. This monitor, and many like it, is the latter. I was unaware of this technology, but according to BenQ, a "real" 10-bit panel would cost $1000+, so it's a cost-saving measure for the consumer. Per Wikipedia, "FRC is a form of temporal dithering which cycles between different color shades with each new frame to simulate an intermediate shade. This can create a potentially noticeable 30 Hz flicker. FRC tends to be most noticeable in darker tones, while dithering appears to make the individual pixels of the LCD visible. This method is similar in principle to the field-sequential color system by CBS and other sequential color methods such as used in Digital Light Processing (DLP). 8-bit TN+film panels with dithering are sometimes advertised as having "16.2 million colors". Some panels now render HDR10 content with an 8-bit panel using frame rate control." There's some extra reading you can do via Semantic Scholar if you want to know more.
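To make the dithering concrete, here's a toy simulation of the general technique (my own sketch, not BenQ's actual algorithm): each 10-bit level maps to two adjacent 8-bit levels, and the panel cycles between them so the average over a few frames lands on the in-between shade:

    def frc_frames(value_10bit, num_frames=8):
        """Temporal dithering: approximate a 10-bit level on an 8-bit panel."""
        base, remainder = divmod(value_10bit, 4)  # four 10-bit steps per 8-bit step
        return [min(base + (1 if i % 4 < remainder else 0), 255)
                for i in range(num_frames)]

    frames = frc_frames(514)  # a 10-bit shade between 8-bit levels 128 and 129
    print(frames)                         # [129, 129, 128, 128, 129, 129, 128, 128]
    print(sum(frames) / len(frames) * 4)  # time-average in 10-bit terms: 514.0

That cycling is where the potential flicker comes from, and per the quote above it's most noticeable in darker tones.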

While I can't yet speak to the functional difference, as I don't have a true 10-bit monitor to compare against, I can say that I don't see the "30 Hz flicker" Wikipedia refers to, nor can I see any downside to this method (and I do mean "see"). The PD2700U also uses this method of 10-bit trickery. This has sent me down a new path of discovery, so I'll have updates for you as they come. Exciting times!


An interview with “This is America” editor Ernie Gilbert

Recently, the music video for Childish Gambino’s This is America won one of two Grand Prix prizes in the Entertainment for Music category at Cannes. Jury president Paulette Long described the video as a cultural phenomenon, saying that “Every so often a video comes out and it points a finger and makes us admit that we need to do something different. When I first saw it, I was shocked, I was stunned and I thought it was brilliant.”

As the video was cut in Premiere, Adobe gave me the opportunity to talk with the editor of This is America, Ernie Gilbert, who has also worked with Donald Glover and director Hiro Murai on the show Atlanta.

Per Ernie's site, he has edited music videos for the likes of John Legend, Trippie Redd, 2 Chainz, ScHoolboy Q, Linkin Park, Portugal. The Man, Shawn Mendes, OneRepublic, Death Cab for Cutie, and many others. Combined, these videos have over 1 billion views on YouTube.

In television, Ernie has assistant edited on Emmy, Golden Globe, and Peabody award-winning shows like Baskets (FX), Atlanta (FX), and Barry (HBO). Recently he's had the pleasure of cutting the Amazon pilot for the reboot of the BAFTA award-winning show People Just Do Nothing. His edit of the Drew Michael HBO comedy special was described as "…the Most Polarizing Comedy Special of the Year" by the New York Times. He's currently editing an unannounced HBO project.

In commercials, Ernie has worked with some of the largest ad agencies in the world and brands like Jordan, DIRECTV, Banana Republic, Reebok, American Eagle, and Fox Sports. His work has premiered during the Daytona 500 and been seen in Times Square.

In 2019, Ernie will be making his narrative directorial debut in the form of two short films titled Nine Minutes (Constance Wu, Reggie Watts) and Easy 8 (Byron Bowers, David Rysdahl).


KENNY: YOU’RE A PRETTY MULTI-FACETED GUY, AND THESE DAYS YOU KIND OF HAVE TO BE A LITTLE BIT OF EVERYTHING, BUT DO YOU MAINLY SEE YOURSELF AS AN EDITOR OR DIRECTOR?

ERNIE: That's a really good question. I've straddled that line now for like a decade [laughs]. Like, straight out of college I made my money directing and editing music videos, based out of North Carolina where my rent was $300 a month. It was out of necessity. I couldn't afford an editor on a $5,000 music video if I wanted to be able to live, so I cut everything myself and really grew to appreciate how much happens in the edit and how much is made in the edit, and I think I've kind of been straddling that line since. I've directed a couple of shorts, I have one coming out early next month that stars Constance Wu from Crazy Rich Asians called Nine Minutes… but I love editing. It's how I've paid most of my bills since moving to LA back in 2012.

It’s probably the least sexy of the two but there’s so many opportunities. The biggest secret to editing is it just takes time: the time to get perspective on a cut, the time to dig through all your options and try things and make mistakes… so what ends up happening is everybody wants to go shoot the latest music video, or everybody wants to direct and be in charge creatively, and that kind of leaves a void in a lot of different circles where if you know how to edit and get on ten people’s list as somebody they like to work with you’re never going to not have work. The fun side effect for me, as somebody who wants to direct and tell my own stories, is that I get to sit in the room with the showrunner, or with the creator, or with the director, and be a part of that conversation and be a part of those creative choices. If I was like, a Camera PA I would work on a show like Atlanta for 40 days, I would never interact with anyone creatively outside of maybe my department head, and then I have to find another job. Whereas being an editor, or an AE, I’m on a show like Atlanta for six months and I’m in all those conversations, I’m watching all the rough cuts and watching all the dailies… I find that to be creatively fulfilling.

SO YOU GOT YOUR START IN NORTH CAROLINA SHOOTING TOUR VIDEOS FOR BANDS, WHAT DID THAT LOOK LIKE?

Yeah, I was at the University of North Carolina Chapel Hill. We had a TV show called Music Seen, and it was just live concerts; bands would come through town and we would shoot four or five MiniDV cameras, edit it together, maybe do an interview with the band, and then air it on our student TV station (YouTube wasn't a thing yet). This was when I was in college, '05 to '09, so this was a time when not every band had access to gear. Like, there were no good camera phones (the iPhone now shoots better quality than what I could have shot back then), but because of that, video was still kind of special, ya know, it wasn't as accessible. I found very quickly that bands wanted that content, and it was just kind of the next gradual step, like "okay, we filmed this live band, what else can we do with that?"

I actually did two national tours, one with a band called Sullivan on Tooth & Nail Records, and I ended up making a feature-length documentary about them. In the process of making it they broke up, which made for a good documentary, where we got to see their struggle living on like $5 per diem and sleeping in Walmart parking lots… I got deathly ill that tour with like the flu or something, but it was amazing!

I got to tour with another band called Bayside, who were on Victory Records at the time, basically running camera during the day while they were doing stuff, filming their concert in the evenings, and then going back to the tour bus, loading up a couple of FireWire 800 drives on a little table, and trying to cut together content that they could share from the road. Ultimately those things got pressed onto DVDs and special edition CDs, back when people were still buying that stuff.

SPINNING DRIVES MUST HAVE SUCKED ON THE ROAD, BUT THAT SOUNDS AWESOME WHAT WAS YOUR WORKFLOW LIKE?

I was with them, shooting and editing, dumping P2 cards as the bus bounced down the road, hoping that my hard drives wouldn't crash…

YOU WERE THE ONLY ONE?

Yeah, it was out of necessity. Like, I think for that Bayside doc I got paid $500, and at the time it was the "most money I'd ever heard of!" but in hindsight they got a really good deal [laughs].

AND YOU WERE ON PREMIERE THEN?

Actually, at that point it was still Final Cut 7, which is what I learned in high school. I made the switch to Premiere around CS6, back in 2012. I saw the writing on the wall: Final Cut X came out, didn't have what we wanted, and I found that with Premiere I could set my keyboard shortcuts to "Final Cut" and I was basically cutting within a day. From there I signed up for Creative Cloud when that came out and I've been using it as my main NLE since.

VAGUELY ON TOPIC, WITH YOUTUBE BEING THE MAIN SOURCE OF VIDEO NOWADAYS I SEEM TO NOTICE A TREND WHERE SO-AND-SO DOES SOMETHING FANCY, OR A NEW PRODUCT COMES OUT, AND EVERYONE TAKES THAT AS THE THING TO DO OR BEAT. THERE APPEARS TO BE A LOT OF… I DON'T WANT TO SAY PLAGIARISM, BUT JUST A LOT OF THE SAME THING; PEOPLE JUMPING ONTO TRENDS AND THAT'S THE GOAL INSTEAD OF MAKING NEW STUFF. DO YOU HAVE ANY OPINIONS THERE? DO YOU THINK THERE'S A SIGNAL-TO-NOISE RATIO PROBLEM, OR IS MORE BETTER?

Personally I’m a big fan of the democratization of filmmaking. I think it’s really cool that like, if you have Creative Cloud you have the same tools that Emmy award-winning TV shows or feature films have, you know? You have that tool set that you can fit in your living room in North Carolina and you can learn those skills. I think that’s really cool that you don’t have to like, buy a $30,000 AVID or whatever they used to cost.

As far as things being duplicated or ripped off, I mean, I think that's just kind of the creative struggle amplified, right? We want to be able to reference things, we want to be inspired by things as creators. I know in "music video land" something that's changed since I started is treatments have gotten very photo-centric. There's a couple of famous Spike Jonze treatments where he just wrote three paragraphs on a typewriter and sent it off to The Pharcyde or whoever, but now a treatment is 14 pages of reference images and all these things… these days people want to know what they're getting before they get it, but I think that can be kind of restricting because… what if I have an idea for something that hasn't been made yet and now I have to present reference images of it? That just means I'm presenting reference images of things that have already been made, and now all of a sudden this thing has shifted into something similar to what's already been made.

I also think, too, like, culturally and creatively we like seeing something cool and want to do it ourselves, and now the tools are accessible to everybody… I feel like when the DSLR revolution happened everybody was like "okay, I've got to shoot everything wide open," the gimbals came out and everybody's like "now everything's steady," the drones came out and everybody's got drone shots in everything… we like seeing stuff and then being able to do it ourselves. I think getting all those tools in new hands is really cool for the industry in the long term, because it means that you can have a kid winning Sundance at 20 or whatever. You can have new voices that otherwise may not have been able to tell the story they wanted to tell because they don't have access. I think it just goes to show that you kind of have to up your game a little bit, you know? You can't just coast by on good production value. You've got to have something to say, you need to have a perspective or a unique vision. You can't just go "okay, it looks good, now it's out there."

WE’LL GET TO THIS IS AMERICA IN A SECOND, BUT IN TERMS OF EDITING I SAW YOU ALSO DID JUICE BY LIZZO WHICH INVOLVED A LOT MORE MODERN EFFECTS-TYPE STUFF AND OBVIOUSLY ISN’T DONE IN A COUPLE TAKES. WHAT WAS THAT PROCESS LIKE, WERE YOU DOING ALL OF THAT YOURSELF OR DID YOU PASS IT OFF TO SOMEONE?

Both. I like to do as much as I can in the offline edit and temp those sorts of things out. I find the best workflow practice for me is "let's get it as close to final as we can in the offline," because with a music video, your client at the end of the day is the artist. You're technically working for the label, and the management company's going to weigh in with their opinions, but at the end of the day, if the artist doesn't like it, you're done. So with me, anytime there's a creative intent from one of my directors that we're trying to sell through to an artist, I want it as close to the final creative intent as possible, because otherwise you're fighting battles that you don't need to be fighting. I can't tell you how many times I've been working on a video and it's like "that's not done yet, just tell the artist it's going to look cooler when it's done," and you can say that in an email, but until it looks cool, it doesn't look cool. There have been issues with effects-heavy videos that I've done where we get notes back like "the energy is not right." And it's like, the energy's not right because you're looking at an artist on a green screen, not the effect that we're going to apply. So I try to do as much as I can in the offline.

For the Juice video, our director Quinn had a graphic designer friend who was mocking up what the "QVC" logo would be, the late-night logos… so I'm dropping those things in and using different plugins and effects to add the channel-change glitches and the VHS look, again just to sell it through as much as we can.

USING, WHAT, PROBABLY THE RED GIANT PLUGINS?

For that one yeah, we used a lot of the Red Giant stuff.


SO, GETTING INTO THE MEAT OF THE MATTER, HOW'D YOU GET LINKED UP WITH HIRO MURAI? WHAT DOES THE ROADMAP TO THIS IS AMERICA LOOK LIKE?

I started working with Hiro back in 2015 when we did the Atlanta pilot. I had just done 40 episodes of Comedy Bang! Bang! for IFC and Absolutely Productions, and the post producer, Kaitlin Waldron, had just done Tim & Eric's Bedtime Stories at Absolutely… she was talking to an editor I went to school with named Eric Notarnicola, who wrote and edited on Who is America for Sacha Baron Cohen and Nathan for You (he's an amazing director/editor himself), and Kaitlin was asking for a recommendation for an assistant editor position on the pilot, so I got the call.

At that point it was 2015. I'd moved to LA in 2012, I'd been listening to Childish Gambino since kind of that first EP, and I had been following Hiro basically since I moved to LA through his music video work, so when Kaitlin called and was like "Hey, we got this pilot, it'll run for six weeks with Hiro Murai and Donald Glover," I knew I had to do it and left Comedy Bang! Bang! early. I had a guarantee of probably eight or nine months of work there, so I gave that up and took the six weeks on the Atlanta pilot, and that's when I first met Hiro.

At that point Hiro had done 20-some-odd music videos. Everybody from Spoon to Earl Sweatshirt… he introduced me to a lot of his director buddies. He's repped at Doomsday Entertainment for music videos, so I worked on a bunch of different music videos with some of his friends. I had already been cutting for other people, but that was kind of a good break, and then a year prior to This is America, Hiro had a video for A Tribe Called Quest that he had me cut for him that was all motion control and very effects-heavy.

I don’t claim to be a VFX artist but I know enough to be dangerous, and I know enough about it to pre-visualize how things are going to work. So with that video, because it was all motion control repeating the members of A Tribe Called Quest and different versions of Busta Rhymes interacting with each other, we basically did like the rough cut in Premiere pulling selects and then immediately went into After Effects to make garbage mattes and track mattes, basically trying to sell through to the label and the artist what it was going to look like in the end.

So we were wrapping up Season Two of Atlanta and Hiro came in to the office one day and said “hey you guys wanna hear the new Childish Gambino song?” and of course we all jumped at the opportunity. A couple weeks after that I was working on some stuff for Rae Sremmurd for a director named Mike Piscitelli at Pulse Films, and I hit up Hiro and was like “Hey Hiro, you said you were doing a video for that song you played for us, who’s cutting it?” and he was like “We don’t have anybody yet, do you want to do it?” and I was like “of course!”

He sent over the treatment and as soon as I read the treatment I knew it was going to be amazing. From there it was off to the races. They did a day of rehearsal where they shot on Alexa, just as a safety, and then they shot the main day on 35mm (it was like Monday or something), we got the scans back basically on Wednesday, I had a cut together on Thursday, Hiro sat with me on Friday, and we sent to Donald over the weekend. Donald wanted to see half of one take switched, so we did that, picture locked that Monday, sent it off to VFX, the VFX came back in on Thursday, and then it was literally down to the wire with delivering where I had to run the drive to the color house MPC down in Culver City myself, wait for color to be finalized, and then upload it from there so that we could hit the Saturday release deadline.

NO KIDDING? SO IT WAS BASICALLY LIKE AN EXPENSIVE INDIE SHOOT! [laughs]

Aaahhhhhhhh I mean, Hiro talks about how (and this is him relaying this to me, so take it with a grain of salt) they had like two ADs on that shoot, like only two. I was just like "how did you… like, there's a guy doing a 13-foot fall in the background! How do two ADs pull off all of that!?" and he was like "we made it work!" [laughs]

SO HOW DO YOU EVEN APPROACH EDITING SOMETHING THAT IS ONLY A FEW CUTS, AND PROBABLY ONLY HAD A FEW TAKES OF DUE TO SHOOTING FILM? JUICE HAS A LOT OF ‘EDITING’ BUT THIS IS ALMOST ASSEMBLY, NO?

I have dumb analogies for everything; here's my dumb analogy for editing.

The director and the writer are like the chef, you know? They came up with the recipe, they went shopping for the premium ingredients, they brought that back into the kitchen, they’ve thrown it in the pan, they’ve baked it in the oven, they’ve assembled all those pieces.

As the editor, I'm like the sous-chef. Maybe I made a little sauce on the side, but then it's my job to plate the meal and give it to the waiter to carry out to the table. So in the case of This is America, I was given 5 Michelin Star ingredients from a 5 Michelin Star chef, and it was basically my job to put it on the plate and make sure it looked okay.

GREAT WORK-TO-RETURN RATIO ON THAT ONE THEN HUH? [laughs]

I've cut dozens of music videos at this point, and it is funny to me that This is America is the one that everybody loves. Not because it's not an amazing video, because it is, but from an edit standpoint I just had to put the most amazing food on a plate, you know what I mean? They did such a good job.

What I love about Hiro and Donald and working with them is that they are intentional. They don't want to find it in the edit. I've worked with directors that shoot everything at 60 frames a second so that at any moment they can go to slow motion; Hiro and Donald would never do that. They shoot with purpose, with intention, and I think it shows.


IT MUST HAVE FELT GRATIFYING TO WIN THE GRAND PRIX AT CANNES. DID YOU JUST GET A PHONE CALL LIKE "HEY, WE WON THIS"?

Basically.

I MEAN CONSIDERING THAT THE VIDEO WAS SO WELL RECEIVED AND MADE SUCH AN IMPACT, DOES IT JUST FEEL LIKE ‘ICING ON TOP OF SUGAR’?

I mean, I think it's the coolest video ever, and so of course I wanted to win all the awards, but I really hate talking about myself, or bragging about myself, so I'm like [rushed] "yeah, it won, cool!" ya know? It's not ungratefulness, I'm very grateful that I got to be involved, it's just like you said, "icing on top of sugar," you know?

What it boils down to for me is, I'm just glad that the video resonated with people and made an impact. To me it shows that, like, music videos can still be relevant in an age where oftentimes the most popular ones are just a… recreation of a TV show or a movie from the '90s or something, you know? The fact that a video can have a message, can elevate a song… I mean, when that video came out my mom called me that day like "hey Ernie, that music video you said you were working on was in my Google News Alert," and that just doesn't happen with music videos.

I think what the Cannes win means for me is that, in our culture which is so quick to move on to the next thing 23 hours later, the fact that This is America is still resonating with people a year and a half after it came out is really cool.


Nanguang 4′ Pavolite Tubes // Tool Talk

This article marks the beginning of a venture between PVC and Filmtools, a series I’m calling Tool Talk. In it I will spend a few minutes talking about a given piece of equipment on loan to me from Filmtools after having taken it out into the world on a shoot or two to put it through its paces.

To start, I've been given a four-pack of 4' Nanguang Pavolite Tubes. In our first episode, I shoot a couple of small music videos with Corey Gray and Mark Pelli to see how the lights fare in quick shooting scenarios.

The Pavolites are RGBW LED tubes, adjustable between 2700K and 6500K as well as across a full spectrum of colors. At 5600K the lights are rated at 2850 lumens with a decent color accuracy rating of 95 CRI. The dimmer fades from 1 to 100 in increments of 1 and doesn't flicker or color-shift as it dims.

One thing I did note, however, is that when you "desaturate" one of the HSI colors (the rainbow colors), it does so by replacing that "amount" of color with 6500K light. To that end, I wouldn't necessarily recommend using the saturation knob, opting instead for the brightness control. That being said, if you're already working in cool light like that, maybe it would work out okay. Definitely not indoors with tungsten fixtures.

The Pavolites give off a nice, even, soft light and run cool, as most every LED does. When charging, the battery side of the light gets mildly warm. There are 6 buttons on the side of the fixture, as well as two knobs, and you'll only really find yourself using the Menu, CCT (normal light temperatures), and HSI (color spectrum) buttons. Of the remaining three, two handle effects and one's an Enter button.

The on-board LED screen is simple and easy to read. Scrolling through colors/dimming/saturation is smooth but sensitive, and I found that the knobs aren't relative (there are hard stops), so if you go to a different setting, turn the knob in charge of your color, and then return to the HSI or CCT setting, it will reset your color to whatever is associated with where the knob now sits. Make sense?

As far as the effects go, I don’t tend to use them. They do work though.

I was able to easily get readings of f/5.6 or f/8 when using them indoors, which was plenty for me. I was shooting around f/2.8 at ISO 850, so that was right around where I wanted to be exposure-wise, often choosing to turn them down into the 50-75% range. When used as background lights I would turn them down even further.

One issue with a light like this is that you'll likely have to bring it in close to your talent to really get the look you want. That isn't a problem for close-ups, interviews, or things of that nature, but if they're the only lights in your kit you may run into trouble when you try to go wide. In the video above I explain how I cut them out in post when I hit that scenario: I flew the C-stands out at the end of the take, grabbed a still from that clean section of footage, and masked it in as a patch over the part of the frame with the stand in it.

An advantage of battery-powered lights as lightweight as these is that you can quickly move and adjust them without worrying about them being too hot, or having to swap power outlets or anything like that.

While they're stated to run for ~2.2 hours on a full charge, I found that I was able to use them (generally two at a time) for both the shoot with Corey Gray and the one with Mark Pelli without charging them between shoots. That being said, the Pavolite Tubes come with an impressively long power cable which goes into a brick-in-a-bag that you can hang off the knuckle of your C-stand, which then goes to a barrel connector that powers/charges the light. You can also use an adapter Nanguang makes that splits that barrel into two, allowing you to run/charge two lights off of one power cable. Very nice, and very simple to use. I can't overstate how much I dig a long cable.

As I mentioned, I was generally using two at a time: one as an overhead sort of "base exposure" and one as a key, sometimes electing to have just the overhead in play. From there I would use the other two or three lights to enhance the backgrounds of shots with some slashes of color.

Build quality is solid but does seem more "plasticky" than the Asteras, almost squishy, as if there's a layer of air between the outer sheath of plastic and the polycarbonate core of the tube. In any case, it's not an issue, as they seem incredibly durable.

As far as mounting is concerned, I opted for Kupo Kino Tube holders, as those worked well with my C-stand setup and were the simple solution. There are other options you could use, really anything that's meant for T12 fixtures, and Nanguang includes braided cables for hanging the lights vertically as well as plastic clamps with two ¼"-20 screw holes on the back, I assume for location mounting or situations where you need to get creative. The clamps are sturdy and semi-transparent so as not to block any light, which is nice. The tubes also have octagonal end-caps that let you set them on a surface without them rolling over, something that bothers me with the Astera offerings. On the other hand, if you try to lay the Pavolite on its back, you run the risk of accidentally pushing one of the buttons on the panel, as they're proud of the tube's casing. Swings and roundabouts.

All in all, I found the lights reasonably bright, flicker-free, and easy to use. At just over $400 they’re competitively priced against similar lights on the market and are a solid option in my opinion.

 


Don’t get the Mac Pro, Get a Puget System

There are going to be, roughly, two types of people reading this: pros and builders. As detailed in my colleague Damian's article, the pros are people who just want a fast rig that's reliable, but who perhaps have no idea what goes into a computer or simply don't have the time to do the research. The builders are the ones who have been building PCs for a minute now and want to know the best combo of hardware for their use. This article will hopefully cover enough bases for both, by way of a review. In this case, a review of a rig Puget Systems built for me to see what the "fast lane" really feels like.

Puget Systems is a computer research company in Washington aimed at creative professionals, with a focus on enabling proxy-less workflows. They spend all their time min/maxing hardware against various programs, testing extensively and posting their results on their blog. It's companies like Puget that make building a new rig infinitely simpler, as you don't have to do the tests, research, and hard labor at all. In some ways, you can just look up the Puget hardware guides, pick the top-performing hardware in each category for your budget, and build a rig that way. Doing that, however, means you're your own support and IT. For some that's fine, but for the aforementioned pros who want to worry about the work while someone else handles the hardware, you'll be happy to know that Puget Systems does just that, literally putting their money where their mouths are.

Before we get to their rig, let’s talk about mine for comparison.

I've built my own computers (and built them for others) for the past 13 years now, and while I no longer know the model number of every ATI and NVIDIA card on the market (I guess ATI is AMD now), I'm still relatively knowledgeable on the subject, at least practically speaking. My current computer was built at the beginning of 2017 and is built around a water-cooled 4.2GHz Intel i7 7700K and an NVIDIA GTX 1070 sitting on a Gigabyte Z270P-D3 motherboard with 64GB of G.SKILL Ripjaws V DDR4 2400 RAM. This handles most everything I throw at it in the 1080p range rather well, adding tons of effects and whatnot aside.

For hard drives, I've got a 1TB BP5e MyDigitalSSD, a 1TB Samsung 860 EVO M.2, and a 5TB Toshiba something or other. I've got a 12TB DAS as well. There are a couple of externals thrown around, but we're getting into the weeds here. The drive thing will come into play later, though.

I was mistaken in thinking I would receive “a” computer from Puget. Up until the announcement of the Mac Pro, if someone like Apple or Dell asked me to do this they’d send me “the new one”. Instead, Puget arranged a call in which they asked for the details of my work, what programs I used primarily, what formats I’d be editing, and so on. I told them I primarily shoot & edit 1080p AVCHD in Premiere, but had recently been editing more 4K Canon and Arri footage. From there I color and render in Resolve, usually. With that info, they decided the best computer for me would contain the following hardware:

– Fractal Design Define R6 Case
– Gigabyte X299 Designare EX
– Intel Core i9 9940X 3.3GHz Fourteen Core
– Crucial 128GB DDR4-2666 (8x16GB)
– PNY Quadro P5000 PCI-E 16GB

That's a serious system. The computer I currently use, as listed earlier in this article, cost me around $1,500 in 2017, a bit more with the later addition of more RAM and the water cooling. Puget's rig came in at a casual $7,329.28 (with free shipping). So what does all that extra cheddar get you?

Before we even get into the reasoning behind the specs, you're definitely paying for the expertise. The gang at Puget didn't come up with the build out of thin air or by a council of forum posts; they meticulously test every piece of hardware that comes out with each program they focus on, to see precisely which piece will outperform in any given build. They write about this extensively on their blog, if you'd like to see the proverbial proof in the pudding.

The reason I built my computer the way I did was that I simply wanted the best hardware I could find for the money I had, but to be honest, my research primarily came from the gaming side of things, and I'm not optimized for any one particular program, game, or otherwise. I got the GTX 1070 because NVIDIA and Adobe have some torrid love affair going on, it was "VR Ready" (which I took as a benchmark more so than a feature I'd actually be using), and it was dramatically cheaper than the then-new GTX 1080 but close enough in spec by my estimation. I got the 7700K because (I believe) PC Part Picker told me to. Something about overclocking. Had I run into Puget's blog sooner, I may have picked something else.

My hard drives are set up in a configuration dictated by a Puget article I read later on. If you're not yet down with the multi-drive system, here's the part of the article you get to take home with you: if there's any one thing I've learned from this experience, it's that a proper hard drive setup is the absolute first thing you should attend to if you're looking to boost performance, and it will likely give you the best upgrade price/performance ratio, all things considered.

Puget goes into incredible detail in the article, but the meat of the idea for an optimum setup is thus:

– Your main drive should be an SSD, where you’ll have your OS and Programs installed.
– Your second drive should be an NVMe drive, where you’ll have your active media.
– You should have another SSD with your Cache & Scratch, and a RAID or External Drive as your Storage and Archive.

Per the article, even just “[moving] the media cache drive off the OS drive averages more than a 6x increase in performance, [and] in some instances can be 20-30x faster!”

That’s no joke. If your programs, media, temp files, and OS are all on the same drive your computer can be the fastest thing on the block and it won’t matter because you’ve clogged up a pretty serious bottleneck. To that end, you want your media on the faster NVMe drive instead of your OS/Programs because that software won’t need the high read/write speed like your media does. Kind of a no-duh statement, but I did it backward the first time.

So, with all that out of the way, how’s the rig I was sent?

The first thing I noticed was the size and the packaging of the rig. The computer was placed back into the box the tower case came in, foam inserts and all, and then that box was put into a larger Puget Systems-branded box with extra-large foam spacers for safe transport. In the box was an amazingly detailed and personalized user manual that went through exactly what each port was, what hardware was in the computer, all the tests they ran, who ran them, when they ran them, instructions and FAQs, troubleshooting… it even had all the install files on a flash drive. It was shockingly comprehensive and tailored exactly to my system. The computer itself was only slightly larger (18"x9"x21") than my current tower and slid easily under my desk, where I swapped out all my connections and flipped the power switch.

Setup took no time at all. One nice thing about having a computer custom-built for you by professionals is that you just have to create a Windows account and you're done. No drivers, no long-winded setup… I was probably in menus for 5 minutes tops. Various instances of "the cloud" being useful came up, like my Chrome bookmarks/accounts and my Thunderbird stuff, and they had pre-installed Creative Cloud, so I simply logged in to that and installed all my programs from there. For some more common programs, I used Ninite.

A note about swapping systems and Resolve: definitely get the dongle if you’re not meticulous about your keycodes. There’s no way to see what your Resolve Studio key is anywhere. There’s no central database, there’s no “About” section for it, you either have it on a piece of paper or you don’t have it. Luckily I bought my copy online and had an email from the store with the code on my old machine, but had I not I’d be up a certain creek. Just figured it was worth mentioning.

Once I got the computer, I finished editing what I currently had on my plate and started looking for some gigs for which to test it out. I set up a few music videos (as well as some educational content) and shot them at 4K, wrapping them up pretty quick. No fuss. In Premiere at 4K, I had real-time playback at ½ resolution, without proxies, and in Resolve I was getting about 10fps with all my grades and OFX laid in. The rig was designed more for my 1080p work anyway, and even with effects, I was getting full-res playback in those instances.

In terms of performance, I was essentially able to do whatever I wanted whenever I wanted. By that, I mean I didn't have to think "Oh, I'll have to denoise at the end so I can still preview at a reasonable speed" or anything like that. I just did what I thought I should do when it came to me, which was a surprising breath of fresh air. I could jump around and work on what caught my attention in that moment and keep moving at the speed of inspiration without any interruptions, which is something I quickly got used to and sorely miss now. In Resolve alone, things like Scene Cut Detect, window tracking/stabilization, and encoding were all happening at shocking speeds. Encoding was near-real-time in some cases. Premiere let me front-load all the audio effects and Lumetri stuff I wanted without slowing down, and the computer itself was absolutely silent, which is huge. That fan hum can get incredibly grating if you hear it all the time, which is why I went water-cooled myself. The Puget rig is air-cooled, as is traditional, but you wouldn't know it.

I told basically everyone I knew that I had this thing and was looking for a challenge, so a few people came to me asking for help with their projects, including one instance where I essentially had to rescue an entire project to make a festival deadline with a day to spare. There was a lot of rendering out versions of the 20-ish-minute 4K short film for the director to look at and give notes from a remote location, but each render only took a little longer than the film itself, maybe 30 minutes. On my current rig it would absolutely have been in the "hours" category. Instead, we took care of that essentially overnight, and I was back to looking for more stuff to do.

This is when it occurred to me that the thought of "oh, I don't want to work on XYZ, that'll take forever and the return isn't great" had completely left my mind. I had this rig for about two months, and I felt like I spent way more time either editing or working, but not a ton of time "waiting." It felt more like sitting down and simply writing a letter than "booting everything up and launching a program and waiting for peak files to load and making a pot of coffee…" etc., etc. Suddenly my computer was a workbench instead of a thing to be wrestled with. Again, it may sound stupid, but when your tools get out of your way, the work becomes far more enjoyable and much less daunting. I found myself wanting to edit because I was able to work at my speed instead of the speed of the slowest piece of hardware. I would argue that, for a certain level of work, the perceived cost of a computer drops dramatically when you're able to get more done in the same amount of time with it.

As I mentioned before, Apple kind of just makes "one" computer that you can beef up, which costs an elbow and a knee, but Macs are ubiquitous in the creative space largely due to their support.

There aren't a ton of companies building PCs specifically for filmmaker types out there, HP probably being the main one, and if you build your own computer you're your own support. Now, for me, that's fine; I've been doing this for a while. But if you're not a complete nerd like me, or don't have the time and just need it fixed, a support team is vital. Puget offers lifetime labor and technical support, which also covers the installation of upgrades purchased through them. They're also just up in Washington, so you're not talking to someone reading off a screen in the middle of nowhere; you're getting the people who actually built and tested your PC.

Moving along, how about rubber/road times?

As a test, I rendered out the timeline for Red Light by Tiffany Dunn, which I shot in 4K Cinema RAW Light on the C200 for my review, on both my current rig and the Puget system. The Premiere timeline has a blanket Lumetri effect and some stabilization, to test a real-use scenario, and the DaVinci timeline is fully colored as seen in the YouTube video. The total runtime of the video is 3:06.

RENDER TIMES:

Using the YouTube 4K preset in Premiere–
Puget: 5’36”
Personal: 21’13”

Using the QT422 preset in Premiere–
Puget: 8’10”
Personal: 15’22”

Using the YouTube 4K QT preset in Resolve–
Puget: 9’58”
Personal: 8’53”

Using the YouTube 4K MP4 preset in Resolve–
Puget: 9’13”
Personal: 9’04”
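If you'd rather see those as speedup factors, here's the quick conversion I did:

    def to_seconds(t):
        """Parse a minutes'seconds time like 5'36 into total seconds."""
        minutes, seconds = t.split("'")
        return int(minutes) * 60 + int(seconds)

    renders = {  # preset: (Puget, personal)
        "Premiere YouTube 4K": ("5'36", "21'13"),
        "Premiere QT422":      ("8'10", "15'22"),
        "Resolve YouTube QT":  ("9'58", "8'53"),
        "Resolve YouTube MP4": ("9'13", "9'04"),
    }

    for preset, (puget, personal) in renders.items():
        ratio = to_seconds(personal) / to_seconds(puget)
        print(f"{preset}: Puget is {ratio:.2f}x my rig")
    # Premiere: 3.79x and 1.88x; Resolve: 0.89x and 0.98x (my rig wins, barely)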

So, something to note here: at least on my computer, Resolve consistently renders faster. My guess is that it's because Resolve uses my GPU, whereas Premiere relies mostly on the CPU. How my GTX 1070 outperformed the P5000 in DaVinci was intriguing, and when I asked Puget about it they told me that, in reality, most consumer cards can in fact outperform their workstation counterparts; it's just that cards like the P5000 have more VRAM, can output 10-bit signals, and are in general more reliable. In any case, since figuring this out I've kicked all of my timelines out of Premiere and into Resolve to render, even if I wasn't going to do much coloring. Really surprising considering all of the "nVidia + Adobe = Love" talk I see everywhere, but Premiere tends to leave the GPU idle whereas Resolve will eat up every last inch of it, so the performance difference makes sense.

Render times aside, working within each program was flawless, and I never once crashed or got hung up. While I don’t often crash on my personal rig, it does happen from time to time, and it will freeze up for a few seconds while it thinks about what it wants to do, which I never experienced on the Puget computer. Looking back on it, it’s actually kind of interesting how low-stress those two months with the Puget rig were.

I’d be lying if I said that small things don’t sometimes get under my skin (slow walkers!?), so having the thing I sit in front of for hours on end, every day, become transparent was perhaps the most interesting, if not the most relevant and important, thing I noticed. Filmmaking can be a high-stress gig, and having things outside your control affect your performance can be infuriating and can kick you out of your flow. You just kind of get used to computers being slow because, well, they always are. I didn’t have a frame of reference for what “actually fast” was, because even my “very fast” current computer wasn’t engineered to be the best; it was made to be affordable. Now that I’ve seen the other side, I get it. A fast computer doesn’t make you faster, but it does allow you to be.

That being said, if you’re a solo producer of content and you’re not juggling clients or working with huge files, building your own after a trip through Puget’s blog might suffice. Or, if you’re a production company or individual that primarily does web content or less-demanding editing, Puget also makes more modest PCs that would fit the budget you were thinking of spending on those Apple Trashcans.

For those who need to work at the speed of opportunity, inspiration, and crunch times, I can say with assurance that there’s a considerable tradeoff between saving money and stressful man-hours. Simply put, if time is a factor, get the faster machine. You don’t want to be in the 11th hour, waiting 30 minutes for a 3-minute video to render just to watch it back for QC, because you couldn’t view it at a reasonable quality level in the timeline. Nothing kills the spirit more than spotting a bad matte or something 14 seconds into a video you just rendered, knowing that if it’s JUST that, you’ve wasted so much time on such a simple glitch. It’s much nicer to just go “oh, whoops” and fix that thing before you ever get close to that point.

Momentum is huge in the creative world. For the two months I had this PC, I was able to build up a lot of it, and I did. My turnaround sped up and I was more confident in my ability to hit deadlines I’d set well ahead of time. Now that I’m back on my “old” computer, I do notice the difference in “flow”. It’s not earth-shattering, but it is something.

I will miss you, ridiculously overpowered computer.

FINAL THOUGHTS:

If you are looking to build your own rig, check out the hardware guides from Puget below to get an idea of what gear will fit your needs:

Premiere Pro Hardware /// DaVinci Resolve Hardware /// All Hardware Guides

– In general, I was informed that for Adobe, mid-range nVidia hardware is actually pretty good. AMD has released some really compelling cards, especially for Resolve, but overall nVidia still outperforms them.
– If you’re working with RED footage, upgrade your GPU; for ARRI or H264, beef up your CPU; and for CDNG/After Effects work, you’ll want high single-core performance, but you don’t need the newest screaming-hot chip on the market.
– To find how fast your drives should be based on what footage you’re using, take the bitrate of said footage and add 50%; that number is the minimum speed your drives should be (see the quick sketch after this list). With NVMe drives these days, that number shouldn’t be an issue.
– For RAM, 32GB will suffice for 1080p footage, with 64GB being enough for 4K. Ballpark. Obviously, more is better, but not always necessary. RAM is like garlic: you definitely want some, but you’d always like a little more.
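
To make that drive-speed rule concrete, here’s a minimal Python sketch. The 470 Mbit/s example bitrate is roughly ProRes 422 at UHD/24p, but it’s just an illustration; plug in your own footage’s numbers:

```python
# Rule of thumb from above: minimum sustained drive speed = bitrate + 50%.
def min_drive_speed_MBps(bitrate_mbps: float, headroom: float = 0.5) -> float:
    """Footage bitrate in Mbit/s -> minimum drive speed in MB/s."""
    mb_per_sec = bitrate_mbps / 8            # Mbit/s -> MB/s
    return mb_per_sec * (1 + headroom)       # add the 50% headroom

# Example: ProRes 422 at UHD/24p runs roughly 470 Mbit/s
print(f"{min_drive_speed_MBps(470):.0f} MB/s minimum")  # ~88 MB/s
```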

————————————————————————————————-
*I wasn’t able to test it on my personal PC because I forgot to save the DaVinci Database file (so frustrating, by the way, that Resolve doesn’t have regular old project files), but my 2:23 reel took 2:34 to render on the Puget rig. Outstanding.

Categories
Post Production Production

Exploring Ultrawide Workspaces – A Review of the Samsung CJ791

For editing, the size and layout of your workspace is nontrivial. Most would argue editing on a 17” laptop is an enormous pain and that an external monitor is a necessity. I’ve built all of my computers, so I’ve long had a choice in monitoring, upgrading when technology warranted it (the jump from 4:3 CRT to 16:9 LCD was memorable). I’ve had the good fortune of adding some real winners to my setup, like the BenQ 27” I reviewed at the beginning of the year, and recently Samsung sent me their 34” CJ791 Ultrawide Curved monitor to check out. This article will be half review/half discussion about workspaces, monitors, and what to look for when adding them to your workstation. As an ethics statement, Samsung let me keep this monitor but haven’t compensated me in any other fashion and have no input on this article; the words are my own.



To start, the 34” Samsung CJ791 is an Ultrawide QLED Curved Monitor designed for “creative and business audiences who seek a comfortable and efficient work experience”, compatible with both Macs and PCs. The highlight feature, beyond the display itself, is the pair of Thunderbolt 3 ports that transmit display, data at 40Gbps, and power at 58W, all at once and over one cable.

If you’re unaware, Thunderbolt 3 looks like USB-C, but it isn’t. Long story short: like MP4, the USB-C shape is just a container, and what runs through it can vary. Usually you’re getting USB 3.1 or Thunderbolt, but you can even put USB 2 in there, so make sure that you’re using the right cable for the right job. You’d hate to expect the full 10 Gbps transfer speed of USB 3.1 Gen2 only to get a paltry 480 Mbit/s.
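
To see why that mismatch stings, here’s a rough Python sketch of how long a hypothetical 100GB card dump takes at each spec’s theoretical ceiling (real-world throughput is always lower, but the ratios hold):

```python
# Theoretical time to offload a 100 GB card dump at each interface's ceiling.
SPECS_GBPS = {
    "USB 2.0": 0.48,        # 480 Mbit/s
    "USB 3.1 Gen2": 10.0,
    "Thunderbolt 3": 40.0,
}
FILE_GB = 100  # size of the dump in gigabytes

for name, gbps in SPECS_GBPS.items():
    minutes = (FILE_GB * 8 / gbps) / 60  # GB -> Gbit, then seconds -> minutes
    print(f"{name}: ~{minutes:.1f} min")
# USB 2.0: ~27.8 min / USB 3.1 Gen2: ~1.3 min / Thunderbolt 3: ~0.3 min
```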

There are a few things to consider when setting up a new system or adding new monitors to an existing one, but the big two are size (both physical and resolution) and color accuracy/image. There’s kind of a strange tradeoff between resolution and size that I’ll try to explain succinctly, but even I have trouble articulating where the various “sweet spots” are.

SIZE: PHYSICAL – It’s Big, But Not Huge

I can confidently say that larger is better for editing. Aside from having more room to arrange your workspace, you also have a larger screen on which to view your content and give it closer scrutiny. Kind of obvious. In my experience, a traditional 1080p 24” monitor is absolutely as small as I’d be willing to go, a 4K 27” is a nice sweet spot, and the low-30”s are the likely upper limit of comfort/desk space.

I’ve always worked with two monitors, and before adding the CJ791 they were a 24” Dell Ultrasharp and the aforementioned 27” BenQ. I’ll talk about their physical layout in a bit. In the last article I said 27” was probably as big as I’d want to go, and while that’s still true (maybe a tiny bit bigger), it’s important to understand how an ultrawide differs a bit.

While the Ultrawide is “larger”, it doesn’t replace a two-monitor setup at this size. The 34” CJ791 -in regards to physical size- is generally an upgrade over any single traditional monitor under 27” (resolution aside), but its wideness doesn’t replace two of them, as it’s only a few inches wider. In this case I’d still want a second, and from an editing perspective I’ve settled on a vertical one to the left as the best addition. The workspace in Premiere looks like this:


I like this a lot. However, this is where I waffle on Ultrawides a bit. If you are starting from scratch and only getting one panel, I’d possibly get a bigger one like the 49” CJ89, as it’s like having two 27” 1080p monitors side-by-side without the stupid bezels in the middle, which is perfect.

Most people are used to that type of setup, and with some creative workspace arrangements you’d be off to the races. Having the tertiary vertical monitor would almost be redundant, but honestly, I’d still use it. That being said, if you’re supplementing an existing setup, the 34” would likely work just fine, as in my case.


Where my situation is somewhat unique is that I now have three monitors: the 24” is vertical to my left, the CJ791 is front-and-center, and the 27” is mounted above the Samsung. I actually really like this setup, even if it took some wrangling with the Lords of Amazon to get the right mounting situation figured out. I only have so much room on my desk and to get one monitor over the other I had to go for a pole-based desk mount. Let’s just say it can be tricky if you don’t plan ahead.

To that end, I will say that the VESA mount adapter on the CJ791 is a little big (it doesn’t attach directly to the monitor as it’s a curved surface), adding a few inches between the back of the monitor and the mount. This may or may not affect your setup.

The stand that comes with the monitor is pretty big, but I doubt it could be smaller just due to weight distribution and other science-y things. I have a shallow desk, so going for the pole mount was necessary, as the monitor had to be pulled a little too close to me to accommodate the back of the stand. Not a big deal, as I wanted to make this change months ago anyway; the pedestal was a nuisance no matter where it was.

SIZE: RESOLUTION – It’s in the Middle

This is where things get a little murky, but I’ll do my best to throw some purification tablets of truth in there for you.

With all three monitors in play, I have Premiere set up thusly:


The left monitor has all the audio, timecode, and bins/effects/effect-control stuff, the Ultrawide just has my timeline (this is fantastic, by the way), and the upper monitor has my source/program, scopes, and a footage bin. Having the timeline alone on its own monitor is actually really pleasant, and probably the most enjoyable thing I discovered using the CJ791. On top of that, I can fullscreen the program display and have a nice big image to look at during playback on its own dedicated monitor once I’m done pulling clips (when the source monitor gets fullscreened).

When you’ve got the entire timeline laid out in front of you, filling your vision completely, zoomed in so you can see all of the thumbnails and nuances in the waveforms, you kind of lock into what you’re doing as if you were tinkering away at a physical workbench. That’s the best I can describe it. I found myself having an easier time concentrating as I was less distracted by the rest of the workspace (or my desk), and being able to compartmentalize things by-monitor gave an interesting mental boost as well; I only had to look at the video when I was scrutinizing the image, I only had to look at the data monitor when I was making adjustments, and I spent most of my time head-deep in my timeline. I will often “Radio Edit” -editing to the audio- so not getting distracted by the pretty pictures and being able to focus on the big waveforms was a plus.

In terms of raw resolution, that’s best shown with a screenshot. The image of my workspace above has been adjusted to look nice for demonstration purposes, but when I take a screenshot and just paste it in Photoshop, it looks like this:


In the photo of my desk from earlier you can see that this doesn’t accurately represent what is physically happening, but it is what size everything is “technically”.

So here’s where the murk and tablets meet. The 27” monitor is 4K, the vertical one is 1080p, and the Samsung is 3440×1440; somewhere in the middle. Due to the difference in scaling, the CJ791 sometimes “feels” smaller than the 27”, even though it’s physically not. It’s kind of a weird optical illusion, and for me it doesn’t really make much of a difference as I’ve got a lot of space with all the monitors here, but as you can see there’s -to some degree- more room to work with on the physically smaller 27” due to the higher resolution. The wide shape does, however, offer a better timeline experience, which I have really taken to, and when you watch anything fullscreen you can’t tell the difference. Generally, I find myself using the vertical monitor in conjunction with the Ultrawide the most, but in Resolve that upper display is necessary, as it’s connected via a Decklink 4K and used as a Reference Monitor (I know it’s not a Flanders. I know.). The setup, size, and configuration of your monitors really do depend on what you’re doing.
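
If you want to quantify that “feels smaller” effect, it mostly comes down to pixel density. A quick Python sketch, using the sizes and resolutions listed above:

```python
# Pixels per inch: diagonal resolution divided by diagonal size.
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return math.hypot(width_px, height_px) / diagonal_in

print(f'27" 4K:    {ppi(3840, 2160, 27):.0f} ppi')  # ~163 ppi
print(f'34" CJ791: {ppi(3440, 1440, 34):.0f} ppi')  # ~110 ppi
print(f'24" 1080p: {ppi(1920, 1080, 24):.0f} ppi')  # ~92 ppi
```

At the same UI scaling, the denser 27” simply fits more workspace per inch, which is exactly why the physically larger CJ791 can read as “smaller”.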

The 49” CJ89 from Samsung is 3840×1080, so while it’s functionally two 27” monitors side-by-side, they’re still 1080p. While many people are editing exclusively in 1080, this may give some editors pause, but really it’s more of a workspace consideration than a monitoring one, in my opinion. If you’ve never used a 4K monitor you wouldn’t notice the difference anyway, and it would for sure be an overall upgrade.

All that being said, I’m being really pedantic. The CJ791 is a business monitor and isn’t even meant for video editing specifically (is anything?), so the fact that it’s 95% of the way there is pretty legit. Again, if it were your only monitor, Premiere would still look like this:


That’s a very comfortable work environment if you ask me.

SIDEBAR – Auxiliary Uses

This thing is incredible for gaming. Seriously. I’m destroying in Apex Legends right now, and I’ve taken my game from 8 to 11 in Rainbow Six: Siege. 100fps is noticeably better than 60fps, and having your peripheral vision filled makes everything feel so much more natural; you can perceive movement a lot more easily. Plus, since the resolution is high but not too high, you get a nice balance between graphics and performance. Like I mentioned with the Premiere timelines, having your head “in there” makes it way easier to focus.

COLOR/IMAGE – Pretty Good

The Samsung has a Quantum Dot VA panel, which they refer to as QLED (not to be confused with LG’s OLED, which is a different technology), and it comes in at a respectable 3,000:1 contrast ratio and a 4ms response time at 100Hz while covering 125% of the sRGB color space. I also find it pretty sharp; I can’t see any real aliasing on the edges of windows or what have you. It looks clean.

After calibrating all three of my monitors together, I found that the Samsung was still a little red, which I was able to correct with the on-display menu pretty easily, to where all three matched. I don’t quite know why this is the case, especially after calibration, but as I’m not doing my color work on this display it’s not a huge issue for me. For you, it very well might be. Again, it’s not meant for color work. It’s a business monitor.

Another thing to consider is that, as the monitor is curved, your angle of view can somewhat affect the appearance of colors/contrast in some areas and not others simultaneously, and reflections from lights directly behind you may be distracting in low-light situations. It’s a matte screen and hides reflections rather well, but you can still see a soft glow that’s sort of “focused” into your face due to the parabolic nature of the screen if there’s a light right behind you. During the day you don’t notice, but at night you can tell. I just turn that light off; other lights exist.

Long story short, you shouldn’t be using a business monitor for color work so this section doesn’t really matter, but in any case, watching content on it is quite nice and I don’t notice any biases in any particular direction when I’m not actively scrutinizing it. Resolve also scales really nicely so since you’ll be evaluating your image on a reference monitor anyway, the extra room is nice.


HARDWARE – Fantastic

The CJ791 is the world’s first Thunderbolt 3 QLED Curved Monitor, resulting in an amazing one-cable solution if you’re set up for it. With just the one cable you transmit the image, power, and data at 40Gbps, which is 8x faster than USB 3. Unfortunately, I do not have a Thunderbolt 3 port on my PC, so I had to power it traditionally (damn you, two-cable solutions!). It comes with a barrel-type power cable with a brick, which was somewhat surprising, as I just assumed every monitor used that same rectangular plug. It’s nothing important per se, but it’s something to note. The monitor is sturdy, feels well constructed, and has a super thin bezel, which is almost a requirement of monitors now. Physically, the monitor is lovely to look at.

CONCLUSION – A Good Addition to Your Setup

I’ve got such a weird opinion here but I’ll try to break it down:

This is objectively a nice monitor. There’s no real reason to not get it if you’re in the market, with some nitpicky, min/max-y caveats.

For editing, it would be a great supplement to your existing monitor, or an amazing secondary for your Thunderbolt-equipped laptop, as this is essentially a wide 27” 1440p monitor. I feel it’s necessary while editing to use the real estate of two monitors, if for nothing else than organization. So if I were in the market for two monitors, I’d get the 49” CJ89 (or similar) to get the workspace of two full 27” monitors, even if they’re only 1080p; for having just one, the CJ791 is a great choice.

For coloring, you’d obviously have a Reference Monitor set up, so this would automatically be your secondary (or primary, depending on how you think about it), and it does quite well in that position. Resolve scales nicely, and honestly, without being able to arrange your workspace like in Premiere, I often find myself using the single-monitor workspace over the dual-monitor one anyway. With two monitors, the Scopes panel gets relocated away from the image/adjustment panels (why!?) and all 4 scopes are displayed with no way to turn off the ones you don’t want, so having the extra real estate with the CJ791 is nice.

For day-to-day stuff and gaming, it’s great. At just over $800 it’s priced a little higher than I’d expect, but I also can’t use the Thunderbolt features, so that’s something to take into account. It may not be 4K, but it does have that 100Hz/4ms response time going for it, which is awesome.

Desktop video cards don’t output Thunderbolt, leaving that to be handled by the motherboard or a dedicated card, so if you were to go that route the display would run off your integrated graphics. I know on PC you can have both your integrated and discrete graphics running at the same time, and I assume Premiere/DaVinci would still use your discrete GPU when rendering, but I wonder what that situation ends up looking and working like in practice. I’d love to test it. An article for another time, I suppose.

Overall, this is a great monitor. Like I said above, I tend to use it in conjunction with my vertical monitor the most (even when editing), as it seems to be the ultimate productivity setup for me, so if you’re in a situation to pull that off, I highly recommend it. With 3 monitors you’re in Fantasy Land. Plus, your desk will look like a station on the Death Star, which is cool.

 

Categories
Production

Zeiss opens a new space in Sherman Oaks


Like Sigma and Canon before them, Zeiss has opened up a showroom/workshop hybrid space, this one located in Sherman Oaks, chosen as a central spot for the majority of the Los Angeles area. The space is designed for anyone interested in shooting with Zeiss lenses on their next project to come in and kick the tires on whichever line they’re eyeing. After setting an appointment, obviously.

The cinematographer-focused space has a “rental house” area with some cameras available to test the lenses with like the ALEXA SXT, RED Monstro, and EVA-1, a lens projection room to get some more technical information about your lenses of choice, a mini-theater with a connection to the editing/color bay, and a little display area out in front with Zeiss’s current lens lineup as well as items from the company’s history such as old microscopes and cameras they’ve made.

After checking out the mini-museum in the front, you can head to the left into the tech area, past the display case with all the Zeiss options available in the cine realm. This room is a well-lit space centered around a camera tech bench aimed at a focus chart. DPs are invited to bring in their own camera, or use one of Zeiss’s, and see how the Zeiss look fits their project’s style. In addition to the chart, there’s a couch off to the right near the wall of windows, placed intentionally to allow you to film a person (arguably a better way to test a “look”).

From there, you can head to the opposite side of the office to the post-production room, where you can see what you shot, evaluate the image more closely, and even project it in the aforementioned mini-theater (which also has an nVidia SHIELD connected for accessing the usual batch of streaming sites and apps). The post room has an nVidia-based PC running the newest GPU rated to play back 8K RED footage in real time, a dual-screen setup, as well as a 4K HDR television for pixel-peeping, and the usual batch of software like Resolve and Adobe.

Next to that room is a lens projection area containing a meticulously-flattened wall to project onto via their GECKO PRO system, which can test PL or EF lenses covering up to the largest available formats. For the most technically-minded cinematographers among us, this room allows you to get down and analyze the exact characteristics of any given lens and compare them. You’ll be able to see exactly where your falloff starts, exactly how sharp the lens is, exactly how much resolution it can… resolve… any lens nerd’s dream.


It’ll also give you a lot of important subjective information about your lenses, which can be used, for example, in post if you need to replicate some of the characteristics of that lens onto other footage. A more exact method would be to use Zeiss’s in-lens metadata.

The tour of Zeiss’s new educational space was largely taken up by Cinema Sales Manager Snehal Patel demoing many of the features of Zeiss’s lenses, sort of as a demo of the service the space itself offers, which I thought was nice, as I learned a lot. For instance, their lenses produce relatively industry-standard Cooke /i metadata in a sidecar file that you can link with Silverstack, showing you all kinds of stuff, including the exact distortion and shading characteristics of any given lens at that focal length/focus distance at that given frame of the recording. Again, post houses would much prefer that to a handwritten notepad that says “Take 3, 32mm, rack focus shot” or something like that. Testing these things ahead of time is a necessity. While a rental house can help you on the gear front, this Zeiss location is intended to let you take a couple of hours and test every facet of your shoot. At least if you’re thinking of shooting Zeiss.

While the space is open to anyone wanting to check out what Zeiss has to offer (with an appointment), it’s not generally open to the public. To fill that void, they intend to hold educational events, meetups, seminars, and the like open to everyone else. As I left the space I heard Snehal say they had 700 people RSVP’d for their first one, which should be cozy.

Categories
Featured Production

Are Canon Cinema Cameras the Ultimate Documentary Solution?

This year, all 5 documentaries nominated for an Oscar were shot on Canon cameras. To further discuss this accomplishment, Canon gave me the chance to interview Tim Smith, Canon’s Senior Advisor for Film & Television. I took the opportunity to do so and derailed the conversation at every turn, reminiscing about “back in the day”, the state of the industry, and getting shooting tips out of him. This interview took place before the Oscars, so we didn’t know who won yet, but an interview from Brian Hallet with the winner, Jimmy Chin, can be found here.

Tim’s job at Canon is unique. He embeds with higher-profile productions and works with cinematographers to aid productions using Canon cameras and make sure they don’t face any hiccups. “Face of the product” type stuff. He is also an associate member of the ASC, and as such has a wealth of knowledge about imaging as a whole.

 

Kenny McMillan: Getting to hang out on set all day doesn’t sound like a terrible gig.

Tim Smith: Oh yeah, it’s a gift. I wouldn’t trade it for the presidency of this company; I’ve got a better job than he does. Best job I’ve ever had.

K: How did you land that?

T: 30 years ago there were really no experts in electronic imaging. My background was still photography, and my passion was audio, so video cameras come out and they’re kind of still cameras with tape recorders attached to them. That’s about as close as you could get to an expert 30 years ago. Truth be told, I probably wouldn’t get the job these days; I don’t have the background for it. Back then there weren’t a whole lot of people doing that kind of thing, and the position with Canon came up, along with one at a rival company at the time. I interviewed with both of them, and luckily I took the Canon job, and then 30 years later I’m talking to you.

K: So looking at the five documentaries nominated this year, it looks basically all C300…

T: It’s huge, yeah. There’s some 5D stuff in there as well, but everything is a Canon image, which is huge for us. It sort of came as a shock when we figured it out; it wasn’t like somebody told us. The Canon company kind of worked it all out, and I mean, we knew some of these productions because we had involvement with them, like Free Solo and RBG, where we had worked with the cinematographers and answered questions or helped them pick out cameras, things like that. So we had relationships directly with some of the projects, and the others not, so to find that we hit the trifecta is a big deal. We’ve sort of been building to this, although that wasn’t an expectation. It’s not the first documentary to be nominated, or even to win, with a Canon product, but it’s the first time we’ve had 5 for 5. I mean, if you go back to 2002 there’s a film called Spellbound by Jeffrey Blitz, about a kid who competed in a spelling bee, which is a documentary shot on an XL-1 to Mini DV tape. I remember when that got nominated, they called and said “we got nominated for an Academy Award” and I thought “from our camera?” I was in Japan actually, we were developing the XL-2 at the time, and we were able to say to the engineer that designed it, you know, “a film shot with that camera you designed just got nominated for an Academy Award,” and I remember the look on his face. He’s long since retired. We’ve always done pretty well in the doc category. I think in 2002 it was more that the image was good enough that it didn’t interfere with the telling of the story, whereas most video cameras up to that point looked like home movies; the audience may not have accepted them as something they should pay $6 to go see. I remember back in those days the big deal was how do you get Mini DV tape transferred to actual film so you could submit it for reviewing at festivals, because, you know, that’s the only way you could do it back then.

K: Oh yeah I forgot about that!

T: It was huge! Trying to figure out how to do the pulldowns, and how do we match up audio, and how to get it to film… luckily that’s not even an issue anymore but that was a big deal back then.

Documentaries are my thing. I mean, I get to work on TV shows, I get to work on feature films, but I’ve always had this love of documentary. So in this particular case, this particular idea that there are five docs that we have something to do with is a huge source of pride for me and the company. I just think documentaries are sort of special. They change things. They don’t just entertain you. It’s not just a fun evening; you walk away smarter than you walked in, and I think that’s kind of important. They can change the direction of things like elections, things like diseases, our perception of the world, so it’s an important category. I’ve worked on docs as well, independents, and they’re a lot more work than people can imagine. They may not look like A Star is Born, but boy, they are a lot of work. There’s a lot of devotion there and never any money.

K: I feel like people like Bourdain probably had a hand in pushing the envelope visually for the doc world. His later stuff looked incredibly cinematic [shout out to Zach Zamboni].

T: Oh sure, but you can also look at that production and go “wow, look how much money they have” when most of these people don’t have a dime, ya know? There’s always a shoestring; there’s always somebody else’s budget. I mean, with Spellbound, the guy who did it has gone on to have a really great career. Television, features, all sorts of stuff, and that was his first thing! The first thing he ever aimed a camera at got an Academy nod. He didn’t win, he lost to Bowling for Columbine, which isn’t bad company to be in, but that was kind of the time when documentaries started to find an audience. Back then you couldn’t see them anywhere; there was no place to watch them. I remember a few years later there was March of the Penguins, and I had a kid at the time who was hugely into penguins, so we had to drive 60 miles to find a theater showing this documentary, but it was finally showing in a theater. Nowadays you flip on Netflix or something and Free Solo and RBG are right there. There’s an easy way to do it, which is important.

K: Oh for sure, but that being said there’s something important about going to the theater and making a day out of it, no? Like, that day’s experience is more memorable than if you just flipped on Netflix.

T: I mean, my kid was still sitting on my lap at this point, now he’s six foot one or something, but yeah, finding a theater in San Diego when we lived in Orange County, making a day of it and having dinner and going to the movie… it was a really cool experience, but I live in California. If it was hard for me to find a screening back then, how did Kansas do it? How did the Midwest do it? Now I think it’s different; we can all see them, and there’s a great audience for it.

K: Definitely, and I think people in the information age are really hungry to learn new things.

T: I think so too; this whole age has changed. We have all this podcasting, all these ways to go deeper than the 6 o’clock news, than the 45-second blurbs about what’s going on in the world before it’s on to the next story. You can dig deep into these things. But now they’re also making money. I mean, you have the “Super Size Me’s” and the Michael Moore films, where now people know the names of documentary filmmakers, you know? They’re famous directors, but they’re doing docs, and when they make a movie, those movies make money (which matters, obviously; it is Show Business).

The winning film last year was Icarus, which was shot on Canon cameras, and the year before that it was the OJ Simpson one, Made in America, which was shot on Canon. So technically, not only do we get all five in the category this year, we’re guaranteed our third year in a row for Best Documentary. As somebody who has to sit there and kind of watch them and pull for a winner, I have obviously always pulled for the Canon project, but this year I’m kind of torn because I can’t eliminate anybody from the competition. I won’t pick a winner, but I’ll tell you, Free Solo has gotten more attention than probably any documentary we’ve been involved with. RBG as well; both of those got real attention. These are people who watch regular movies, but they know these movies too, so it’s a big deal.



K: I still haven’t seen it but my friend was saying the same thing. Must be crazy to be a camera guy on that [Free Solo] project.

T: Well the climber is also shooting most of it, the climbers are all shooting.

K: Oh no kidding?

T: Yeah, there was a lot of prep involved in that. Jimmy Chin is also doing a lot of the shooting, so they’re shooting each other, they’re shooting up and down… there’s an arc to that story where, if you don’t know what’s going on, you wonder if they’re going to survive. Since we know the people made it, that didn’t work for me, because I’ve seen them since and we would have heard about it, but it’s spectacular looking. From a cinematography award standpoint, documentaries don’t get considered, but boy, this one really kind of nailed it. You can get Best Documentary; you can’t get Best Cinematography in a documentary. If you could, I would easily give it to Free Solo.

K: So why do you think everyone picked a Canon over, say, the FS7 or something like that? I know for me, one of the reasons I picked the C100mkII over other offerings back in 2016 was the fact that I could dress it down or build it up as needed, but it essentially did everything I needed right out of the box with minimal accessories.

T: I think that’s the thing now. When we introduced the C300 there weren’t a ton of people playing in this “reasonably priced, high-quality image” market, but now we’ve got Blackmagics and we’ve got, ya know, the FS7… Sony didn’t have a camera back then, and even GoPros are looking pretty good these days. There’s a lot of choices, so to get 5 for 5 now? I mean, 7 years ago it might have been a lot easier. Now they’ve got a lot of choices and a lot of affordable cameras at that price point.

On a theatrical feature film, a lot of times you have more money than time, and on a documentary you have more time than money, and money always plays into the doc world, and then size. I mean, if you’re going to pull yourself up on a rope, you’re not doing it with a 25lb camera on your shoulder up the side of a mountain, but none of that works unless the picture quality is there, unless the image is good, and that’s what I think the difference is. You know, that C100mkII that you have is a great value and also delivers a great image that stands pretty favorably against cameras that cost a lot more money. You also, if you’re in the EF mount, have a lot of lenses you can choose from that won’t break the bank, because of all the still lenses that will go on there. They’re not cinema-designed, so in some cases theatrical cinema needs a little more of a cinema-style lens, but in the doc world they’re perfect, even down to autofocus if necessary. Autofocus has come so far in the last 10 years that it actually really works now and can be used a lot, where you wouldn’t dare do that at any kind of level of film 10 years ago. Now you can do it and worry about getting the shot. And yes, they work when you pull them out of the box. When you buy a Canon camera, the charger’s in there, the battery’s in there… you’ve got to buy a lens, and you’ve got hundreds to choose from, but out of the box a Canon Cinema Camera like that C100 is ready to go. You’ve got an image the same day.

K: I actually remember the first Canon video camera I got was the XL2 because I saw 28 Days Later and thought “surely if they used this camera, I can make something with it.” [laughs]

T: Oh my God, yeah. You know, that was a big turning point too; that was quite a movie to be shot on something like that. It really, really worked when they did that. It was Danny Boyle, I think. That was quite the thing. That sort of sparked this other film Soderbergh did called Full Frontal, and I got to work on that for the entire time, and nobody ever saw the movie, but it had a great cast: Julia Roberts, Brad Pitt, David Duchovny, shot by Steven Soderbergh in like one month… 90% of it was shot on the XLs, but that’s when you still had to figure out how to transfer to film, shooting on PAL for the few extra lines of resolution; it was a whole different thing. But that movie, 28 Days Later, got a lot of attention, and it drew a lot of attention to the camera theatrically; that may have been a big start for us.

K: Did that cause you guys to start working more towards cinema cameras? How many XL features made it over to the C-Series cameras?

T: The idea of interchangeability in a cinema camera came from the XL. When it came out, it really wasn’t a camera that Canon was all that interested in putting out. We had never done an interchangeable-lens video camera, and we didn’t have a line of lenses for it. We ultimately only produced three, maybe four lenses for it. We even built a 3D lens for it, which didn’t go anywhere for other reasons, but there was a lot going on there. I mean, first and foremost we’re kind of a lens company, and to have cameras that you put lenses on is a good business model. One’s the razor, one’s the blade, and the profit’s in the glass. When you buy glass it lasts forever, but you buy a body… how many cameras have we made between the XL1 and now? When you’re a camera company that doesn’t make glass, I’m not really sure how they do that, where the money is, ’cause you spend a fortune to pour a mold and build a new camera and build new features, and then somebody decides to go to 8K and you’ve got old cameras again, but the glass lasts.

K: Why do you think the C300 is the go-to over other C-series cameras?

T: Yeah, it’s still a pretty popular camera. We’ve evolved, obviously, to the C300mkII, which had more to do with 4K becoming a standard. Shooting in HD, even on a documentary, can hinder you in getting it sold, so people are moving over to the Mark II.

I think if you asked most of the cinematographers, they would tell you color science. The color imaging from Canon. That comes from our stills side. They’d say we have spectacular color science; it’s a beautiful image. If you talk to the producer, they’d probably say “well, it’s reasonably priced,” but nowadays not so much, because everybody is reasonably priced! The Blackmagics are reasonable, even the REDs. You still have the outliers like Arri, which are getting top dollar, and deservedly so, but in the doc world you’ve got a lot of choices in that price range that you can afford, you know, the FS7 and so on… It’s the color science, I think. It’s a very natural video look, I think. There’s also one sort of feature, intentional or not, that’s in the Canon design, that didn’t start with the XL but did start with the C300, and that’s the way the sensor is created. They were trying to lose the concept of fixed-pattern noise, which is what you see in a typical video-style camera, like news and things like that. There’s noise in the sensor, and every sensor has it, and it’s always in the same spot; it’s fixed. Whereas on the C300 it’s random. Every camera has a certain level of noise, that’s just part of it, we all see it, and it’s not necessarily a bad thing, but when it’s random it looks more like film grain. I think there’s a psychological edge to that in the Canon sensor; it’s giving you that. The pixels are fired differently every frame, scanned off differently, so in every image you don’t have that noise sitting in the same place, which I think people pick up on, on a subconscious level. It has much more of a film-grain look to it.

K: On the subject of the “Canon look”, I’ve heard from multiple people (usually in forums online) that the look of Canon cameras is “just a boosted red channel.”  Does that sound right?

T: Oh, weird. I think we’re warmer, actually. I actually tend to think Sony goes to the red side, but I mean, our color science evolved out of the stills world. That’s where it’s coming from, and on the stills side we target flesh tones. But when you push the reds it looks really video-y. I would say we’re just the opposite of that: no one color just pops out at you, and our flesh tones are really always pretty accurate. We also do an incredibly good job in the low-light area by design, which is different than other sensors. Our sensors roll off faster -we overexpose faster than most- so you’ve got to be careful on the top end, but we’ve got tons of bottom end. The low end on Canon cameras, in general, is spectacular, and you just tend to want to protect your highlights, be really careful there, and then let the bottom fall where it is on a Canon sensor. I think latitude and color science are the strengths. Our color science is very natural, very much like our still camera designs.

K: Yeah I’ve been able to pull amazing stuff out of the shadows on my C100mkII.

T: We even have a camera called the ME20, which is a specialty camera that goes up to 4.5 million ISO, so when it comes to shooting in the dark, nobody beats us [laughs]. With documentaries, you don’t bring lights. Having good low-light performance in the doc world is critical. You’re in a bar, in a restaurant, you’re in an alley, a war zone… you’re not bringing truck lights, and you’ve got to consider that. I think the low-light performance is great. Now, Free Solo? Lots of sunlight, but there are dark scenes. If you’re interviewing Ruth Bader Ginsburg and you’ve got 20 minutes from setup to leave, maybe opening a blind and turning on a practical light is the way to go.

K: To that end, do you have any practical advice for people shooting on Canon sensors?

T: When it comes to exposure, I mean, obviously, like I said, you protect your highlights and let the bottom fall where it goes, because you’ll be able to pull out or crush the details in the blacks; it’ll be there. Just protect the highlights, don’t overexpose. We roll off very fast. You look at the Arri sensor, it rolls off very slowly, meaning it takes a lot to overexpose an Arri, but the detail on the bottom end isn’t the same. So in our case, you think the reverse of that: you protect your highlights and go with the lows.

Also, these cameras can record in different bit depths relative to resolution. Several of our cameras have the ability to shoot 2K 12-bit as opposed to 4K 10-bit, so you do sacrifice resolution for bit depth, but bit depth is where your color is. So you have to figure out what you’re thinking, you know? Can you get away with 2K for distribution and bring more color into it? The C200 raw feature is spectacular, especially when it comes to color. It isn’t just about resolution; you’re getting all that color information off the sensor when you’re shooting raw. That’s a very practical camera to shoot raw with, when raw isn’t that practical on bigger cameras. The C700 raw is a very expensive proposition, for instance, because you’re shooting to much more expensive media and it’s a lot more data, whereas you can have a sort of compressed raw on the C200, which works really well.
Unlike the days of the XL-1, where you shot everything in what’s called 709, a standard TV look, you’ve got different logs to choose from. So are you shooting Clog2, Clog3… you can create custom logs, which all deal with the amount of color information coming off that sensor, so you do your prep work. The only universal truth is: protect your highlights [laughs]. Everything else is about how you’re going to finish the footage, how you’re going to grade it.
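
[Note: the 2K 12-bit vs 4K 10-bit tradeoff Tim mentions is easy to undersell, because bit depth compounds across the three color channels. A quick back-of-the-napkin Python sketch:]

```python
# Shades per channel double with every extra bit; total colors cube that.
for bits in (8, 10, 12):
    shades = 2 ** bits
    colors = shades ** 3  # R x G x B combinations
    print(f"{bits}-bit: {shades:,} shades/channel, {colors:,} colors")
# 10-bit gives 1,024 shades (~1.07 billion colors); 12-bit gives 4,096
# shades (~68.7 billion colors), 64x the color resolution of 10-bit.
```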

Trust the image, though; it’s there. That sensor’s giving you everything you can possibly use later on. There are a lot of times our people have shot, gotten back to post, and gone “wow, I didn’t even see that cable in the corner,” and there it is in the footage. There was one film we worked on, Amityville: The Awakening, shot by Steven Poster. I remember one shot where he was shooting a lake in the moonlight. He loves to shoot at around 3200 ISO; that’s his look. He shoots way up there and he gets a great look out of it, but that’s where noise starts to get introduced in these cameras. So I’m watching the monitor pretty intently to see if maybe he’s pushing it too far, and I start seeing what I think is noise on the lake, and I was like, you know, “you may have to crush this later, I don’t know what you’ve got to use,” but when we got to post he found out it wasn’t noise. The lake was like glass; that was the reflection of the stars, and all that detail was there. It was pretty crazy. They ended up keeping that shot in the film.

So the sensors are pretty amazing, especially in low light. Trust the low light, watch the top end, and then think about your color: how you’re going to get it there. That’ll determine what kind of log you want to shoot in, how far you’re going to go. And do tests. You don’t have to do it every time; if you become a filmmaker who says “Canon is my camera,” you get to know how the camera will perform so you won’t be surprised, but if you’re coming to Canon for the first time, or you haven’t shot it much, you need to test it like you would any camera you’re not used to.

K: Tim thank you so much for your time and congrats.

T: My pleasure, thanks.

Categories
Adobe Featured Post Production

After Effects wins an Academy Sci-Tech Award

David Simons is a 28-year fellow at Adobe, if you count the first three years at CoSA, where he created After Effects. His creation has dominated the post-production world since its inception and has been the crucible of compositing for thousands of productions, from indie one-offs to the biggest films and television shows around. The Academy Sci-Tech award was given to After Effects and Photoshop this year in recognition of those contributions to film and TV, and I was given the opportunity to speak with David after his win.

Kenny McMillan: How did Egg become After Effects?

David Simons: Egg was the code name, short for Eggroll. Our code names were taken from the menu of an Asian restaurant in Providence, Rhode Island. You may have heard me reference Chicken Bee Boong in the speech? That was also from the same restaurant. That was kind of like a staple for us: we would go to the restaurant (called Apsara) and get like 7 Chicken Bee Boongs and bring them all back, and then we would sit around at the office on these couches eating the food from Apsara, and afterwards we’d all just be lying there, and we called it “Getting Chicken Bee Boong’d.” Those were good times. We were all just right out of college, but After Effects was our last-ditch attempt to save the company.

KENNY: Oh wait really?

DAVID: The company was founded by me and three others, and we were trying to be a hypermedia publisher. This is before the web, so you can think of it as a magazine publisher like National Geographic, but on CD-ROMs, because that was the only way to distribute rich information on a computer back then. To make a long story short, it just failed completely. No one was interested in the content we were trying to produce; we couldn’t even give it away for free. We tried to sell advertising, then we tried to give the advertising away for free, and that didn’t even work. No one even knew what it meant! Like, “what’s a multimedia advertisement?” Nowadays it’s just what the whole web is every day, but it’s hard to start something new when people don’t have a framework for what it is.

But one of the tools that we created for ourselves was called PACo Producer: the PICS Animation Compiler. It was a tool for us, but people found out about it because it did streaming animation from CD-ROM, unlimited length, with synchronized sound. That was something people wanted, and they started coming to us saying “hey, yeah, we don’t want your stupid magazine, but we’ll buy that!” so we just became a technology company. We did that with streaming animation, but then the digital video revolution started taking off, and it so happened that the code that I wrote for this animation system worked for digital video with no changes at all. It was lossless compression, so the files were fairly large, but it looked great and it was fast. We were in a really good position at the beginning of the digital video revolution, and people started using PACo for digital video streaming playback. Also, it was unlimited length, which helped. It was the only system that could do that and keep synchronized sound. And it was cross-platform.

Then Apple came out with QuickTime, and we thought we were just going to get destroyed because we couldn’t compete with Apple, obviously. They’re a much bigger company, and QuickTime’s design was just better than PACo’s design. PACo hadn’t been designed for video; it just happened to work for it. So we figured we had six months of revenue left to survive, and that’s when we started working on After Effects.

K: Great case of right place right time eh?

D: We were very lucky. I liken it to surfing: we just caught a big wave and we’ve been riding it for 26, 27 years now. [laughs]

K: I was actually going to ask something like “did you feel like you were on the edge sort of leading the pack or riding a wave of demand?”

D: I would say the whole time we’ve developed After Effects, it’s been in close collaboration with the customers: sitting down with them, talking to them, observing them. Sometimes they’ll even put us to work with what we call “embedding,” where we go to their facilities and just be a worker bee, working on their projects so that we can really understand the problems they’re trying to solve and what’s happening in the workflow. That’s how we’ve determined the features, really from the beginning.

We started… I think there was a MacWeek article on people using Macintoshes for video post-production, so this must have been ’91 or ’92, and some of those people were Chris and Trish Meyer, who ended up becoming our beta testers, writing books on After Effects, and they’re still writing articles on After Effects… Harry Marks, one of the leading motion graphics creators in the ’80s, way before the desktop revolution… We just cold-called these people, and they were very kind and helped us. So from the beginning it was a collaboration, adding things based on feedback from people who were actually using it. It also helped that we’d been trying to do some of our own productions and were also a customer, but we had no professional video or film experience. For us it was kind of just multimedia and animation stuff.

K: I feel like getting an award like this should have happened earlier though; I can’t think of a more-used effects program.

D: I’ve had a number of people say that it’s long overdue, so I think we probably could have qualified for it earlier, but from the Academy’s point of view it’s all about After Effects’ impact on film specifically. We didn’t create After Effects specifically for film, but it was involved in film right from the beginning, from the early beta testers who were using it for film production, so that’s been an important component the whole time. I believe this is the first time that the Academy has ever had a Motion Graphics category. That obviously helped us, because After Effects is so clearly the leader of the category. Usually a tool that is general-purpose is not an advantage; it’s a disadvantage. The Academy wants to promote tools that are solely used for film, but After Effects has had such an impact on film that they still thought it merited the award, which is great and we appreciate it. At Adobe, film is not our sole thing. We have customers in many different segments, but we did always consider film to be our highest end. If we catered to the film customers, everyone else would aspire to do that kind of work, and in that way our film customers have always been our guiding star.

K: Do you remember any particular impactful moments during the development? I assume Jurassic Park was huge.

D: I’m thinking all the way back to ‘92… Are you a programmer by any chance?

K: Not in the slightest [laughs]

D: We just type code, numbers and letters, into a text editor and then compile and run it and start to play with it, and I remember the point at which the layers became things that you could click on and select and move around. It suddenly really felt like “wow, this is actually a thing that you can interact with,” and that’s really the moment it seemed like it was alive. You could bring layers into a comp and you could move them around. It was very primitive, but that was a big formative moment, because I knew that all that had happened was I typed a few more characters into a text editor, right? The difference between the creating process and the creation is quite stark when you’re programming [laughs]. You’ve used After Effects?

K: All the time.

D: So when you select a layer, and you see little marks around it, and it highlights it as being selected and you can move it… that’s a thing you take for granted every day but that had to be programmed. The first time that happened, yeah it just felt like “oh yeah there’s a thing! There’s a thing called a layer and I can move it!”

That was the very early genesis, but then once we had enough features in there, I don’t remember exactly how far along it was, but one of our beta testers, Randy Cates, had done some, like, “Saturday Night at the Movies” kind of openings for different TV channels, and he put one together in After Effects. It wasn’t for any particular channel, it was just a prototypical test of what you could do, but it was beautiful! It was like a bumper or something, 5 or 10 seconds that would go on before you watch a movie on TV. It was like spinning gears, and shiny things, and everything was moving slowly and coming together, and there was text saying “Saturday Night at the Movies,” and it was really the kind of thing you would see on TV. That really drove home that you could use this tool to make these things that I’d been seeing my whole life without ever really thinking about “how does it get to the screen?” so that was pretty great.

That was more of a TV-type use; Jurassic Park was the first film. It was also used in an IMAX film… I think Search for the Great Sharks, around the same time. Every time there was one of these uses, it was great. The first book that was written for After Effects, I think it was Chris and Trish’s book, that was an amazing milestone too. It’s so long ago I don’t have a lot of milestones recorded, but it’s been really inspiring to see. It’s always about the artwork that other people can create with it that I don’t have the ability to create myself.

K: And then you get to take credit when people crush it!

D: Exactly, yep [laughs].

K: Adobe might send the A-Team after me for this question but After Effects has to be one of the most cracked programs of all time. Has that had a positive effect? In the sense that a lot more people end up using it who probably wouldn’t otherwise due to price and then we possibly see more creativity from more places?

D: I myself used to recreationally crack software when I was in elementary school [laughs]. It’s sort of a fun thing to see if you can get around protections.

K: Oh as a programmer I bet. Educational too.

D: It’s just a challenge, but I do think that all software companies get a benefit from piracy, where the people pirating the software weren’t going to buy it, but now they’re exposed to it, learn it, and then when they actually get a real job for money, they buy it. That’s really the point where I hope they’re going to pay for it: if they’re actually making money with the software. If they’re just learning it, yeah. Adobe has great education discounts, I mean extreme educational discounts now, so hopefully it’s [easier to access legally]. That might not be the official Adobe line, but I don’t know.

K: Oh yeah, I mean, the thing that finally had me able to buy the CS6 Suite was a collegiate discount combo of some kind. Got some version of it for like $200 when Photoshop alone was $700. Now it’s all what, like $20/month? [Note: Well, kinda. Apparently the Creative Cloud “All Apps plan” for a student is US$19.99/mo the first year, and US$29.99/mo after that — regularly US$52.99/mo]

D: Something like that. Hopefully that’s a better way to get access legally. Going to subscription was a switch that I wasn’t sure was going to work out. I was one of the naysayers, like “I don’t really want to rent my software, I want to buy my software!” but since we’ve made that transition (I’ve also made that transition as a consumer)… I mean, I no longer buy CDs; I sign up for Spotify and Pandora and I love it.

I think the best thing about it, besides it being more affordable -ya know, you don’t have to hand over a big chunk of cash all at once- is that our interests are better aligned. Like, in the past, when we were shipping new versions, our biggest competitor was always, by far, the previous version. It wasn’t any of the other programs; it was always the previous version of our own software. So to compete with the previous version we had to add shiny things that would attract attention and get people to upgrade, and a lot of our revenue came from upgrades. The problem with that is that shiny things aren’t always in the best interest of the people who are already using it, even though those are the people we’re catering to. We’re trying to sell an upgrade when really, a lot of times, what people want is just a little something: maybe something we did last cycle that we should just make faster, fix some more bugs, or whatever. There are things that are less exciting from a marketing point of view but that our hardcore customers would rather have us work on, and since we have switched to subscription, it now works in our best interest, and their best interest, for us to make a great product, even if the thing we’re working on isn’t going to increase sales directly, because we’re no longer selling upgrades. Does that make sense?

K: Absolutely. I also feel like the tentpole features probably just come when they’re ready, too, versus waiting for next year’s version. I’d much rather wake up and one day have “render and replace” without much warning than get all hyped on something I probably can’t afford anyway.

D: It's great. Anything that aligns us better with the customers, I like, and I think we can take it even further by taking more advantage of the subscriptions to make sure that the software is really rock solid. We know people are paying, so rather than rush the software and come out with bugs, we can ship something that's even more solid than we've been doing, and I'm working on an initiative to do that now: higher quality, more time crafting something that's great, something solid rather than something shiny.

K: I use After Effects a lot for titling and compositing things in Premiere mostly. Do you see a future where AE and Premiere are simplified down to one program?

D: My view on that is that as you simplify things for a broader audience, you need fewer specialized tools. You could sort of look at Premiere Rush as a combination of After Effects and Premiere, and I don't think that there needs to be an After Effects Rush, for example; After Effects Rush and Premiere Rush would be the same thing. Premiere Rush just has those kinds of features built in because they're simplified, but as you create a pro tool that is very deep, the user interface and model of it (the pieces of things that you're working with, the different models) work better for different things. The Premiere model is just better for editing, so obviously there's overlap with After Effects. You can use After Effects for editing too, but it's clearly not as good as Premiere. I don't think a merged app would be enough for people that live and breathe this stuff every day. Once you want it to be really deep, I think you want a special user interface for that task, so for now I don't see the pro tools merging in that way, but yes, the simpler versions would basically be Premiere Rush. A lot of these things will filter down, but with fewer options. Simpler interfaces. But that often means less creative control. If you're just trying to do it on your phone, you probably want it to be very simple.

K: Well, to book-end the talk, do you have a favorite project codename? Who even comes up with those?

D: I guess the Wikipedia page would show which one was the first, like, fake word rather than just an arbitrary food, but there was FauxFu for the tofu intolerant [laughs]. I think that came from a Simpsons episode, but the team votes every cycle. I'm no longer directly on the After Effects team. I still sit with them, but my team is now working on tech transfer between research and product, which means After Effects and Premiere. More recently it's been Adobe Character Animator, so that's my main focus right now. For those hybrid words the team comes up with, I think "FauxFu" was the first one. The fake words with some sort of food theme that's got some sort of pun, [FauxFu] was probably my favorite. Goatmeal was also good.

K: Actually that’s a better way to end it, Character Animator is amazing. I know Colbert used it a bunch, the Simpsons did that live thing… is it possible that it’ll work in 3D soon? Is it mostly a live thing or is it going to be used more in Film & Television?

D: Yeah, so currently it's just a 2D app, but because the layers can come from Photoshop, and Photoshop can contain any image, you can have 3D-rendered characters. For example, on The Late Show there was an owl character, Nigel the Owl, and that was rendered in 3D, brought in as Photoshop layers, and then brought into Character Animator so Colbert could talk to this 3D-rendered owl in real time. All of the 3D part of it was pre-rendered, and Character Animator is driving the automatic lip sync, moving the body and the arms, and triggering animations that were created in 3D.

Currently, those pieces have to be 2D before they go to the screen, but to answer your question about it going to film or television: it's really targeting two different audiences. One is people who know how to animate but need to animate faster, including live; that's how The Late Show is using it, The Simpsons… on Showtime there's Our Cartoon President. They're not doing it live, but that's the character that spun off from The Late Show and created a whole new show, and they're doing it in about half the time that a regular animated series is produced, with about half the people as well, I think. It's all done in New York too [instead of being sent overseas], because they can afford to do it in-house, whereas in the past it wasn't possible.

Two, it's for people who don't know how to do character animation but do know how to illustrate. We even have the Characterize feature, which allows someone who doesn't know how to illustrate to create an animated puppet of themselves that is stylized. We are pushing it in all those directions, helping the pros do it faster and live. Now, anytime you have a cartoon character, say, they have the ability to show up on a talk show for promotional purposes. I would like it to be a completely normal thing that whenever an animated movie comes out, that character can go on the late-night circuit, and they just use Character Animator as a way to deliver that character talking to audiences live. Similarly, ya know, 6th grade classes doing class projects want to do a video. They can do their video with a character that they make to illustrate their point, and all the things in between. By taking animation and democratizing it, we don't really know where it's going to go. The hope is that it's going to go places that we haven't even thought of, and that's actually already happened with Twitch, which I didn't even know anything about when we started the project. Now people are taking their Character Animator avatars and streaming them while game streaming on Twitch. Rather than stream a video of their face in the corner of the game stream, they stream a little avatar driven by Character Animator, and they can trigger different animations. There's this streamer Scribbleh who triggers his Character Animator avatar's animations with foot pedals while he's playing the game with his hands, and Character Animator is tracking his face. He's also reading the chat and responding to those people, and playing the game. That's just one person who does this for hours every day.

K: That’s actually super smart! I hadn’t thought about that, that’d be a way better way to do it. Stay somewhat anonymous but still play “the character”. You probably have even more creative freedom that way.

D: Yeah, it also gives kids a way to do video blogs without, say, revealing their identity if their parents don't want them to be public on the web, so it gives you a way to hide. It gives you that anonymity, at least of your face.

K: Well thanks for your time again David and congrats on the win, well deserved.

D: Thanks!

Categories
Featured Production

Out in the Field with the GoPro HERO7 Black

Recently, I was sent a GoPro HERO7 Black to take out with me on a two-week trip to Colorado, where I was filming for a ski tour company called Lifestylez/Echo Tours. I've done this trip a number of times, and it always requires an action cam, as I can't bring my big C100 rig certain places (on the mountain, for one). With the HERO7 as my only B-camera, here's what I found out.

Right off the bat, I have to say that my current action cam is a Sony X3000, which I chose for its Optical Image Stabilization, Low Light sensitivity, and (to my eye at the time) a better image than the Hero5. It also has a ¼”x20 hole on the bottom which I liked as there’s plenty of rigging and accessories for that. In any case, the last GoPro I had was the 4, so this was a big step up and my expectations were high.

On the surface, the HERO7 Black is just a HERO6 Black with (good) stabilization, a new (better) menu system, and some new frame rate options. Actually, that’s also true under the hood, with the addition of better audio than previous models as well. In any case, I wanted to test the camera out in various shooting conditions, which included Snowboarding, Concert B-Roll, and “About the Town”, all for use in the various edits that would come of the trip.

First up to test was HyperSmooth, obviously. GoPro was claiming OIS quality, which gave my eyebrow a hefty raise. After that, I wanted to see which formats looked best, how the battery would do in different environments, and how well it held up in the dark.

Well, HyperSmooth is the truth. They're not joking when they say it's gimbal-like; gimbal-esque may be a better description. If you hit a bump the camera's going to shake, but for being handheld you could sure fool some folks. At no point did I miss the Optical Image Stabilization from my Sony, which was huge, because otherwise I was going to have a hard time using any of the footage in my edits. Even when HyperSmooth gives up on the image and lets you jerk it in whatever direction you're (probably) falling, it's in no way "electronic" looking like other EIS options; it just looks like you moved the camera aggressively. In actuality, slow pans are where HyperSmooth kind of fails you: it can hold on to the scene too long and produce a noticeable jerk in the middle of the pan when it realizes you're moving. So it goes.

A trick I learned to get some energetic footage at concerts was to toss the camera to the front row and let them film themselves. They ask me to “take a picture of them” all the time anyway. In those situations, you absolutely need some form of robust stabilization or you’re not getting anything but blur. The HERO7 handled it quite well, with no help from the folks excited to hold it for a bit. Overall, something like the Karma Grip (which also powers the camera) is still worth looking into if you need rock-steady footage, but I was honestly surprised at how good Hypersmooth is. Real value-added there.

In regards to that footage, you can squeeze quite a bit onto the SD card, which I appreciated. On a modest 32GB card you can get a full hour of 4K 60fps at the default bitrate. Not bad, considering I'd only be filming for a couple hours at a time and definitely wouldn't be running it for the length of any given event.
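If you want to sanity-check that math for your own cards and formats, the arithmetic is simple. Here's a quick Python sketch, assuming decimal gigabytes (as cards are marketed) and ignoring audio and container overhead:

# Rough record-time estimate for a memory card at a given video bitrate.

def minutes_of_footage(card_gb: float, bitrate_mbps: float) -> float:
    """Approximate minutes of footage a card can hold."""
    megabits = card_gb * 8000  # 1 decimal GB = 8000 megabits
    return megabits / bitrate_mbps / 60

# 4K 60fps bitrates from the chart further down: 60mbps ProTune off, 78 on
for mbps in (60, 78):
    print(f"32GB @ {mbps}mbps: ~{minutes_of_footage(32, mbps):.0f} min")
# 32GB @ 60mbps: ~71 min
# 32GB @ 78mbps: ~55 min

So the "full hour" holds at the default 60mbps; turn ProTune on and you're closer to 55 minutes.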

The battery is another thing. GoPro doesn't have a history of great battery life in their cameras, and it's not amazing here, but I will say that with judicious use you can get away with only one battery per "event" if needed. As I only had one, it was. For the concerts, using it for B-roll, I found the battery held up fine, just about dying by the end of the two hours I spent shooting. We were inside, it was warm, and while I wasn't running it constantly I would leave it on a lot. If I were to film an entire event in one shot, I'd likely plug in an external battery like you'd use for a cell phone. I would like some way to lock the cable securely into the USB-C port so it doesn't accidentally get jostled out, but I suppose that's a job for the 3D printer.

Out in the extreme cold, it’s just not happening. At a certain point, the camera will just refuse to turn on at all. On warm bluebird days, turning it on and off constantly, we were getting about 15 minutes of footage (not runtime, footage), but on the blizzard days we couldn’t get the thing to run for longer than 15 seconds before shutting down, and then refusing to wake up until it was warmer. Not necessarily surprising, seeing as we just had it in the basic skeleton case and not in a plastic “supersuit” case or something like that, but that’s how it shook out. To be fair, blizzard footage isn’t that great to begin with.

Now the big question is: how does the footage look? Well, if you have a HERO6 Black you already know, as the 6 and 7 are built around the same GP1 processor and sensor, but since I don't, I was quite impressed. I did note that there's a certain way you have to set up the camera to get the image that impressed me, but that's not hard, is it? I've listed my recommendations further down.


In low light, like at the concerts or walking around at night, you're not going to have much luck. At high ISOs the footage becomes incredibly splotchy and loses a lot of detail, but this isn't surprising. In the case of the concerts, the extreme shifts in both lighting intensity and color essentially made the footage unusable; at high ISOs the sensor just couldn't keep up and would blow out very quickly. Low-light performance has become one of the de facto requirements of professional cameras these days, and while the GoPro could be considered somewhat "prosumer", it's still an action cam at heart, so we can't rightfully judge it against higher-tier cameras. ISO 800 is probably the highest you'd want to risk, with 400 being the safety zone and 100 being your "base".

That being said, when properly exposed, the image you can get from the HERO7 Black is not only very high quality but also grades nicely. The daylight-lit snowboarding footage I got in 4K cut together flawlessly with the C100 footage (as noted below, 4K/HEVC/Flat color were the best settings). In other words, if you're going to be using a GoPro for your production work, make sure the scene is lit. As you should with any shoot.

So, after this and further testing, what settings did I learn were best?

HEVC (h265)
// I had to start with this because it's kind of important and not really called out anywhere. With h264 you'll max out at 60mbps, whereas with HEVC you can get all the way up to 78 in certain scenarios, and as HEVC is a more efficient compression algorithm, each mb/s goes a lot further than with h264. Visually there is a difference as well: h264 looks more contrasty/saturated than HEVC, and in most cases less contrast is preferable. As HEVC is a newer format, certain computers might have a hard time playing it back, or may be unable to play it back natively at all (see the transcode sketch after the chart for one workaround). Note that if you shoot high speed or 4K 60fps you're using HEVC regardless, but if you've got an older or under-powered computer and a choice, stick to h264. Here's a chart of some likely resolutions/framerates you may use, with their bitrates listed with ProTune off and on, in both h264 and HEVC (h265).

MODE    FPS    h265**    h264**
4K      60     60/78     NA
4K      24     45/50     60/60
2.7K    120*   60/78     NA
2.7K    60     45/60     60/60
2.7K    24     45/60     45/60
1080    240*   60/78     NA
1080    120    45/60     60/60
1080    60     45/60     60/60
1080    24     24/36     30/45


(*) No Stabilization
(**) Bitrate with ProTune Off/On in mbps
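As mentioned above, if your computer chokes on native HEVC playback, one workaround (besides giving up the extra bitrate and shooting h264) is to transcode everything to an edit-friendly codec before you cut. Below is a minimal Python sketch that drives ffmpeg; it assumes ffmpeg is installed and on your PATH, and the GoPro/ProRes folder names are hypothetical. ProRes 422 HQ is my pick here, not anything GoPro-specific:

import subprocess
from pathlib import Path

SRC = Path("GoPro")    # hypothetical folder of camera originals
DST = Path("ProRes")
DST.mkdir(exist_ok=True)

# Transcode each HEVC .MP4 clip to ProRes 422 HQ in a QuickTime container.
for clip in sorted(SRC.glob("*.MP4")):
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "3",  # profile 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",                     # uncompressed audio
        str(DST / (clip.stem + ".mov")),
    ], check=True)

The resulting files are far larger than the HEVC originals, but they'll scrub smoothly in just about any NLE.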


ProTune On
// Yes. Always. ProTune will make sure your footage is recorded at your chosen settings with the highest bitrate available for your format; in some cases that's the difference between 45mbps and 60mbps! As far as sensitivity, I'd set both the ISO Minimum and Maximum to 100, raising the max to 400 in darker situations. You can reach past 1600 into 3200, but it's going to look very, very blotchy even at 24fps 1/48. I'd also potentially advocate for a -0.5 EV Comp to protect the highlights, as they blow out quickly. If you're in dynamic lighting situations, go ahead and lock the shutter speed so the GoPro isn't constantly trying to equalize. Move the sharpness to Low and sharpen yourself in post. Also, be sure to set your White Balance to the appropriate setting for the scene every time! If you're not sure, or it's going to change a lot, set it to 4000K or 4500K. Flat color is recommended.

4K/60/16:9
// Aside from being the highest resolution, it's also encoded at the highest bitrate when ProTune is turned on. You'll get the best bitrate at 16:9 60fps or 4:3 30/24fps, but if you go with h264 or any other frame rate you're getting 60mbps anyway. In that case, go ahead and shoot whatever frame rate you'd like, but again I'd recommend shooting flat and forcing HEVC recording if your computer can jive with it, as it'll give you the nicest and most gradable image. You'll have to shoot Wide FOV at 4K, but you can correct this in post easily (about 0.408 in DaVinci's Distortion Correction OFX, and around -23 in Premiere's Lens Distortion effect, as an aside; see the sketch below for a command-line route). If you need Linear FOV, head to 2.7K.
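If you'd rather bake that correction in outside your NLE, ffmpeg's lenscorrection filter can do a comparable barrel-distortion fix. A sketch below, again driven from Python; note that the k1/k2 coefficients are illustrative guesses, not a calibrated profile for the GoPro's Wide FOV, so you'd want to dial them in by eye against something with straight lines:

import subprocess

# Correct barrel distortion with ffmpeg's lenscorrection filter.
# cx/cy set the optical center; k1/k2 are the distortion coefficients.
# These k values are rough placeholders, NOT a calibrated GoPro profile.
subprocess.run([
    "ffmpeg", "-i", "gopro_wide.mp4",
    "-vf", "lenscorrection=cx=0.5:cy=0.5:k1=-0.2:k2=-0.02",
    "-c:a", "copy",
    "gopro_corrected.mp4",
], check=True)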

Another recommendation would be to get some ND filters. You can either get a cage with a threaded filter mount on the front (usually 52mm or so), or you can simply remove the replaceable front glass and put a purpose-built ND filter in its place, keeping everything nice and small. As we can't control the f-stop of the camera, the ND filters allow us to keep the shutter speed low. Essentially, this results in a more "cinematic" quality of motion, as opposed to a choppier "action" type of look.
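To put rough numbers on that: the 180-degree shutter rule puts your shutter at 1/(2 × fps), so 1/48 at 24fps, and since the aperture is fixed, every stop of ND cuts the light in half instead. A quick Python calculator, where the bright-daylight auto shutter is an assumed figure for illustration:

import math

def shutter_180(fps: float) -> float:
    """Shutter speed in seconds for a 180-degree shutter at a given frame rate."""
    return 1 / (2 * fps)

def nd_stops_needed(auto_shutter_s: float, target_shutter_s: float) -> float:
    """Stops of ND needed to move from the camera's auto shutter to the target."""
    return math.log2(target_shutter_s / auto_shutter_s)

target = shutter_180(24)  # 1/48s
auto = 1 / 960            # assumed auto exposure on a bright ski slope
print(f"ND needed: ~{nd_stops_needed(auto, target):.1f} stops")
# ND needed: ~4.3 stops

That lands between an ND16 (4 stops) and an ND32 (5 stops); with a fixed aperture, the ISO picks up whatever slack remains.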

From there you should be good to go and your footage will be looking its best, especially after a nice color grade. You can see example footage from the various scenarios mentioned in this article in the video embedded below.

Overall, I'd say GoPro has finally made the camera it knew it could make with the HERO7 Black. As I mentioned, the low-light performance and battery life could use some work, but it's finally a camera I don't really have any complaints about, and they've even added some unexpected features that I like (such as the removable front glass and the ability to run it off external power). I obviously haven't used every action camera out there, but right now I feel like you'd be hard-pressed to find a better one than the HERO7 Black.
