Efficient footage filenames when using an external recorder

If you haven’t noticed, most of the tidbits I put on this here blawg are born-ed of my trial & error, my woeful miseries, and my occasional tiny logistical victories over the ruthless gremlins of Murphy’s Law. Wait, that makes it sound like this article is going to be a lot more exciting than it actually will be — you’re all “dayumm this gonna have gremlins an’ shit in it, like some Ghostbusters 3 shii ri here.”

Nope, boring. Just filename suggestions for recording footage to an external recorder like a Sound Devices PIX 240i.

Recently I was the DP on a feature shot on a Sony F3 paired with one. I assume other external recorders allow for flexible filenaming with automated take number advancement, just like the PIX 240i does — which is great. Because of this, I recommended to the producers that we shoot without slating. It can save significant time on a production, especially one like ours with a language barrier, chaotic locations, and a tight schedule… along with all the other factors of a low budget production.

So here’s the formula I recommend for the filename structure:
(project acronym)_(scene number)_(date)__(take number)

If your film’s called “The Buried Dirtball”, you’re shooting scene #26 on July 10th, and it’s the fifth take, then itsa gunna looka lika thissa:
bd_s026_07.10__005.mov
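
And if you’re ever stuck renaming files by hand to match this scheme, the zero-padding is trivial to script. Here’s a lil’ Python sketch of the formula; the function and its arguments are just mine for illustration, not anything built into a recorder:

    def footage_filename(acronym, scene, month, day, take, ext="mov"):
        # (project acronym)_(scene number)_(date)__(take number)
        return f"{acronym}_s{scene:03d}_{month:02d}.{day:02d}__{take:03d}.{ext}"

    print(footage_filename("bd", 26, 7, 10, 5))  # -> bd_s026_07.10__005.mov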

Though there’s one caveat… I don’t have a 240i in front of me, so I’m not 100% certain you can get all that stuff into the filename. But as I recall, you can manipulate the naming pretty heavily.

Including the date may seem cumbersome and maybe even unnecessary at first, but to me it is now essential. We didn’t use it when shooting the aforementioned feature, and that’s exactly how I learned that we should have. You may end up shooting shots from a particular scene many days apart from one another, so trying to figure out what the last numbered take was for that scene can be time consuming, if even possible at all when on location. But you probably can remember if it was earlier that day. So anyways, this can help prevent you from having shots with the same filename, which can be disastrous if one file happens to overwrite the other when placed into the same folder.

Note the three-digit numerals for the scene & take numbers. That’s so if you put all your .mov files into the same folder, they’ll stack in proper order, making it easy for you to find stuffz.

Also, don’t use spaces instead of underscores… yes, I know it’s not 1997 anymore, but if for some reason the spaces cause some kind of problem for like your sound designer or colorist, then you’ll get to send a text message to yourself with nothing but a saddyface emoticon in it. Also, I’ll get to laugh loudly in your face. And I’ll probably make sure I eat something stinky immediately before. Like dog feces. That’ll teach you a lesson. Haha, in your face, bro. Jaykay, that’s just simulated schadenfreude– I would probably send you an upbeat, encouraging emoji to make you feel better, like that iPhone one of the twins in cat suits dancing.

The extra underscore before the take number is just there to visually scan better.

Also, you can use s000 (scene #0) for random unassigned stuff like 2nd unit exteriors, etc. I prefer this over other methods since those shots will all be easy to find in one place.

Yeah, so anyways, based on my experiences on productions big and small, this format should work as a catch-all, with minimal time, energy, or headache during shooting and post. You can add more stuff to the filenames I guess, but going into the recorder’s interface to add shot numbers can be a slow and confusing hassle, especially if you’re not a fully crewed production with a dedicated script supervisor and 2nd AC. I would recommend the sorting of footage by shot be done in editing software, where it’s pretty simple & painless.

Using Super 16 lenses on the Blackmagic Cinema Camera: It works for a 1080 HD crop

There are Super 16 lenses that have been sitting around collecting dust for the last 5-10 years, and you can usually scoop some up at bargain prices. That’s because, aside from a minority of Red users, no one’s been using them, due to HD sucking all the wind from the format’s sails like some sort of gigantic cloud vampire. No, I’m not drunk. Yeah so anyways, that all might change now that the Blackmagic Cinema Camera is starting to ship in quantity.

SUPER 16 ZOOM LENSES
The BMCC sensor size is a good chunk bigger than Super 16 (abbreviated “s16” from now on… I’m too lazy to type it out the bazillion more times I’ll need to reference it). But it’s close enough that some s16 zoom lenses will cover the sensor at the higher end of their focal length ranges, probably the upper half to three-fourths of the zoom range, depending on the particular lens. That’s because the image circle of a zoom must be big enough to fill a format’s picture area at its shortest focal length, ie. zoomed all the way out. And then, by the nature of a zoom lens, that image circle gets larger as you zoom in. As the image circle gets larger, it can fill a larger picture area/sensor. Here is a diagram featuring completely random and arbitrary imagery:


But here’s one thing to keep in mind: some still photography zoom lenses are designed to maintain the same small image circle throughout their zoom range, so the same may apply to some s16 zooms as well.

    ***A LIL’ UPDATE***
    Cinematographer John Brawley has posted some frames from his tests of the following s16 lenses…
    Angenieux 11.5-138mm T2.3
    Canon 6.6-66mm T2.7
    Canon 8-64 T2.4

    The results don’t look too great for full BMCC sensor coverage. The Angenieux maintains a small image circle, and the Canons show noticeable chromatic aberration outside the s16 picture area, even when they cover the sensor. You can see all the DNGs via his dropbox link in this Blackmagic Forum thread.

    A few paragraphs down, I’ve added examples of how these lenses can work for the BMCC via a 1080 center extract.
    ***

SUPER 16 PRIME LENSES
I have no idea if they’ll fill the BMCC sensor. Do long focal length primes have larger image circles than their short focal length counterparts? Maybe. Image circle sizes vary from manufacturer to manufacturer, and product line to product line. So the only way to know is to test that specific lens.

HOW TO USE ANY AND ALL SUPER 16 LENSES ON THE BMCC
This might be a big deal for some people. And it’s so simple: just crop into the 1920 x 1080 center pixels of the 2432 x 1366 frame of BMCC footage. Actually, you’ll only have to do that for footage that needs it. Like I mentioned earlier, some lenses will cover the whole sensor. But as a boilerplate, unilateral policy, it’ll work for all s16 lenses, regardless of focal length. Here’s why…


You can click here or on the picture to see a full res version of this BMCC 2432 x 1366 frame diagram. Notice how the Super 16mm picture area is just big enough to cover the extracted 1920 x 1080 HD frame from the full BMCC frame? Boom, there ya go. So it’s all good with using s16 lenses.

    ***UPDATE***
    Here are some examples of how the 1080 center extract would work with the three aforementioned s16 lenses…



    You can click on them to see ‘em as full res 10% quality jpegs… which means they’re just for examining the image circle, and not the image quality of the camera. You can get to the original DNGs via this Blackmagic Forum thread. Mucho thanks to John Brawley for letting me use these frames.
    ***

I haven’t tested this myself because I’ve yet to get my filthy hands on a BMCC, but numbers don’t lie. There are a few things to keep in mind if you choose to do a 1080 center extract:

  1. You’ll have to shoot in 2.5K RAW mode. Obviously the downsampled 1920 x 1080 ProRes or DNxHD modes won’t work for this.
  2. You’ll need to mark the 1920 x 1080 center on your viewfinder. This is actually not that big of a deal. You just get some clear touch screen protective cover, and then make your markings via trial and error… ie. set up a tripod & chart/whatever, import footage & perform the extract, then mark the cover. I do this exact process for certain kinds of VFX shots all the time, it’s easy.
  3. You’ll have to do the 1080 center extract in post, but it’s hecka easy (see the sketch after this list). Do I even really need to explain this? I will, just in case. In whatever software, make your timeline/composition/whatever 1920 x 1080. Set your BMCC footage to 100% scale/size, so it only shows the center. If a particular shot doesn’t need the 1080 extract, then change its scale/size to 79%.
  4. It’s obviously not going to look as good as the full sensor 2400 x 1350 image that’s been scaled down to 1920 x 1080, because of Bayer filtering mumbo jumbo that you can google. Whether or not the optical resolution is good enough is up to you… but if your basis of comparison is a DSLR/MILC, then it likely will be. From the samples I’ve seen, a 1080 center extraction looks pretty great. And if you do see de-Bayer artifacts, try extracting from a slightly larger area than 1920 x 1080, if your lens’ image circle allows for it.
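
If you’d rather script the extract than eyeball it in your NLE, here’s a minimal sketch of the crop math in Python, assuming Pillow is installed and that you’ve already converted the DNGs into something it can open. The function name and defaults are mine, just to illustrate the geometry:

    from PIL import Image

    def center_extract(path, out_w=1920, out_h=1080):
        frame = Image.open(path)   # e.g. a full 2432 x 1366 BMCC frame
        w, h = frame.size
        left = (w - out_w) // 2    # (2432 - 1920) / 2 = 256
        top = (h - out_h) // 2     # (1366 - 1080) / 2 = 143
        return frame.crop((left, top, left + out_w, top + out_h))

The 79% figure from step 3 is just 1920 / 2432 ≈ 0.79. And the 720 finish mentioned a couple paragraphs down is the same move with different numbers: center_extract(path, 1578, 888), then downscale the result to 1280 x 720.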


If you’re thinking it’s just weird and strange to capture image areas that are going to be discarded or unseen by the viewing audience, just keep in mind that it’s standard procedure with most non-anamorphic lensed 35mm shooting formats. Also, cropping in to 1080 on the BMCC is very similar, in principle, to the crop resolution modes of the Red cameras. Plus, really, your work is 99% likely to ultimately be seen in 1080 HD anyway. Is the 1080 center extraction worth all the extra work? Probably only if you just really wanna shoot on a fast cine zoom that’s affordable.

Also, if you’re content with shooting for a 720 HD finish, you can use standard 16mm format lenses… by cropping to 1578 x 888 and then downscaling to 1280 x 720.


The simplest way to add production value to your film: Avoid white walls & wardrobe

To me there are four specific things that differentiate professional films from their student/amateur counterparts (other than marketing budgets, ha)…

  1. Sophisticated lighting
  2. Extensive foley
  3. A lack of white-walled interiors
  4. A lack of white wardrobe

Numbers 1 & 2 usually cost some mucho money. But 3 & 4 don’t, so you should totally jump all up on those for your projects.

Go to the Apple trailers page and watch a bunch of ‘em. Count how many times you see white-walled interiors versus colored ones. Usually the more fantastic & stylish the film, the darker the color of the walls… and even in “realistic” films, the walls that “feel” white are actually light gray or beige or light blue. That’s because professionals know to avoid putting white on-camera: it limits your lighting options, since you’d have to walk on proverbial eggshells to keep it from blowing out overexposed. And that’s working with film and its 13-14 stops of dynamic range, as opposed to your (assumed) DSLR/MILC/video camera’s 7-10 usable stops. Which means you should be extra concerned about avoiding white walls.

So you can either choose locations with that in mind, or you can score extra go-getter points by painting your location’s walls. House paint is pretty cheap, especially considering the visual difference it makes for your film. If just hearing the concept of spending a day laboring away to make your bedroom’s walls mauve or pistachio sounds like absolute insanity to you, then –REALTALK– you may not be cut out to be a filmmaker. No-budget filmmaking is 60% moving heavy objects. Wait, unless you have a big trust fund… then just hire someone else to do it and soldier on.

Yeah so anyways, get those white walls out of your film and it’ll not only look better, it’ll convey the mood/tone better (omg the scary room’s walls are bLoOoOood rEd), and will give you more lighting options without fear of blowing out the walls to yucky clipped white. The lighting thing is especially true if you have limited light modifying gear, which is probably your case. Unless the aforementioned trust fund thing applies.

And needless to say, that stuff applies, like, tenfold to white wardrobe. Because limiting how contrasty or varied you can light your actors is like shooting yourself in the foot. This is something I’ve seen occur in a lot of student and amateur films, and what’s most hand-to-forehead inducing about it is how easily avoidable it is. So don’t put your actors in white clothes in like direct sunlight, unless they’re supposed to be visiting from the afterlife. As with walls, “white” wardrobe in professional films is usually actually a little darker. Often a preexisting piece of white wardrobe will be lightly dyed, though I have no direct experience with that, so I can’t offer any advice or suggestions on doing it yourself.

Checkity these details from the last of the above framegrabs. Note that the towel on Sally Field’s shoulder is blown out white, with the same RGB values as the blown out practical light units.

That means a white dish towel will blow out on a $230 million superhero studio film shot on Red Epic… so a white dress shirt will likely do the same on your project shot on a DSLR. I assume that since it was a minor prop, they didn’t bother to dye it, or maybe they didn’t realize it was going to sit on her shoulder or whatevs. But there’s no way they’d have a piece of wardrobe doing that.

Also worth noting is that some folks feel the same way about pure black wardrobe, which can appear on-camera as pure formless black in some lighting situations. Though this is less perceptually disruptive than a blown-out white shirt, in my opinion.

I’ve shot projects where white walls and shirts were beyond my decision-making control, and I had to work around them. All of those projects would’ve turned out better-looking had the white been replaced with something else.


The ultimate no-budget filmmaker lenses for mirrorless cameras: Nikon IX rises from the grave

Listen up, I feel like this is kinda a big deal. So you’re dirt poor. Or a student. Or a dirt poor student. Or a dirt student (agriculture major). And you wanna put an earnest effort into seeing if you can cut it as a filmmaker. So you buy a GH2. Good choice. And you go the ultra-frugal route and get only the body without the 14-42mm kit lens. In my opinion, it’s totally worth the extra $100– but you’re scraping… and maybe you’re more into longer focal lengths anyway.

So here’s what ya do about lenses: get on the ol’ eBayz and search for “nikon ix”.

These lil’ dudes are dead lenses. We’re talkin’ in a sealed coffin dressed in their mom’s favorite outfits. They deaddddd. They were made for Nikon’s Pronea cameras, which were part of APS (Advanced Photo System), a failed consumer photo film format thingee from the 90s. All I really remember about APS was that I think they had Bill Cosby in their commercials. Yeah so anyways, these are lenses made for a now-extinct film format, and they can’t work on present Nikon or Canon DSLRs because they have rear protrusions that would hit the reflex mirrors.

Well now we gunn git our George Romero on, bcuz these now be ZOMBIE lenses. Their putrefying little lens hands are reachin’ up out of the dirt in front of their little tombstones. Except instead of wanting to eat our brains, they wanna help you get yr filmmakin’ on, deep discount style– because they can work great on a GH2 or other Micro Four Thirds cameras, and likely on NEX/E-mount and NX cameras as well.

Once you apply all my advicey thangs listed later in this article, you’ll find yourself with some nice video-friendly lenses: lightweight, with good optics, a usable focus throw, a hard stop focus ring, and a clickless aperture… at an incredibly affordable price. You can have two filmmaking-ready zoom lenses spanning 20mm to 180mm for as little as $60.

They may not be Nikon’s finest lenses, but they’re still Nikkors made with mid/late 1990s lens technology. I really don’t think you’ll find this kind of quality at these price points. Here’s your shopping list…

>> Continue to the full article

Planning for reflections for highly efficient VFX

So this is about using additional “reflection takes” from a shoot to greatly aid in a composite.

Here’s a scenario: you shoot an actor on greenscreen who, in the final composite, will be walking past a wall made of brushed aluminum, or polished granite, or even dark plastic. You should shoot an additional take with a mirrored panel reflecting the actor as they mimic the “real” take, but with the blocking altered so the actor never overlaps the reflection (this gives you much more leeway for manipulation in post). Then it’s relatively quick and painless to composite the reflection onto the wall or whatever, better melding the elements together visually while also giving an organic look to CGI objects without spending eleventeen thousand hours on shaders and rendering in 3D software.

Here are some more specific guidelines…

  1. Plan for objects that have less than perfectly crisp reflections. This is because your “reflection take” will likely not perfectly match the real take of the actor’s performance… not to mention that your reflection panel probably wasn’t a high quality mirror to begin with.
  2. Shoot the actor’s performance on greenscreen like normal, with no reflection panel (thus allowing for a simple composite without having to roto out the panel).
  3. Then set up a mirror or shiny board where the reflective object will be in the composite. Reblock the actor doing their performance so that you get an unobscured shot of the reflection. This will likely require some spatial cheating, but whatevs. It’ll help when compositing if the background in the reflection is either greenscreen or black (duh), but this usually isn’t super essential. I’d suggest having the actor watch the “good” take a few times so they can study & mimic their movements and timing.
  4. In composite, finalize the actor’s position and timing in the shot. Then throw in the reflection take and tweak it to match the reflective properties of the particular surface. You’ll likely be using the “Add” or “Overlay” composite modes (see the sketch after this list). Btw, you should be working in linear blend mode whenever you’re doing photoreal-ish compositing or using motion blur; in After Effects, that’s the “Blend Colors Using 1.0 Gamma” checkbox under File > Project Settings.
  5. Geometric mega-accuracy of the reflection’s placement and angle isn’t nearly as important in selling the shot as having the kinetic properties of the reflection match the actor… meaning the timing of an arm swing or head turn needs to be pretty close, so bust out the retiming/speed change plugin/tool if it doesn’t.
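
In case the linear blend thing sounds like voodoo, here’s roughly what an “Add” composite in linear light boils down to, sketched in Python with numpy. The flat 2.2 gamma is a crude stand-in for real color management, and the amount knob is something I made up for dialing in the surface’s reflectivity:

    import numpy as np

    GAMMA = 2.2  # crude stand-in for proper color management

    def add_reflection(base, reflection, amount=0.4):
        # 'Add' blend in linear light; inputs are float RGB arrays in 0-1
        base_lin = np.power(base, GAMMA)        # decode to linear light
        refl_lin = np.power(reflection, GAMMA)
        out_lin = base_lin + refl_lin * amount  # the actual 'Add'
        return np.power(np.clip(out_lin, 0.0, 1.0), 1.0 / GAMMA)

Do the addition in gamma-encoded space instead and the reflection’s highlights will sit on the surface all wrong, which is exactly why that 1.0 gamma checkbox matters.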


Back in ol’ 2007, this technique really saved my skin when a project I was directing/DPing/compositing became logistically stressed with revisions, when the signoff process grew more complicated than anyone expected. Nobody’s fault, just the nature of the beast. To put it lightly, we found ourselves in a super crunch, both in terms of time and budget. If any students are reading this, note that this kinda stuff happens all the time, and probably the primary difference between a professional and an amateur is that the pro doesn’t dwell on obstacles, but just focuses on figuring out a way to deal with them as best as possible within the limitations, without ripping their own hair out and going insano. Then, theoretically, you eventually matriculate to projects with big juicy budgets and long healthy schedules that are super fun with lots of creative latitude. I’ll let you know when I ever hear of one of those. The clients who give those out must hang out with Santa Claus and ride unicorns or something.

Anyways, we had to reassess a sequence that contained a CGI refrigerator. It originally was going to animate into existence in a fancy way that can only be done in 3D software, and the shot was to be 3D matchmoved. But due to the aforementioned schedule/budget factors, we were hard-pressed just to get the shot done with the fridge static, un-animated. Because of the “reflection take” I was able to use just a still image render of the fridge and make the shot look “good enough”. This saved us several hours, since it nixed the matchmove as well as the render time for the fridge. Here’s a lil’ video clip showing a breakdown where you can see how big of a difference the reflection take makes…

Is it optically accurate? Not at all. But it “feels” right to the average viewer. Is it invisible realism? Nope. But it sells the shot with a minimum amount of time and resources.


Using a car interior as an audio booth

If you need to record clean ADR or voiceover, etc, and don’t have access to an audio booth, try recording inside a car (one that’s not running, duh)… I guess they’re actually designed to have great acoustics. The only problem is that the actor can’t stand up, which often makes a big difference in delivery, especially if it’s an untrained actor… so maybe just always use really really short actors, I dunno. I can’t remember where I got this tip from… either a book, a website, or an audio person.

As I’m writing this, I remember a related experience from before I had heard of that. There’s a scene in Atomic Nuclear Prom 3000 with two actors exchanging dialogue in the back seat of a parked car, which I shot from the front seat with a Sony VX1000.

As we were setting up mics, I was astonished to realize that the audio from the camera’s built-in stereo mic actually sounded better… which is the first and only time that has ever happened. But I guess it makes sense: that sedan had nice acoustics and we were in the middle of a quiet wooded area. Also, the actors were moving around a lot, and keeping a directional mic properly aimed at them within such an enclosed space would’ve been difficult.

You can also get one of these relatively inexpensive thingees… I have one and it works great, especially for the price.


Color-saturated fill light for a better composite

Here’s a useful technique that I came up with for shooting certain kinds of greenscreen stuff…

  1. Make the fill light a saturated color (that’s different from the screen and the subject’s clothes etc, duh).
  2. In post, isolate that particular color and then tweak it to match the hue of whatever the natural bounce light would be in whatever environment you’re compositing them into.
  3. Though keep in mind that it’s always better to just use a colored fill light that matches your particular background plate. This technique is more for cases where you have to use one setup for several varied backgrounds.

For example, you can use a saturated magenta for the fill, and then use the secondary color correction in Color Finesse to tweak just the saturated magenta to appear bluish if they’re supposed to be standing on the belly of a giant, naked sleeping smurf or whatever. Another great thing about this technique is that you can control not only the hue and saturation of the fill color, but also its luminance, and quite easily at that. Of course, this technique will only be useful for certain shots, depending on the lighting and blocking.
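
If you’re curious what that secondary correction is doing under the hood, here’s a crude sketch in Python/numpy: select only the pixels that are both near the fill’s hue and reasonably saturated, then shift them. A real secondary feathers the selection instead of using a hard mask like this, and all the threshold numbers are made up for illustration:

    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    def retint_fill(img, src_hue=5/6, dst_hue=2/3, tol=0.06,
                    sat_scale=1.0, val_scale=1.0):
        # shift only the saturated magenta fill (hue ~5/6) toward
        # blue (hue ~2/3); img is a float RGB array in 0-1
        hsv = rgb_to_hsv(img)
        h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
        hue_dist = np.abs(((h - src_hue + 0.5) % 1.0) - 0.5)  # wraps around
        mask = (hue_dist < tol) & (s > 0.25)  # only the saturated fill
        hsv[..., 0] = np.where(mask, dst_hue, h)
        hsv[..., 1] = np.where(mask, np.clip(s * sat_scale, 0, 1), s)
        hsv[..., 2] = np.where(mask, np.clip(v * val_scale, 0, 1), v)
        return hsv_to_rgb(hsv)

The sat_scale and val_scale knobs are there because, like I said, you can tweak the fill’s saturation and luminance as well as its hue.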

Here’s a straightforward example of this technique from a shoot I was DP on. Because of last minute schedule changes, I had to shoot numerous bits of Snoop Dogg in just one setup, even though they took place in a multitude of settings. We could’ve shot him in neutral color and depended on conventional color correction to help color him into the background, but that usually looks pretty rookie & unsavory in my opinion, since any color adjustments meant to affect the “bounce/ambient” light will also affect the subject’s entire color, even the areas being illuminated by the key light. So with a magenta fill, we had some tweakable “ambient” light to match the backgrounds in post.

You can see from the original footage on the right that not only was I able to easily change the hue & saturation of the magenta fill, but also the luminance. More examples from the exact same lighting setup below…


 
Also, note that in these examples (which I composited & color graded myself) the tweaking of the magenta fill was done with just the “Hue and Saturation” effect in After Effects due to what was available on that particular workstation. Theoretically, Color Finesse’s secondary color correction would’ve given even better results. Please send me any examples if you use this technique for a shot of someone standing on Smurfette’s bare midriff. Or whatever else.


The key to good handheld: The horizon line… particularly with CMOS sensors

Okay first, just to get some quick terminology clarified, there are three axes along which you can change the orientation of your camera framing:

  1. Pan (turning to look left and right)
  2. Tilt (looking up and down) *As a side note, some people use the word “pan” to describe a tilt. Don’t do that, it makes you sound like a rookie cookie. Or maybe just a producer, ha.
  3. Roll (making the horizon line rotate, as if laying your head on your own shoulder)

Pan & tilt are no problemo as part of anyone’s camera dynamics repertoire. But there is a huge caveat with roll. That’s because when us humans experience this in real life, our inner ear’s vestibular system does a primo job of conveying to our brains that it’s our head’s orientation that’s changing and not the planet earth’s.

So when a camera rolls, that offsetting reaction in our vestibular system isn’t occurring, and in its absence our brain’s initial primordial reaction is that the onscreen world is changing in orientation… which is usually not what the film intends to convey. Except when you’re faking an earthquake or the 1960s USS Enterprise being hit by a Klingon laser beam. I jest, but those are actually perfect examples of this phenomenon: it’s the lack of vestibular reaction that allows those illusions to occur. Who like totally loves that I used the word “jest”? We totally got some Shakespeare In The Park On The Internet On A Blog goin’ on.


Attention Star Trek nerds: that’s a 3 degree roll of turbulence inflicted on the ship, from the episode “The Corbomite Maneuver”… ya know, in case you wanna like run calculations on how that would’ve affected the warp drive’s dilithium crystals or whatever. In my defense, I had to hit Wikipedia to find a nice trekkie vocabulary word like “dilithium crystal”. I MEAN IT I’M TELLING THE TRUTH.

So anyway, if you’re trying to give a handheld POV chase shot an energetic, actiony feel, then put in a lot of frenetic pan and tilt, but try to minimize the roll. In a nutshell, the mantra of handheld should be “keep the horizon line straight”.

Though there may be times that you want to induce disorientation, confusion and even nausea in your audience, and that is when you do use roll.

And there’s also an additional factor with roll when shooting with CMOS sensors. Good handheld camera craftsmanship in general is of über-importance due to the sensors’ rolling shutter. To avoid “jello-cam” you have to be foremost concerned with fast, jittery panning… but lots of fast back & forth roll can also give your footage that jibble jabble gloopy glopple jiggle, as pictured in the framegrabs below:



So here’s a simple solution for minimizing roll: increase the radius of your support points from the camera, ie. maximize the distance between your hands. That way, when your left hand shakes half an inch, it’ll only create a 2 degree change in roll… whereas if your hands were one third that distance apart, the roll would be about 6 degrees (there’s a lil’ math sketch at the end of this section). Here are a couple ways to apply this to your handheld shots:

  1. If you’re using a rail system, put your handles as far apart as logistically possible.

  2. You can use a cheap, light tripod with its legs together, or a monopod, with one hand at the baseplate and the other as low down as possible.

For a little more help, you can also use a bubble level on your hot shoe and adjust accordingly during the shot.
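
For the arithmetically curious, the geometry behind those numbers is just basic trig; the 14.3 inch hand spacing below is back-solved from my 2 degree example:

    import math

    def roll_degrees(shake_inches, hand_spacing_inches):
        # roll induced when one support point bobbles up or down
        return math.degrees(math.atan2(shake_inches, hand_spacing_inches))

    print(roll_degrees(0.5, 14.3))      # ~2 degrees with hands ~14" apart
    print(roll_degrees(0.5, 14.3 / 3))  # ~6 degrees at a third the spacing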


Making a crummy “emergency fader/variable ND filter” from RealD glasses

Okay, this is really only useful for some kind of weird desperate situation where you’re willing to sacrifice image quality for shallow depth of field. Or maybe you’re stranded on a desert island with your camera, but without your fader/variable ND filter. Annnnnnd you’re making a short film instead of trying to survive and/or be rescued. Annnnnnd the tropical foliage in the background is really visually distracting from your lead actress (a volleyball with a face painted on it?), and you’d really love to shoot at f/2.8 instead of f/22 (and not resort to using a crazy fast shutter speed– maybe it’s a romcom, not a WWII action drama). Annnnnd you just happen to have pocketed a pair of those RealD glasses from when you wasted $15 on Clash of the Titans 3D. This is the longest pseudo-comedic preface ever. But it’s my way of saying “Yeah, I know this is actually more just an anecdotal Mr. Wizard’s World / MacGyver thingee rather than something you’ll ever actually need to use”.

Yeah so anyway, do this:

  1. Pop out the flimsy “lenses”… actually, we’ll call them filters.
  2. Flip over the right filter.
  3. Put the right filter in front of the left filter.
  4. Rotate it 90 degrees.
  5. Enjoy all the loss of optical sharpness, chromatic aberration, and color shifts those cheap polarizers are sure to cause.
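
For the physics nerds: a pair of linear polarizers passes light proportional to cos² of the angle between them (Malus’s law), and as I understand it, the flip-the-right-filter business gets the glasses’ linear polarizing layers working against each other. Here’s a lil’ sketch of how many stops of ND you’d theoretically get… emphasis on theoretically, since these are cheap circular polarizers and real life will be messier:

    import math

    def nd_stops(angle_deg):
        # Malus's law: I = I0 * cos^2(theta), ignoring the polarizers'
        # own base transmission loss
        transmission = math.cos(math.radians(angle_deg)) ** 2
        return -math.log2(transmission)

    for a in (0, 45, 80, 89):
        print(a, round(nd_stops(a), 1))  # 0.0, 1.0, 5.1, 11.7 stops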


That is all. Also, I’m totally dying to know if the volleyball face lady falls in love and lives happily ever after or not.


Animating off the (temporal) grid

Say you’re using After Effects to animate five balls flying into a wall and bouncing off. Normally, you’d have to keyframe the instant of impact on an exact frame before changing the direction of the ball’s position path to bounce off… which wouldn’t really have any kind of qualitative disadvantage if it was just one ball. But when there’s five of them, and all of them end up having the moment of impact occur exactly on a frame, it might feel “not chaotic & organic enough.” Or just plain unrealistic if it’s a visual effects shot… if you were to shoot video of five baseballs being thrown against a brick wall, only maybe one or two would have the precise moment of impact occur exactly during a video frame. For most of them, there’d be a slur of motion blur denoting that the impact occurred either slightly before or after an exact frame. If the balls were composited in and all had their moments of impact synched exactly to frames, then it can kinda read subconsciously as synthetic to the viewer.

This principle – that action does not naturally unfold in perfect sync with a particular framerate – can easily be applied to animation, if your project has need for such a thing. I totally just scored major snob-dawg points with that last sentence.

  1. For sake of example, say you’re using After Effects. Put your animated layers in a precomp that has triple the framerate of your main comp… ie. if your main comp is 29.97 fps, then make your precomp 89.91 fps.
  2. Animate the layers moving/colliding/bouncing/etc in the tripled (89.91) framerate precomp.
  3. In your main comp, you should see that the moments of change/impact/etc are sometimes occurring at a moment in time that’s between discrete frames.
  4. Moving any keyframes in the precomp by one or two frames allows you to put those moments of impact on or off “the temporal grid” of the main comp (see the sketch after this list).
  5. Be sure to use the phrase “temporal grid” around your coworkers to look pretentious. Err, I mean classy. Classy and sophisticated. You know, like how James Bond is.
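
Here’s the arithmetic of step 4 as a tiny Python sketch: a keyframe at precomp frame k lands at main comp frame k/3, so nudging it by one precomp frame moves the impact a third of a frame on or off the grid:

    MULT = 3  # the precomp framerate multiplier (29.97 fps -> 89.91 fps)

    for k in (30, 31, 32, 33):  # precomp keyframe numbers
        main_frame = k / MULT
        tag = "on grid" if k % MULT == 0 else "off grid"
        print(f"precomp frame {k} -> main comp frame {main_frame:.2f} ({tag})")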

Of course, you can up the framerate of the precomp to whatever you like if you need more chaos/realism in the timing spread of the impacts.

I first came up with this while working on the project pictured below, where I needed smaller units of time to distribute the keyframed squares that comprise the flame movement. Since then, I’ve used it on VFX-ish compositing shots for the reasons noted above.