


How Technology Will Kill the Focus Puller

Julian Mitchell

Professionals may have largely shunned it, but for the rest of us, video autofocus is in a golden age of innovation.

One of the more esoteric careers in movie making is the focus puller. It's a tough job: judging the most minute degrees of focus fall-off at high frame rates and on large-format sensors, while using focus to serve the DOP's narrative design.

As resolutions increase and formats like IMAX rise in popularity, the pressure on focus pullers will only increase.

But most people don't fully appreciate what the focus puller does, only paying attention when focus is missed. Nobody wants an out-of-focus shot, and in a worst-case scenario, a soft take can cost a focus puller the job.

While the cinematographer is often praised for high-octane, hand-held action work, we should be equally applauding the focus puller.


Evolving the Focus Puller

There have been attempts to replace or supplement the focus puller with technology in the last few years. In 2014, Andra showed a new focus system at the NAB Show.

The idea was to pre-visualize your focus by generating a magnetic field, usually giving you a 24ft x 16ft area of motion capture coverage. This enabled extremely accurate positional data: actors wore sensors within a volume, and the camera read the data.

The positional data was streamed to the focus-pulling motors on the camera to automatically pull focus. You would place sensors on whatever you wanted to focus on, whether an actor or a prop, basically anything that moves. You can also set up several sensors within the system beforehand.

Now you prepare your focus plan. Add offsets to the sensors so you can finely place your focus. You can let the system pull focus for you while you decide when and how fast to move between subjects, or you can use the data to pull manually.
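
To make the mechanics concrete, here's a minimal Python sketch of the kind of arithmetic such a system performs: convert a streamed 3D tag position into a lens focus distance, then ride a rack between two tagged subjects. The coordinates, offsets, and function names are illustrative assumptions, not Andra's actual implementation.

```python
import math

# Hypothetical sketch of an Andra-style workflow: turn streamed 3D tag
# positions into focus distances for a lens motor. All values are made up.

def focus_distance(camera_pos, subject_pos, offset=0.0):
    """Distance from the camera to the tracked tag (metres), plus a
    user-defined offset (e.g. tag on the chest, focus on the eyes)."""
    dx, dy, dz = (s - c for s, c in zip(subject_pos, camera_pos))
    return math.sqrt(dx * dx + dy * dy + dz * dz) + offset

def interpolate_rack(dist_a, dist_b, t):
    """Blend between two subject distances as the rack progresses
    (t runs from 0.0 at subject A to 1.0 at subject B)."""
    return dist_a + (dist_b - dist_a) * t

camera = (0.0, 1.5, 0.0)                  # camera position inside the volume
actor = (4.2, 1.6, 3.1)                   # tag worn by the actor
prop = (2.0, 0.9, 6.5)                    # tag placed on a prop

d_actor = focus_distance(camera, actor, offset=-0.15)   # pull to the eyes
d_prop = focus_distance(camera, prop)

# Focus distance half-way through a rack from the actor to the prop:
print(round(interpolate_rack(d_actor, d_prop, 0.5), 2))
```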

Redrock Micro showed their Hālo focusing product at the same NAB Show. This system uses the same technology the automobile industry uses for collision detection and avoidance.

The Hālo Explorer creates a real-time scene map, combining pinpoint accuracy with up to 180 degrees of view. The AI identifies all your subjects (people and objects) and tracks their distance and location.

The Hālo autofocus system from Redrock Micro. Image via Redrock Micro.

The user interface shows a bird’s-eye view of all subjects and enables anyone to tap-to-focus or drag to follow focus with visual, audible, and haptic feedback.

Hālo becomes the technician, handling the focus so operators can concentrate on the creative performance.
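
As a rough illustration of that tap-to-focus logic, here's a small hypothetical sketch: given a bird's-eye map of tracked subjects, a tap simply selects the nearest subject and hands its distance to the focus motor. The data structures are assumptions, not Redrock Micro's actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch of tap-to-focus on a bird's-eye scene map, loosely in
# the spirit of the Hālo interface; the structures here are illustrative only.

@dataclass
class Subject:
    name: str
    x: float    # lateral position in the scene map (metres)
    z: float    # distance from the camera (metres)

def pick_subject(subjects, tap_x, tap_z):
    """Return the tracked subject closest to where the operator tapped."""
    return min(subjects, key=lambda s: (s.x - tap_x) ** 2 + (s.z - tap_z) ** 2)

scene = [
    Subject("actor A", -0.8, 3.2),
    Subject("actor B", 1.1, 5.7),
    Subject("car", 0.2, 12.0),
]

chosen = pick_subject(scene, tap_x=1.0, tap_z=6.0)
print(f"Drive focus to {chosen.name} at {chosen.z} m")
```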

A third system comes from CMotion, though the technology originated with an Austrian company called Qinematiq. It's now called the Cvision focus assist, and it uses depth mapping to secure its focus points.

The system is based on two depth-mapping cameras, which mount on the front of your capture camera and give you a 62˚ view. The pair produces a stereoscopic depth map, measuring a point cloud of 250,000 data points in real time at 30fps.

In effect, you can see the depth of field through the tablet that comes with the system.

This system isn't designed to replace a focus puller. Rather, it's assistive technology for the focus puller. Different colors show what's in focus, what's in front of the focal plane, and what's behind it.

So, in effect, Cvision gives you autofocus: you measure from point to point to track subjects and set pinpointed focus marks.
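
For the curious, depth from a stereo pair comes down to one standard relation: depth = focal length × baseline ÷ disparity. A minimal sketch of that relation follows; the focal length and baseline values are assumptions, not Cvision's actual specifications.

```python
# Standard stereo relation a depth-mapping camera pair relies on:
# depth = focal_length * baseline / disparity. Values below are illustrative.

def depth_from_disparity(disparity_px, focal_px=1400.0, baseline_m=0.12):
    """Depth in metres for one matched point; disparity is in pixels."""
    if disparity_px <= 0:
        return float("inf")     # no measurable parallax: effectively at infinity
    return focal_px * baseline_m / disparity_px

for d in (80.0, 40.0, 10.0):
    print(f"disparity {d:5.1f} px  ->  {depth_from_disparity(d):6.2f} m")
```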


Autofocus for the Content Creator

Autofocus for the content creator is arguably much simpler and cheaper than the professional tools mentioned above, especially now that mirrorless cameras have arrived.

DSLRs used to have their autofocus sensors placed under the mirror. Now, however, autofocus detection happens on the imaging sensor itself, providing assistance that was unheard of a few years ago.

The use of controlled volumes or depth information isn’t practical for free-roaming cameras, but it makes perfect sense for controlled environments like sets. We’ll see more of that kind of immersion and point cloud use.

Computational photography will also flourish with smartphones like the iPhone 13, which uses real-time mask creation to blur backgrounds. At the moment, you have to be careful which focal length you use for it to work as designed, but this will only improve with new iPhone models.
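
Conceptually, those portrait-style modes come down to blurring the whole frame and then compositing the sharp frame back in wherever a subject mask is set. Here's a toy sketch of that idea; it's purely illustrative and not Apple's actual pipeline.

```python
import numpy as np

# Toy sketch of mask-based background blur: blur everything, then keep the
# masked subject sharp. Illustration only, not Apple's Cinematic mode.

def blur(frame, passes=10):
    """Very crude blur: repeatedly average each pixel with its 4 neighbours."""
    out = frame.astype(float)
    for _ in range(passes):
        out = (out
               + np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0)
               + np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 5.0
    return out

def portrait_composite(frame, mask, passes=10):
    """Keep the masked subject sharp, soften everything else."""
    return mask * frame + (1.0 - mask) * blur(frame, passes)

frame = np.random.rand(64, 64)          # stand-in for a grayscale frame
mask = np.zeros_like(frame)
mask[20:44, 24:40] = 1.0                # stand-in for the subject segmentation mask

result = portrait_composite(frame, mask)
print(result.shape)                     # (64, 64): same frame, blurred background
```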

Meanwhile, companies like Canon, Sony, and Nikon are providing camera autofocus aids that now rank among the top five essential features to have. The AF battles are a market within a market for these companies.

Canon borrowed its sensor-based Dual Pixel phase-detection autofocus from its professional line, and it now features in the majority of its new cameras, right down to the Rebel T7i.

Pros, especially those who work solo, loved Canon's innovation. It let them trust their autofocus and concentrate more on framing and composition.

Each of the CMOS sensor's pixels has two photodiodes that can operate either independently of each other or together, with the pair sitting under the pixel's microlens.

When light passes through the microlens and hits the diodes, the processor compares the two signals to determine focus. Once focus is achieved, the signals are combined to record the image.

So, the pixels have a dual role that's unique in the industry, although Panasonic has its own approach called DFD (Depth from Defocus). On Canon's cameras, roughly 80% of the sensor can operate in dual pixel mode.
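
Here's a toy illustration of the phase-detection principle at work: when the image is out of focus, the "left" and "right" photodiode sub-images are shifted copies of one another, and cross-correlating them reveals how far, and in which direction, the lens needs to move. This is a conceptual sketch only, not Canon's actual processing.

```python
import numpy as np

# Conceptual sketch of dual-pixel phase detection: estimate the shift between
# the left and right photodiode sub-images via cross-correlation.

x = np.arange(200, dtype=float)

def sub_image(center):
    """A bright highlight as seen by one set of photodiodes."""
    return np.exp(-0.5 * ((x - center) / 4.0) ** 2)

def estimate_shift(a, b):
    """How many pixels signal `a` is shifted (to the right) relative to `b`,
    found by locating the peak of their cross-correlation."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

defocus_separation = 6                      # pixels between the two sub-images
left_img = sub_image(100 - defocus_separation / 2)
right_img = sub_image(100 + defocus_separation / 2)

print(estimate_shift(right_img, left_img))  # ~6; a value of 0 would mean in focus
```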

Sony seems to have the best reputation for AF at the moment. Real-Time Eye AF is fully supported, depending on what you're shooting: human, animal, and bird eyes are all covered. The camera's AF menu settings run so deep that you can import setups shared by well-known influencers.

My favorite feature is the AF transition speed setting. This has a touch of cinema and narrative storytelling about it as you can rack focus across seven speed settings—you can see how you could build or enhance a scene using that feature.
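
As a hypothetical sketch of what such a transition-speed setting boils down to: rather than snapping to the new subject, the camera eases the focus distance toward it over a number of frames tied to the chosen speed. The frame counts below are invented for illustration, not Sony's actual values.

```python
# Hypothetical mapping of a 7-step AF transition speed setting to a focus ramp.
# 1 = slowest rack, 7 = fastest. Frame counts are made up for illustration.

FRAMES_PER_SPEED = {1: 90, 2: 72, 3: 56, 4: 42, 5: 30, 6: 18, 7: 8}

def rack_focus(start_m, end_m, speed):
    """Yield one focus distance per frame, easing from start to end."""
    frames = FRAMES_PER_SPEED[speed]
    for i in range(frames + 1):
        t = i / frames
        eased = 3 * t ** 2 - 2 * t ** 3      # smoothstep: gentle in, gentle out
        yield start_m + (end_m - start_m) * eased

# Rack from 1.5 m to 6.0 m at a slow, cinematic speed setting:
for dist in list(rack_focus(1.5, 6.0, speed=2))[::18]:
    print(f"{dist:.2f} m")
```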

Nikon, meanwhile, has brought new machine learning to the Z9's 3D tracking, which recognizes humans, animals, and vehicles, including planes, bikes, and cars.

Nikon also has customizable focus area control, which lets you map a scene for dramatic effect. For example, you can restrict tracking to the lower part of the frame so that only subjects there are held in focus.



The Autofocus Future

AF is a huge deal for content creators. We've only scratched the surface of the multitude of features available, and brands are racing to develop even more flavors of the tech.

Computational photography will no doubt throw up some new ideas as processors get more powerful. We may even see some depth mapping appear as it does on the new iPhones with LiDAR.

For instance, when Canon’s R3 was launched last year, we saw Eye Control AF for the first time in a mirrorless camera—as in, you can focus on what you’re looking at.

Basically, sensors follow your eyeball and calculate the focus within seconds. Apparently, it works well, but it relies on a short calibration for the scene you're shooting. With six calibration memory presets available, you can see how dedicated shooters might welcome such technology.
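
As a rough idea of what that calibration step involves, here's a hypothetical sketch: fit a simple mapping from raw gaze-sensor readings to viewfinder coordinates using a few known targets, then use it to turn live readings into an AF point. This is not Canon's actual method, and the numbers are made up.

```python
import numpy as np

# Hypothetical eye-control calibration: least-squares fit of an affine map
# from raw gaze readings to normalised viewfinder coordinates.

raw = np.array([[0.12, 0.30], [0.80, 0.28], [0.15, 0.75], [0.78, 0.72], [0.45, 0.50]])
targets = np.array([[0.10, 0.10], [0.90, 0.10], [0.10, 0.90], [0.90, 0.90], [0.50, 0.50]])

# Solve targets ~= [raw, 1] @ A for a 3x2 affine transform A.
A, *_ = np.linalg.lstsq(np.hstack([raw, np.ones((len(raw), 1))]), targets, rcond=None)

def gaze_to_af_point(reading):
    """Map one raw gaze reading to normalised viewfinder coordinates."""
    return np.append(reading, 1.0) @ A

print(gaze_to_af_point([0.5, 0.5]))   # roughly the centre of the frame
```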

Whatever happens, autofocus innovation is unstoppable, and each brand will have to keep up. This is great news for the content creator, indie filmmaker, and prosumer.

However, with all this innovation, the question remains: will the focus puller become a role of the past for large productions, or will the tech be so niche and difficult to use that the position will be needed for many years to come?



Cover image by Marina Veder
