
“We knew that bringing a high-quality depth of field to video would be magnitudes more challenging,” says Drance. “Unlike photos, video is designed to move as the person filming, including hand shake. And that meant we would need even higher-quality depth data so Cinematic Mode could work across subjects, people, pets and objects, and we needed that depth data continuously to keep up with every frame. Rendering these autofocus changes in real time is a heavy computational workload.”

The A15 Bionic and Neural Engine are heavily used in Cinematic Mode, especially given that Apple wanted to encode it in Dolby Vision HDR as well. They also didn’t want to sacrifice live preview - something that most Portrait Mode competitors took years to ship after Apple introduced it.
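
What Drance describes boils down to a synchronized stream: every video frame arrives with a matching depth map. Apple has not published Cinematic Mode’s actual pipeline, but its public AVFoundation APIs can deliver exactly this kind of paired stream. Here is a minimal sketch using AVCaptureDataOutputSynchronizer to pair a video output with a depth output; the class name DepthStreamReader and the render(frame:depthMap:) placeholder are illustrative, not Apple’s implementation:

```swift
import AVFoundation

// A sketch of continuous per-frame depth capture using public AVFoundation
// APIs. Cinematic Mode's real pipeline is not public; this only shows the
// kind of synchronized video + depth stream Drance describes.
final class DepthStreamReader: NSObject, AVCaptureDataOutputSynchronizerDelegate {
    private let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let depthOutput = AVCaptureDepthDataOutput()
    private var synchronizer: AVCaptureDataOutputSynchronizer?

    func configure() throws {
        session.beginConfiguration()
        defer { session.commitConfiguration() }

        // Depth requires a depth-capable camera (dual or TrueDepth) and an
        // active format that supports depth delivery.
        guard let camera = AVCaptureDevice.default(.builtInDualWideCamera,
                                                   for: .video, position: .back) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        guard session.canAddInput(input) else { return }
        session.addInput(input)

        guard session.canAddOutput(videoOutput), session.canAddOutput(depthOutput) else { return }
        session.addOutput(videoOutput)
        session.addOutput(depthOutput)
        depthOutput.isFilteringEnabled = true // temporally smoothed depth maps

        // The synchronizer delivers video and depth as matched pairs,
        // one callback per frame.
        let sync = AVCaptureDataOutputSynchronizer(dataOutputs: [videoOutput, depthOutput])
        sync.setDelegate(self, queue: DispatchQueue(label: "depth.sync"))
        synchronizer = sync
    }

    func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                                didOutput collection: AVCaptureSynchronizedDataCollection) {
        guard
            let depth = (collection.synchronizedData(for: depthOutput)
                as? AVCaptureSynchronizedDepthData)?.depthData,
            let video = collection.synchronizedData(for: videoOutput)
                as? AVCaptureSynchronizedSampleBufferData
        else { return }
        // depth.depthDataMap is a CVPixelBuffer of per-pixel depth aligned to
        // this exact frame; a renderer would use it to decide how strongly to
        // blur each pixel relative to the focus plane.
        render(frame: video.sampleBuffer, depthMap: depth.depthDataMap)
    }

    private func render(frame: CMSampleBuffer, depthMap: CVPixelBuffer) {
        // Placeholder for a per-frame depth-of-field render pass.
    }
}
```

The heavy lift Drance points to happens in that per-frame callback: at 30 frames per second, the renderer has roughly 33 milliseconds to read the depth map and synthesize the blur before the next pair arrives, which is why the A15 Bionic and Neural Engine matter here.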

But the concept of Cinematic Mode didn’t start with the feature itself, says Manzari. In fact, he says, it’s typically the opposite inside of this design team at Apple. Drance says that before development began, Apple’s design team spent time researching cinematography techniques for realistic focus transitions and optical characteristics.

“When you look at the design process,” says Manzari, “we begin with a deep reverence and respect for image and filmmaking through history. We were just curious - what is it about filmmaking that’s been timeless? And that kind of leads down this interesting road, and then we started to learn more and talk more … with people across the company that can help us solve these problems.”
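
One of those researched techniques, the rack focus - a deliberate, eased pull of the focus plane from one subject to another - is easy to sketch numerically. This toy Swift example is my own illustration; the smoothstep easing curve and the focusDistance helper are assumptions, not Apple’s actual transition:

```swift
import Foundation

// A toy rack-focus ramp. Smoothstep easing starts and ends the pull gently,
// the way a human focus puller would, instead of snapping linearly.
// This curve is an assumption for illustration, not Apple's.
func focusDistance(from start: Double, to end: Double,
                   frame: Int, totalFrames: Int) -> Double {
    let t = min(max(Double(frame) / Double(totalFrames), 0), 1)
    let eased = t * t * (3 - 2 * t) // smoothstep: zero velocity at both ends
    return start + (end - start) * eased
}

// Example: pull focus from a subject at 0.8 m to one at 2.5 m over 30 frames.
for frame in 0...30 {
    let d = focusDistance(from: 0.8, to: 2.5, frame: frame, totalFrames: 30)
    print("frame \(frame): focus at \(String(format: "%.2f", d)) m")
}
```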
