Always On

My funny friend Neven once wisely said, “The best camera is the one you have with you and which has an f/1.4 normal prime.” He was using humor, of course, to illustrate that while the adage about the convenience of an available camera trumping the quality of an unavailable one may be logical, it still sucks to be caught in a moment without the best equipment you own.

That said, my favorite camera is always with me, because it’s the one on my iPhone 3GS. I say this not only because of the new focus and exposure controls that can elevate my pictures from snapshots to photographs, and not only because it shoots video and I feel vindicated in having resisted buying a Flip, but now for a new reason: I have a theory about an unannounced feature in the iPhone 3GS camera that, if true, would be astoundingly slick, because it would mean the difference between missing the moment and getting the picture you intended to get. It’s a theory that’s easy to test, but one I haven’t seen corroborated by Apple or anyone else.

Maybe you remember using the pre-3GS iPhone Camera app for the first time. Maybe, like me, you were confused and put off by the gigantic, dorky shutter and the seemingly unending animation of a clunky iris closing and opening every time you took a shot. And you’ve known the heartbreak of missing the perfect picture because the damn thing took so long to work. It’s something you got used to over time; even consumer point-and-shoots have some latency in their mechanisms. It’s a fact of life in the age of digital photography.

But a couple days ago, I snapped a photo of my dog sleeping. Diggy is a light sleeper, but he looked so friggin’ cute there was no way I wasn’t going to at least try to show him off to my flickr friends. So I take my iPhone from my pocket, I frame the shot, and just as I’m tapping the shutter button, of course he wakes up and jumps off the couch. Of course he does, because he hates me and he hates my flickr friends.

But then something strange happened. I looked at the image I’d taken, and there he was, sleeping, as cute as I remembered from those seconds ago. Hence my theory: something smart is happening here. My hypothesis: from the moment you launch the Camera app, frame data is not only streaming to the viewfinder but also being cached to memory at full resolution, much like a TiVo buffering a live broadcast. Where previous iPhone hardware and software suffered latency due to processing limitations, the iPhone 3GS has overcome them, closing the gap between intention and result by saving the streamed frame from a moment before the shutter was released. In essence, the iPhone is constantly storing the picture you want before you even take it.
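If my theory is right, the mechanism underneath wouldn’t be exotic: it’s a ring buffer. Here’s a minimal sketch of the idea in Swift, strictly my own toy model and not Apple’s implementation; the Frame type, the buffer capacity, and the timestamps are all invented for illustration. The capture loop writes every frame into a fixed-size buffer, overwriting the oldest, and the shutter handler pulls out the newest frame captured at or before the instant you tapped.

```swift
import Foundation

// A toy stand-in for one full-resolution frame off the sensor.
// (Hypothetical type; a real pipeline would hold pixel buffers, not Data.)
struct Frame {
    let timestamp: TimeInterval
    let pixels: Data
}

// A fixed-size ring buffer: the capture loop writes every frame here,
// continuously overwriting the oldest, TiVo-style.
final class FrameRingBuffer {
    private var slots: [Frame?]
    private var head = 0

    init(capacity: Int) {
        slots = Array(repeating: nil, count: capacity)
    }

    // Called for every frame while the Camera app is open.
    func append(_ frame: Frame) {
        slots[head] = frame
        head = (head + 1) % slots.count
    }

    // Called on shutter release: return the newest frame captured at or
    // before the moment of the tap, i.e., the picture you intended.
    func frame(atOrBefore tap: TimeInterval) -> Frame? {
        slots.compactMap { $0 }
            .filter { $0.timestamp <= tap }
            .max { $0.timestamp < $1.timestamp }
    }
}

// Usage: the capture loop appends frames continuously; the shutter
// handler asks for the frame from just before the tap landed.
let buffer = FrameRingBuffer(capacity: 30)
buffer.append(Frame(timestamp: Date().timeIntervalSince1970, pixels: Data()))
let shot = buffer.frame(atOrBefore: Date().timeIntervalSince1970)
```

The appeal of this design is that the expensive work, capturing and keeping full-resolution frames, happens continuously while the app is open, so the tap itself only has to do a cheap lookup.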

Doesn’t it make sense that the same hardware that makes video recording possible on the iPhone 3GS (albeit currently capturing only at VGA, 640×480) would be put to use for such a feature?

A few years ago, Panasonic introduced its industry-standard prosumer HD camera, the HVX200, with a neato, attention-grabbing feature called Prerec Mode, which saved video to a three-second memory buffer even before the start button was pressed, so as never to miss a moment. If my theory is correct, Apple has co-opted this pro feature for itself, taking an already-capable camera and making it that much better, without so much as a press release or a feature trademark.
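The video version of the trick is easy to picture, too. Another hedged sketch in Swift, again my own toy model rather than Panasonic’s firmware: keep a rolling window of the trailing few seconds of frames, and when the operator presses record, flush that window to the head of the recording before live frames start appending.

```swift
import Foundation

// Toy pre-record buffer: holds only the trailing `window` seconds of
// frames, reusing the hypothetical Frame type from the sketch above.
final class PreRecordBuffer {
    private var frames: [Frame] = []
    private let window: TimeInterval

    init(seconds: TimeInterval) {
        window = seconds
    }

    // Called for every frame while the camera is on but not yet recording.
    func append(_ frame: Frame) {
        frames.append(frame)
        // Evict anything that has fallen out of the rolling window.
        let cutoff = frame.timestamp - window
        frames.removeAll { $0.timestamp < cutoff }
    }

    // Called when the start button is pressed: everything buffered so far
    // becomes the opening seconds of the recording.
    func startRecording() -> [Frame] {
        defer { frames.removeAll() }
        return frames
    }
}
```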

Go ahead and test it yourself. Take a picture of a stopwatch if you’ve got time on your hands, or just train your camera on an object and quickly pan away as you release the shutter. If the frame is being pulled from a buffer, you should see the object you were focused on, free of the motion blur the pan would otherwise have caused. If you have an iPhone 3G to compare against, you’ll notice a sizable discrepancy between the two.

There’s no doubt there’s something smart going on, and I guess I just wanted the opportunity to say to the engineers at Apple: I see what you did there, and I think it’s awesome.