Computational Photography

Around 1986, digital photography became ready for the professional market. I still remember that the first digital cameras had a small sensor (smaller than a 35 mm film frame), were very expensive, and had poor resolution compared to film.

iPhone 11 Pro: three cameras on the rear side

Nowadays, everybody has at least one digital camera in their smartphone, and sometimes even three or more. This means they are affordable and, furthermore, the resolution is very good. Digital photography is all around us (just look at social media). The serious-hobbyist market for full-frame cameras with 35 mm sensors is also growing substantially.

With the rise of digital photography, many software tools were developed to take over the darkroom activities. With tools like Adobe Lightroom and Photoshop, one can manipulate images as a whole or down to individual pixels. It is really amazing what these software tools can do nowadays.

Most of these software tools run on a desktop computer, although more limited versions are already available on smartphones and tablets. Some image-processing operations, however, require substantial compute power, which is why they still need a desktop computer.

Small depth of field: blurred background

With a camera on which you can adjust the aperture of the lens, it is possible to create a blurred background (and foreground). This is especially nice when you want to focus attention on a subject, as in portrait photography (on the right). The optics of the lens create a small in-focus zone (the depth of field), and the rest is out of focus (blurred). This effect is strongest with a fast telephoto lens. The picture on the right was taken with a Nikkor 135 mm lens at aperture f/2.8.
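To get a feel for how shallow this in-focus zone is, the standard hyperfocal-distance formulas can be evaluated in a few lines. This is only an illustrative sketch: the 3 m subject distance and the 0.03 mm circle of confusion (a common full-frame value) are assumptions of mine, not values from the text.

```python
def depth_of_field(focal_mm, aperture, subject_mm, coc_mm=0.03):
    """Total depth of field in mm, using the hyperfocal-distance approximation.

    coc_mm is the circle of confusion; 0.03 mm is a commonly used
    full-frame value (an assumption here, not a universal constant).
    """
    # Hyperfocal distance: beyond this focus distance, everything to
    # infinity is acceptably sharp.
    h = focal_mm * focal_mm / (aperture * coc_mm) + focal_mm
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return far - near

# A 135 mm lens at f/2.8, subject at 3 m: roughly 80 mm of depth of field,
# so only the subject is sharp.
print(f"135 mm f/2.8: {depth_of_field(135, 2.8, 3000):.0f} mm")
# A 26 mm wide-angle at the same aperture and distance: several metres,
# which is why wide-angle lenses keep almost everything in focus.
print(f"26 mm f/2.8:  {depth_of_field(26, 2.8, 3000):.0f} mm")
```

The contrast between the two results illustrates the point made above: the telephoto lens isolates the subject optically, while a wide-angle lens cannot.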

With a smartphone this blurred background cannot be achieved optically. Lenses in smartphones are basically wide-angle lenses, so the in-focus zone is large. This is where computational photography comes in. The idea is as follows: the software detects the contour of the subject that should stay in focus (image processing), and then blurs the rest, the background. So the essential part is detecting the contour, and it has to be done very fast with the limited compute power of a smartphone.
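Once a subject mask is available, the blur-and-composite step can be sketched in a few lines. In a real phone the mask comes from fast segmentation (often helped by depth estimation from multiple cameras); here I simply assume a binary mask is given, so this is an illustrative sketch rather than how any particular phone does it.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_portrait_blur(image, subject_mask, blur_sigma=6.0, feather_sigma=2.0):
    """Blur everything outside the subject mask, keeping the subject sharp.

    image: float array of shape (H, W) or (H, W, 3);
    subject_mask: boolean array of shape (H, W), True on the subject.
    """
    # Blur the whole frame; this stands in for the out-of-focus rendering.
    sigma = (blur_sigma, blur_sigma) + (0,) * (image.ndim - 2)
    blurred = gaussian_filter(image, sigma=sigma)
    # Feather the mask edge so the sharp-to-blurred transition is smooth;
    # a hard mask would produce a visible cut-out border.
    alpha = gaussian_filter(subject_mask.astype(float), sigma=feather_sigma)
    if image.ndim == 3:
        alpha = alpha[..., None]
    # Composite: sharp subject over blurred background.
    return alpha * image + (1.0 - alpha) * blurred
```

The feathering is exactly where the glass-bird example below goes wrong: when the detected contour misses part of the subject (the beak), that part ends up on the blurred side of the composite.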

The beak of the bird has wrongly been blurred

As you can see, the image-processing software has a hard time detecting the contour of this bird made of glass. The software fails to detect the beak and blurs it. Also, some parts of the bird on the left are sharp while others are blurred. Although the shortcomings of the current software are obvious, I expect substantial improvements in the near future; just look at what has already been achieved with high-dynamic-range images and panorama images. My expectation is that in five years' time smartphones will be able to take pictures as good as those from some of the more advanced DSLR cameras.