Photography rebooted
From time to time, a new technology comes along. Someone finds a different and better [cheaper or easier or faster or smaller …] way to do something. This results in a major shake-up of a market and perhaps a whole industry. It is not uncommon for the market leaders to respond sluggishly to the new opportunity, while a newcomer comes to dominate the business. Technologies like this are termed “disruptive”.
Apart from embedded software, I have a number of interests, one of which is photography. This is certainly a market which has been shaken by a disruptive technology and is set to get another shake-up very soon …
For over 100 years, film photography was the dominant still [and moving] image recording technology. Then, about 15 years ago, digital began to get a grip and there has been no stopping it. I bought my first “serious” digital camera in 2003 and have not used a roll of film since. I doubt I ever will again. As my main interest is taking pictures, rather than the underlying technology, the immediacy of digital just makes my photography more fun.
I guess this was a disruptive technology, as its introduction changed all the rules. A typical film camera consisted of: optics + mechanics + a bit of electronics [maybe]. A digital camera, on the other hand, is: optics + a lot of electronics + a bit of mechanics [perhaps]. [You might argue that a DSLR has a lot of mechanics, but I regard these machines as over-priced throwbacks, which are unlikely to persist in their current form.]
It is this change in the mix of technologies that has caused the disruption. The result is that many camera companies have disappeared – most notably Kodak. Some have adapted and continued to flourish – notably Canon and Nikon – though they cling to conventional DSLR technology and have not enthusiastically embraced the “mirrorless” approach. Other companies which are strong in electronics – I have in mind Sony and Panasonic as examples – have risen to be market leaders.
But I think that it is all going to change again. Most keen [digital] photographers like to shoot in RAW, which means that their cameras simply store the data that comes off the sensor, instead of processing the image into a JPEG. The photographer then has much more latitude later to make decisions and fix problems on the computer. This level of image control is very appealing. I guess that RAW shooters are very much on a par with film workers in the darkroom.
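To illustrate why that latitude exists, here is a toy “development” step [my own sketch, not any real camera’s processing pipeline – the names and numbers are made up]. A JPEG has choices like white balance, exposure compensation and tone curve baked in at the moment of capture; with RAW, the same choices are applied [and can be re-applied, differently] later on the computer:

```python
import numpy as np

def develop(raw, wb_gain=(2.0, 1.0, 1.5), exposure=1.0, gamma=2.2):
    """Toy 'development' of raw sensor data.

    raw: float array [H, W, 3] of linear sensor values in [0, 1].
    wb_gain: per-channel white balance multipliers (made-up values).
    exposure: overall brightness adjustment.
    gamma: tone curve applied for display.
    """
    img = raw * exposure * np.array(wb_gain)        # exposure & white balance
    img = np.clip(img, 0.0, 1.0) ** (1.0 / gamma)   # tone curve for display
    return img
```

Change wb_gain or exposure and you get a different picture from the same capture – that is the latitude.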
At the fundamental level, things have not changed. The camera focuses an image onto something, which stores it for later reproduction. The photographer chooses what is in focus – selecting the focal plane [where the image is totally sharp] and the depth of field [the distance in front of and behind the focal plane that is also acceptably sharp]. This is the way that all cameras have worked since they were first invented. Until now …
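For the quantitatively minded, the depth of field follows from the standard hyperfocal-distance formulas; this little calculator is just the textbook arithmetic [the 0.03 mm circle of confusion is a common assumption for a full-frame sensor]:

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness around the focal plane.

    coc_mm is the 'circle of confusion' - the largest blur spot that is
    still judged acceptably sharp.
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")  # everything out to infinity is acceptably sharp
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# e.g. a 50 mm lens at f/2.8 focused at 3 m:
# depth_of_field(50, 2.8, 3000) -> roughly 2.7 m to 3.3 m
```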
Enter the light field camera. This type of device – which is on the market now – works on a different principle: it simply gathers and stores a digital representation of all the light rays that enter the lens. Later, this can be uploaded to a computer and the photographer can then locate the focal plane and decide on the depth of field. I see it as a kind of “RAW on steroids”.
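To give a feel for what “locating the focal plane later” means computationally, here is a minimal sketch of the classic “shift and add” refocusing idea [my own illustration of the published technique, not any vendor’s actual software; the array layout and parameter names are assumptions]:

```python
import numpy as np

def refocus(light_field, alpha, shift_scale=1.0):
    """Refocus a captured light field by 'shift and add'.

    light_field: 4-D array [u, v, y, x] of sub-aperture images - one
    ordinary 2-D image per (u, v) sample position across the lens
    aperture (in essence, what the camera's raw file amounts to).
    alpha: ratio of the new virtual focal plane to the captured one;
    alpha = 1 reproduces the original focus.
    shift_scale: pixels of shift per aperture step - depends on the
    camera geometry, so it is a free parameter here.
    """
    n_u, n_v, height, width = light_field.shape
    out = np.zeros((height, width), dtype=np.float64)
    for u in range(n_u):
        for v in range(n_v):
            # Each aperture sample sees the scene from a slightly
            # different position; shifting it in proportion to that
            # offset before summing moves the plane of best focus.
            dy = shift_scale * (u - n_u // 2) * (1.0 - 1.0 / alpha)
            dx = shift_scale * (v - n_v // 2) * (1.0 - 1.0 / alpha)
            out += np.roll(light_field[u, v],
                           (int(round(dy)), int(round(dx))), axis=(0, 1))
    return out / (n_u * n_v)
```

The point is that focus becomes just another parameter of a computation that happens after the shutter has been pressed.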
As I see it, this has the potential to change photography for ever. I want one of these cameras. Now. Currently, they only ship to the US, but I can easily get around that. The only thing stopping me is that the special image processing software, which lets you set the focus parameters and convert to a conventional format, is only available on the Mac and I do not have one [and have no other immediate incentive to get one]. Apparently a Windows version is coming. I will wait.
If you have tried a light field camera, please comment or email and share your impressions.
Comments
There was a company (Raytrix, http://www.raytrix.de/) at the GPU Technology Conference demonstrating a light field camera. They had a really good description of how it works that I hadn’t seen before: The sensor is covered with an array of little wide-angle lenses, so you get a raw photograph that looks a lot like an image through an insect eye with little copies of the whole image from slightly different perspectives. They then use a lot of software (running on a GPU, naturally!) to correlate all of the micro-images to make up a higher-resolution image with 3D depth information. (They also have a PDF up about it, here: http://www.raytrix.de/index.php/Technology.html )
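At its heart, the correlation step amounts to estimating the disparity between neighbouring micro-images – something like this toy version [certainly not Raytrix’s actual GPU pipeline, which does this densely and at scale]:

```python
import numpy as np

def micro_image_disparity(left, right, max_shift=8):
    """Crude correlation of two neighbouring micro-images: try a range
    of horizontal shifts and keep the one with the lowest
    sum-of-absolute-differences. The winning shift (disparity) encodes
    depth - nearer objects shift more between micro-images than
    distant ones.
    """
    best_shift, best_cost = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        cost = np.abs(left - np.roll(right, shift, axis=1)).mean()
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift
```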
They were claiming some significant advantages of this over the usual multi-camera methods. The big one is that there’s no need to synchronize multiple cameras, so you don’t have to worry about motion between the “left” and “right” images, or about registering the images. They were demonstrating getting 3D video out of it.
It’s another example of how adding compute power to a sensor system can give you a much more powerful sensor.
That’s interesting, Brooks. Clearly there are a number of companies working on different approaches to light field photography.