Your next phone camera might be able to sense depth

Image from www.shebytes.com

The camera in your smartphone may soon have a new trick: depth perception. Toshiba, Samsung and Silicon Valley startup Pelican Imaging are developing image sensors and software that would allow cameras to detect the distance of objects within the scenes they photograph.

The new camera technology could start showing up in smartphones as soon as this year. Initially, it will allow consumers to do things like refocus their pictures after they take them, much like they can with Lytro’s “light-field” camera today. But because of their small size and, in some cases, high resolution, the new cameras could also be used in a wide range of other applications. In the future, they could be employed in new, more precise versions of Microsoft’s Kinect, the gesture-sensing game controller; in cars as collision-preventing backup cameras; in identification systems that can precisely distinguish individual faces; and in a kind of three-dimensional scanner for 3-D printing.

“This type of technology is the next big thing for imaging,” said Chris Chute, an imaging analyst at research firm IDC.

I recently met with Pelican and got a demonstration of its technology. The company has designed a chip that, instead of containing a solitary image sensor, has an array of 16 or even 20 of them.

To get depth information, the chip essentially employs the principle of parallax, the same basic method astronomers use to measure the distance to nearby stars. Pelican’s software determines the distance to a particular point in an image by using the known distance between its multiple sensors and the sensors’ viewing angles to that point. It then takes the images recorded by each of the individual sensors in the array and combines them, yielding not only an 8-megapixel image but one with depth information for each point within it.

The company’s technology appears to be an improvement on what’s come before. Pelican’s system yields both a high-resolution image, which Lytro’s camera doesn’t do, and a high-resolution depth map, which the Kinect can’t match.
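To make that geometry concrete, here is a minimal sketch of depth-from-parallax triangulation for a single pair of sensors. The function name and all the numbers are illustrative assumptions, not Pelican’s actual sensor geometry or software:

```python
# A minimal sketch of depth-from-parallax triangulation for one pair of
# sensors. All names and numbers here are illustrative assumptions, not
# Pelican's actual sensor geometry or software.

def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulate the distance to a point seen by two sensors.

    focal_length_px -- lens focal length, expressed in pixels
    baseline_m      -- known spacing between the two sensors, in meters
    disparity_px    -- how far the point shifts between the two images
    """
    if disparity_px <= 0:
        raise ValueError("point must show measurable parallax")
    # Similar triangles: depth / baseline = focal_length / disparity
    return focal_length_px * baseline_m / disparity_px

# A point that shifts 4 pixels between two sensors 5 mm apart, imaged
# through a 1,000-pixel focal length, sits about 1.25 meters away:
print(depth_from_disparity(1000.0, 0.005, 4.0))  # -> 1.25
```

In an array like Pelican’s, this kind of triangulation would be repeated across many sensor pairs and every point in the scene, which is what turns single distances into a full depth map.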

Unlike Kinect and similar systems, Pelican’s technology doesn’t involve bouncing light or lasers off an object. That should make it perform much better outdoors and in bright light, conditions that tend to trouble the Kinect. And unlike other depth cameras, Pelican’s system is being designed from the ground up to be small enough to fit into a smartphone.
At our meeting, Pelican CEO Chris Pickett and Paul Gallagher, the company’s vice president of marketing, demonstrated how they could take a picture with the company’s camera module and change the focus of the image after the fact. They could focus on a person in the foreground or on something in the background, or render everything in focus.
While it will be cool to have that capability in a smartphone, you can already do the same things with a Lytro camera.
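For intuition about how after-the-fact refocusing can work once every pixel carries a depth value, here is a deliberately crude sketch: it blends in a blurred copy of the image in proportion to each pixel’s distance from the chosen focal plane. Real pipelines, Lytro’s and Pelican’s included, are far more sophisticated; every name and parameter below is my own illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def refocus(image: np.ndarray, depth: np.ndarray,
            focus_depth: float, max_blur: int = 9) -> np.ndarray:
    """Keep pixels near focus_depth sharp; soften everything else.

    image       -- H x W grayscale image, floats in [0, 1]
    depth       -- H x W per-pixel depth map, in meters
    focus_depth -- the depth (meters) the user tapped to keep sharp
    """
    blurred = uniform_filter(image, size=max_blur)
    # Weight each pixel by its normalized distance from the focal plane.
    dist = np.abs(depth - focus_depth)
    weight = np.clip(dist / (dist.max() + 1e-9), 0.0, 1.0)
    return (1.0 - weight) * image + weight * blurred

# Toy scene: the left half is near (1 m), the right half is far (4 m).
img = np.random.rand(64, 64)
depth = np.hstack([np.full((64, 32), 1.0), np.full((64, 32), 4.0)])
near_sharp = refocus(img, depth, focus_depth=1.0)  # right half goes soft
far_sharp = refocus(img, depth, focus_depth=4.0)   # left half goes soft
```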
But Pelican’s camera has other potential uses. For example, it could allow users to easily select an object in an image to adjust its exposure or copy it to another picture. It could allow users to interact with their phones using 3-D gestures. And it could be used in face-detection systems that determine whether someone is authorized to use a device.
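The object-selection idea follows almost directly from having a depth map: pixels that belong to the same object tend to sit at roughly the same distance. A hedged sketch, with an invented function name and tolerance rather than anything from Pelican’s software:

```python
import numpy as np

def select_by_depth(depth: np.ndarray, tap_row: int, tap_col: int,
                    tolerance_m: float = 0.2) -> np.ndarray:
    """Return a boolean mask of pixels near the tapped point's depth."""
    target = depth[tap_row, tap_col]
    return np.abs(depth - target) <= tolerance_m

# Same toy scene as above: tapping the near half selects only that half.
depth = np.hstack([np.full((64, 32), 1.0), np.full((64, 32), 4.0)])
mask = select_by_depth(depth, tap_row=10, tap_col=5)
# The mask can then drive a local exposure tweak or a cut-and-paste:
# image[mask] *= 1.3   # brighten just the selected foreground object
```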
Pelican says it’s already working with some existing smartphone camera-system makers and hopes to have its system in a smartphone by early next year.
But cameras like Pelican’s could also have some important uses outside of phones. Many cars now have backup cameras and some have proximity sensors that warn drivers when they are about to hit something. A single Pelican camera could serve both functions.
They could also be used inside cars. Gallagher noted that cars include sensors that determine when an air bag should be deployed. Currently, those sensors detect only weight. They can’t detect whether a rider’s feet are propped up on the dash or whether the person is leaning forward to look in the glove compartment, both of which could be dangerous positions if an air bag were to deploy. But a car’s safety system could use information from a depth camera to determine riders’ positions and whether it would be safe to deploy an air bag.
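To illustrate that reasoning, here is a toy version of the decision, assuming a dashboard-mounted depth camera facing the seat. The threshold and logic are invented placeholders, not any automaker’s or Pelican’s actual algorithm:

```python
import numpy as np

# Invented placeholder: minimum occupant-to-dashboard clearance, in meters.
MIN_SAFE_DISTANCE_M = 0.30

def deployment_safe(cabin_depth: np.ndarray) -> bool:
    """True if the nearest surface in view keeps a safe clearance."""
    nearest_surface = float(cabin_depth.min())
    return nearest_surface >= MIN_SAFE_DISTANCE_M

# Occupant sitting back (~0.6 m) vs. leaning toward the glove box (~0.15 m):
print(deployment_safe(np.full((48, 64), 0.60)))  # True  -> deploy normally
print(deployment_safe(np.full((48, 64), 0.15)))  # False -> suppress or soften
```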
I don’t know whether Pelican’s technology – or that of its rivals – will take off in the market. But it’s exciting to picture the possibilities.
A new kind of camera for your phone
Local startup Pelican Imaging has developed a camera system for smartphones that senses depth as well as light.
-What it does: When taking a photograph, it records the distance from the camera to each point in the picture.
-How it works: The image chip includes an array of 16 or 20 low-resolution image sensors. Pelican’s software combines the low-resolution images to form a high-resolution composite and gleans depth information through the principle of parallax, the mathematical method astronomers use to find the distance to nearby stars.
-What you can do with it: Smartphone users could refocus images after they shoot them or easily select objects within a photo and copy them to another. The system could potentially be used in face-detection and car-safety systems.
-When it will be available: Pelican hopes to have its system in smartphones early next year. Similar sensors from Toshiba or Samsung could be in smartphones sooner.
(Troy Wolverton/Phys.org)