
Engineers unlock remarkable 3-D vision from ordinary digital camera technology
by Staff Writers
Washington DC (SPX) Sep 18, 2015


Modern digital cameras are equipped with an impressive array of functions - from autofocus and image stabilization to panoramas and high-definition video. Recently, a team of engineers from Duke University unlocked a previously unrecognized 3D imaging capability of modern cameras by simply repurposing their existing components.

This new capability was successfully demonstrated in a proof-of-concept laboratory experiment using a small deformable mirror - a reflective surface that can direct and focus light. The research demonstrates how the equivalent technology in modern digital cameras, the image stabilization and focus modules, could be harnessed to achieve the same results without additional hardware.

The purpose of the experiment was to extract depth-of-field information from a "single shot" image - rather than traditional 3D imaging techniques that require multiple images - without suffering any trade-offs in image quality. When integrated into commercial cameras and other optical technologies, this visualization technique could improve core functions, like image stabilization, and increase the speed of autofocus, which would enhance the quality of photographs.

"Real scenes are in three dimensions and they're normally captured by taking multiple images focused at various distances," said Patrick Llull of the Duke Imaging and Spectroscopy Program (DISP) at Duke University. "A variety of single-shot approaches to improve the speed and quality of 3D image capture have been proposed over the past decades. Each approach, however, suffers from permanent degradations in 2D image quality and/or hardware complexity."

The research team, led by David Brady, a professor at Duke, was able to overcome these hurdles, developing an adaptive system that can accurately extract 3D data while maintaining the ability to capture a full-resolution 2D image, without a dramatic system change such as switching out a lens.

Brady and his team present their findings in Optica, the high-impact, open-access journal of The Optical Society.

A New Path to the Third Dimension
Humans are able to see in three dimensions by a process known as parallax, in which the information received by each eye is slightly offset from the other. The brain is able to interpret and process these slightly divergent signals, recognizing how the apparent displacement as seen by each eye relates to different distances. This allows humans to perceive depth.

Traditional 3D imaging relies on virtually the same principle, recording images and scenes with two slightly offset lenses. When projected or processed, the original 3D appearance is restored. This recording process, however, requires twice the data of a 2D image, making 3D photography and video more bulky, expensive, and data intensive.
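In the two-lens model described above, depth follows directly from how far a feature shifts between the offset views. A minimal sketch of that standard relation (illustrative background only, not from the paper; the function name and numbers are hypothetical):

```python
def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic two-camera relation: depth Z = f * b / d, where f is the
    focal length in pixels, b the separation (baseline) between the two
    lenses in meters, and d the apparent shift (disparity) of a feature
    between the two views, in pixels."""
    return focal_length_px * baseline_m / disparity_px

# A feature that shifts 10 px between lenses 0.1 m apart (f = 1000 px)
# lies 1000 * 0.1 / 10 = 10 m away.
print(stereo_depth(1000, 0.1, 10))
```

The single-shot approach in this article avoids the second lens (and the doubled data) entirely by encoding depth into one measurement instead.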

"We want to achieve the same results with the equipment people already have in their handheld cameras with no major hardware modifications," noted Llull.

Stabilization to Recover Information at Depth
Modern digital cameras, especially those with video capabilities, are frequently equipped with modules that take the jitter out of recordings. They do this by measuring the inertia or motion of the camera and compensating by rapidly moving the lens - making multiple adjustments per second - in the module. This same hardware can also change the image capture process, recording additional information about the scene. With proper software and processing, this additional information can unlock the otherwise hidden third dimension.

The first step, according to the researchers, is to enable the camera to record 3D information. This is achieved by programming the camera to perform three functions simultaneously: sweeping through the focus range with the sensor, collecting light over a set period of time in a process called integration, and activating the stabilization module.

As the optical stabilization is engaged, it wobbles the lens to move the image relative to a fixed point. This, in conjunction with a focal sweep of the sensor, integrates that information into a single measurement in a way that preserves image details while granting each focus position a different optical response. The images that would have otherwise been acquired at various focal settings are directly encoded into this measurement based on where they reside in the depth of field.
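The capture step described above can be sketched numerically. The toy below is an assumption-laden illustration rather than the paper's actual optical pipeline: it sums a focal sweep into one frame while translating the image by a different (hypothetical) stabilization-lens shift at each focus position, so each depth contributes a distinct response to the single measurement.

```python
import numpy as np

def single_shot_measurement(focal_stack, shifts):
    """Integrate a focal sweep into one encoded frame.

    focal_stack : (D, H, W) array - the scene as it would appear at
                  each of D focus settings (a stand-in for the sweep).
    shifts      : length-D list of (dy, dx) integer lens offsets, one
                  per focus position (the stabilization wobble).
    """
    D, H, W = focal_stack.shape
    measurement = np.zeros((H, W))
    for d in range(D):
        dy, dx = shifts[d]
        # np.roll stands in for the lens translating the image
        measurement += np.roll(focal_stack[d], (dy, dx), axis=(0, 1))
    return measurement / D  # average over the exposure (integration)
```

Because each focus setting is translated differently before being summed, information about where objects sit in the depth of field is baked into the one exposure rather than lost.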

For the paper, the researchers used a comparatively long exposure time to compensate for the set-up of the equipment. To emulate the workings of a camera, a beam splitter was necessary to control the deformable mirror; this extra step sacrifices about 75 percent of the light received. "When translated to a fully integrated camera without a beamsplitter, this light loss will not be an issue and much faster exposure times will be possible," noted Llull.

The researchers then process a single exposure taken with this camera and obtain a data-rich product known as a data cube, which is essentially a computer file that includes both the all-focused 2D image as well as an extra element known as a depth map. This depth map data, in effect, describes the focus position of each pixel of the image. Since this information is already encoded into the single measurement, it's possible to construct a depth map for the entire scene.
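As a rough illustration of what such a data cube contains, the toy below (not the paper's reconstruction algorithm, which recovers everything from the one encoded measurement) builds a depth map by taking, at each pixel, the focus setting with the highest local sharpness, then assembles the all-focused image from those per-pixel choices:

```python
import numpy as np

def build_data_cube(focal_stack):
    """Return a toy 'data cube': an all-focused 2D image plus a depth
    map giving the best focus setting at each pixel.

    focal_stack : (D, H, W) array of the scene at D focus settings.
    """
    # Sharpness proxy: squared gradient magnitude at each setting
    gy, gx = np.gradient(focal_stack, axis=(1, 2))
    sharpness = gy**2 + gx**2
    depth_map = np.argmax(sharpness, axis=0)              # (H, W)
    all_in_focus = np.take_along_axis(
        focal_stack, depth_map[None], axis=0)[0]          # (H, W)
    return {"image": all_in_focus, "depth_map": depth_map}
```

The dictionary mirrors the two products the article names: the full 2D image and the per-pixel depth map that accompanies it.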

The final step is to process the image and depth map with a commercial 3D graphics engine, similar to those that render 3D scenes in video games and computer-generated imagery used in Hollywood movies. The resulting image can be used to determine the optimal focal setting for subsequent full-resolution 2D shots, as an autofocus algorithm does, but from only one image. Additionally, synthetic refocusing may be used on the resulting 3D imagery to display the scene as viewed at different depths by a human.
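Synthetic refocusing from the image-plus-depth-map pair can be sketched as follows. This crude version is an assumption, not the paper's graphics-engine renderer: it keeps pixels at the chosen depth sharp and box-blurs each other depth layer in proportion to its distance from the focal plane.

```python
import numpy as np

def synthetic_refocus(image, depth_map, focus_depth, blur_per_step=1):
    """Re-render `image` as if focused at `focus_depth`: pixels whose
    depth_map value equals focus_depth stay sharp, the rest get
    progressively blurrier with depth offset."""
    out = np.empty_like(image, dtype=float)
    for d in range(int(depth_map.max()) + 1):
        # blur passes grow with distance from the focal plane
        r = abs(d - focus_depth) * blur_per_step
        blurred = image.astype(float)
        for _ in range(r):  # crude repeated 5-point box blur
            p = np.pad(blurred, 1, mode="edge")
            blurred = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
                       + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
        out[depth_map == d] = blurred[depth_map == d]
    return out
```

A real renderer would apply a physically modeled lens blur per layer, but the principle is the same: the depth map decides how each pixel is treated when the scene is re-displayed at a different focus.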

Though only performed in laboratory settings with surrogate technologies, the researchers believe the techniques they employed could be applied to basic consumer products. The result would be a more efficient autofocusing process, as well as a third dimension added to traditional photography.

"We have found a new path to extract 3D information from an otherwise 2D process. The benefits of this are dual functionality of tomographic imaging and full resolution 2D capture with little modification to existing systems," concluded Llull.

Paper: "Image translation for single-shot focal tomography," Patrick Llull et al., Optica, 2, 9, 822 (2015)


