
New algorithm aids in both robot navigation and scene understanding
by Larry Hardesty for MIT News
Boston MA (SPX) Apr 07, 2014


Suppose you're trying to navigate an unfamiliar section of a big city, and you're using a particular cluster of skyscrapers as a reference point. Traffic and one-way streets force you to take some odd turns, and for a while you lose sight of your landmarks. When they reappear, in order to use them for navigation, you have to be able to identify them as the same buildings you were tracking before, and to determine your orientation relative to them.

That type of re-identification is second nature for humans, but it's difficult for computers. At the IEEE Conference on Computer Vision and Pattern Recognition in June, MIT researchers will present a new algorithm that could make it much easier, by identifying the major orientations in 3-D scenes. The same algorithm could also simplify the problem of scene understanding, one of the central challenges in computer vision research.

The algorithm is primarily intended to aid robots navigating unfamiliar buildings, not motorists navigating unfamiliar cities, but the principle is the same. It works by identifying the dominant orientations in a given scene, which it represents as sets of axes - called "Manhattan frames" - embedded in a sphere.

As a robot moved, it would, in effect, observe the sphere rotating in the opposite direction, and could gauge its orientation relative to the axes. Whenever it wanted to reorient itself, it would know which of its landmarks' faces should be toward it, making them much easier to identify.
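The idea of reading orientation off a tracked frame can be sketched in a few lines. This is a minimal illustration, not the researchers' implementation: it assumes a single world-fixed axis and a pure yaw rotation, and recovers the robot's heading from where that axis appears to point in the robot's own frame.

```python
import math

def rotz(theta):
    """Rotation about the vertical axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

# One axis of a world-fixed Manhattan frame.
x_axis = [1.0, 0.0, 0.0]

# Suppose the robot yaws 30 degrees; in its own camera frame, the
# world axis appears rotated by the inverse (transpose) rotation.
yaw_true = math.radians(30)
observed = matvec(transpose(rotz(yaw_true)), x_axis)

# Recover the robot's yaw from where the tracked axis now points.
yaw_est = math.degrees(math.atan2(-observed[1], observed[0]))
```

In a real system the frame would be re-estimated from sensor data at each step, but the principle is the same: the frame's apparent rotation is the robot's rotation, inverted.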

As it turns out, the same algorithm also drastically simplifies the problem of plane segmentation, or deciding which elements of a visual scene lie in which planes, at what depth. Plane segmentation allows a computer to build boxy 3-D models of the objects in the scene - which it could, in turn, match to stored 3-D models of known objects.

Julian Straub, a graduate student in electrical engineering and computer science at MIT, is lead author on the paper. He's joined by his advisors, John Fisher, a senior research scientist in MIT's Computer Science and Artificial Intelligence Laboratory, and John Leonard, a professor of mechanical and ocean engineering, as well as Oren Freifeld and Guy Rosman, both postdocs in Fisher's Sensing, Learning, and Inference Group.

The researchers' new algorithm works on 3-D data of the type captured by the Microsoft Kinect or laser rangefinders. First, using established procedures, the algorithm estimates the orientations of a large number of individual points in the scene. Those orientations are then represented as points on the surface of a sphere, with each point defining a unique angle relative to the sphere's center.
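The mapping from scene geometry to points on a sphere can be illustrated with a toy example. This is a sketch under simplifying assumptions (an exact, noise-free planar patch), not the established normal-estimation procedures the paper relies on: the unit surface normal computed from three depth points becomes one point on the sphere of orientations.

```python
import math

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Three depth points sampled from a (hypothetical) horizontal floor patch.
p0, p1, p2 = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]

# The unit surface normal is one point on the sphere of orientations.
normal = normalize(cross(sub(p1, p0), sub(p2, p0)))
```

Repeating this over many small patches of a Kinect depth image yields the cloud of points on the sphere that the algorithm then clusters.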

Since the initial orientation estimate is rough, the points on the sphere form loose clusters that can be difficult to distinguish. Using statistical information about the uncertainty of the initial orientation estimates, the algorithm then tries to fit Manhattan frames to the points on the sphere.

The basic idea is similar to that of regression analysis - finding lines that best approximate scatters of points. But it's complicated by the geometry of the sphere.

"Most of classical statistics is based on linearity and Euclidean distances, so you can take two points, you can sum them, divide by two, and this will give you the average," Freifeld says. "But once you are working in spaces that are nonlinear, when you do this averaging, you can fall outside the space."

Consider, for instance, the example of measuring geographical distances. "Say that you're in Tokyo and I'm in New York," Freifeld says. "We don't want our average to be in the middle of the Earth; we want it to be on the surface." One of the keys to the new algorithm is that it incorporates this geometry into its statistical reasoning about the scene.

In principle, it would be possible to approximate the point data very accurately by using hundreds of different Manhattan frames, but that would yield a model that's much too complex to be useful. So another aspect of the algorithm is a cost function that weighs accuracy of approximation against number of frames.

The algorithm starts with a fixed number of frames - somewhere between three and 10, depending on the expected complexity of the scene - and then tries to pare that number down without unduly increasing the overall cost.
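The accuracy-versus-complexity trade-off can be sketched as a cost that sums the data misfit and a per-frame penalty. The residual values and the penalty weight below are invented for illustration; the paper's actual cost function is more sophisticated, but the shape of the trade-off is the same:

```python
def cost(residual, num_frames, lam=0.1):
    """Misfit plus a per-frame complexity penalty (lam is a made-up weight)."""
    return residual + lam * num_frames

# Hypothetical refit residuals for models with 2 to 5 Manhattan frames:
# more frames approximate the data better, but each frame costs lam.
residuals = {2: 0.50, 3: 0.08, 4: 0.03, 5: 0.02}

# Pick the frame count with the lowest total cost.
best_k = min(residuals, key=lambda k: cost(residuals[k], k))
```

Here the big drop in residual from two frames to three pays for the extra frame, while the small gains from a fourth and fifth frame do not.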

The resulting set of Manhattan frames may not represent subtle distinctions between objects that are slightly misaligned with each other, but those distinctions aren't terribly useful to a navigation system. "Think about how you navigate a room," Fisher says.

"You're not building a precise model of your environment. You're sort of capturing loose statistics that allow you to complete your task in a way that you don't stumble over a chair or something like that."

Once a set of Manhattan frames has been determined, the problem of plane segmentation becomes much easier. Objects that don't take up much of the visual field - because they're small, distant, or occluded - make trouble for existing plane segmentation algorithms, because they yield so little depth information that their orientations can't be reliably inferred. But if the problem is one of selecting among just a handful of possible orientations, rather than a potential infinitude, it becomes much more tractable.
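The "handful of orientations" idea reduces, at its simplest, to snapping a noisy normal to the nearest axis of a recovered frame. This is a toy sketch with made-up numbers, not the paper's segmentation procedure:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Axes of one recovered Manhattan frame.
axes = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# A noisy normal estimated from a small, distant surface patch.
noisy = normalize([0.05, -0.10, 0.99])

# Snap it to the closest of the few candidate orientations;
# abs() treats an axis and its opposite as the same orientation.
snapped = max(axes, key=lambda ax: abs(dot(ax, noisy)))
```

Even a very uncertain normal estimate usually lies closest to the correct axis, which is why restricting the choice to a few frame axes makes segmenting small or occluded surfaces tractable.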


Related Links
Massachusetts Institute of Technology

