ROBO SPACE
New AI sees like a human, filling in the blanks
by Staff Writers
Austin TX (SPX) May 23, 2019

Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent how to do something that usually only humans can do - take a few quick glimpses around and infer its whole environment - a skill necessary for developing effective search-and-rescue robots that could one day make dangerous missions safer and more successful.

The team, led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan and former Ph.D. candidate Dinesh Jayaraman (now at the University of California, Berkeley) published their results in the journal Science Robotics.

Most AI agents - computer systems that could endow robots or other machines with intelligence - are trained for very specific tasks, such as recognizing an object or estimating its volume, in an environment they have experienced before, like a factory. The agent developed by Grauman and Ramakrishnan, by contrast, is general purpose, gathering visual information that can then be used for a wide range of tasks.

"We want an agent that's generally equipped to enter environments and be ready for new perception tasks as they arise," Grauman said. "It behaves in a way that's versatile and able to succeed at different tasks because it has learned useful patterns about the visual world."

The scientists used deep learning, a type of machine learning inspired by the brain's neural networks, to train their agent on thousands of 360-degree images of different environments.

Now, when presented with a scene it has never seen before, the agent uses its experience to choose a few glimpses - like a tourist standing in the middle of a cathedral taking a few snapshots in different directions - that together add up to less than 20 percent of the full scene.

What makes this system so effective is that it's not just taking pictures in random directions but, after each glimpse, choosing the next shot that it predicts will add the most new information about the whole scene.
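The glimpse-selection strategy described above can be sketched as a greedy loop: at each step, score every candidate viewing direction by how much new information it is predicted to add, and take the best one. The sketch below is a toy illustration, not the published model - in the real system a learned network predicts information gain from image content, whereas here a simple coverage count stands in for that prediction, and all function names and parameters are illustrative assumptions.

```python
import numpy as np

def expected_information_gain(observed_mask, candidate, width=8):
    """Toy stand-in for a learned predictor: count how many
    not-yet-observed viewing directions a candidate glimpse
    (a window of `width` directions) would newly cover."""
    new = 0
    for d in range(candidate, candidate + width):
        if not observed_mask[d % len(observed_mask)]:
            new += 1
    return new

def select_glimpses(num_directions=36, budget=5, width=8):
    """Greedily pick, up to a fixed glimpse budget, the view
    predicted to add the most new information about the scene."""
    observed = np.zeros(num_directions, dtype=bool)
    chosen = []
    for _ in range(budget):
        gains = [expected_information_gain(observed, c, width)
                 for c in range(num_directions)]
        best = int(np.argmax(gains))
        chosen.append(best)
        for d in range(best, best + width):
            observed[d % num_directions] = True
    return chosen, float(observed.mean())

views, coverage = select_glimpses()
```

With 36 candidate directions, an 8-direction field of view and a budget of 5 glimpses, the greedy policy spreads its shots around the circle rather than clustering them - the same intuition as the tourist choosing snapshot directions that overlap as little as possible.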

It is much like visiting a grocery store you had never been to before: if you saw apples, you would expect to find oranges nearby, but to locate the milk, you might glance the other way. Based on its glimpses, the agent infers what it would have seen in all the other directions, reconstructing a full 360-degree image of its surroundings.
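The reconstruction step can be illustrated with a deliberately simple stand-in: treat the panorama as a ring of per-direction values, observe only a few of them, and fill each unseen direction from its nearest observed neighbor. This is an assumed toy baseline for illustration only - the actual system uses a learned deep network to predict unseen views from image content.

```python
import numpy as np

def complete_panorama(values, observed):
    """Fill each unobserved direction with the value of the nearest
    observed direction (a crude stand-in for a learned completion
    model that predicts unseen views)."""
    n = len(values)
    obs_idx = np.where(observed)[0]
    out = np.array(values, dtype=float)
    for i in range(n):
        if not observed[i]:
            # circular distance from direction i to each observed one
            d = np.minimum(np.abs(obs_idx - i), n - np.abs(obs_idx - i))
            out[i] = values[obs_idx[np.argmin(d)]]
    return out

# a smooth "scene" over 12 directions, of which only 3 are observed
scene = np.sin(np.linspace(0, 2 * np.pi, 12, endpoint=False))
mask = np.zeros(12, dtype=bool)
mask[[0, 4, 8]] = True          # 25 percent of the panorama observed
guess = complete_panorama(scene, mask)
err = float(np.abs(guess - scene).mean())
```

Even this naive fill recovers the broad shape of the scene from a quarter of the directions; a learned model does far better by exploiting regularities across many training panoramas.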

"Just as you bring in prior information about the regularities that exist in previously experienced environments - like all the grocery stores you have ever been to - this agent searches in a nonexhaustive way," Grauman said. "It learns to make intelligent guesses about where to gather visual information to succeed in perception tasks."

One of the main challenges the scientists set for themselves was to design an agent that can work under tight time constraints. This would be critical in a search-and-rescue application. For example, in a burning building a robot would be called upon to quickly locate people, flames and hazardous materials and relay that information to firefighters.

For now, the new agent operates like a person standing in one spot, with the ability to point a camera in any direction but not able to move to a new position. Or, equivalently, the agent could gaze upon an object it is holding and decide how to turn the object to inspect another side of it. Next, the researchers are developing the system further to work in a fully mobile robot.

Using supercomputers at UT Austin's Texas Advanced Computing Center and Department of Computer Science, the researchers took about a day to train their agent with an artificial intelligence approach called reinforcement learning. Under Ramakrishnan's leadership, the team developed a method for speeding up the training: building a second agent, called a sidekick, to assist the primary agent.

"Using extra information that's present purely during training helps the [primary] agent learn faster," Ramakrishnan said.
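One common way to use training-only information, and a plausible reading of the sidekick idea, is reward shaping: during training, a helper that sees privileged information (for example, the full panorama) supplies a dense score that is blended into the sparse task reward, and at deployment the helper is simply dropped. The function below is a minimal sketch of that pattern under those assumptions, not the paper's actual sidekick formulation.

```python
def shaped_reward(task_reward, sidekick_score, weight=0.5, training=True):
    """Blend the sparse task reward with a dense score from a
    'sidekick' that sees privileged information available only
    during training. At deployment (training=False) the sidekick
    is removed and the agent receives the task reward alone."""
    if not training:
        return task_reward
    return task_reward + weight * sidekick_score
```

Because the shaped signal gives feedback at every step instead of only at task completion, the primary agent can learn faster, yet nothing at test time depends on the sidekick being present.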


Related Links
University of Texas at Austin





The content herein, unless otherwise known to be public domain, is Copyright 1995-2024 - Space Media Network.