24/7 Space News
ROBO SPACE
Light processing improves robotic sensing, study finds
by Staff Writers
Aberdeen Proving Ground MD (SPX) Sep 15, 2020

Examples of high dynamic range luminance in views of a cave opening, where combinations of indoor and outdoor luminance can exceed a 10,000-to-1 maximum-to-minimum luminance ratio. The scene at the right is a blended image across multiple exposures, illustrating the human ability to see multiple targets (three uniforms and one car) across vast luminance differences in the same view.

A team of Army researchers uncovered how the human brain processes bright and contrasting light, which they say is a key to improving robotic sensing and enabling autonomous agents to team with humans.

To enable developments in autonomy, a top Army priority, machine sensing must be resilient across changing environments, researchers said.

"When we develop machine vision algorithms, real-world images are usually compressed to a narrower range, as a cellphone camera does, in a process called tone mapping," said Andre Harrison, a researcher at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory. "This can contribute to the brittleness of machine vision algorithms because they are based on artificial images that don't quite match the patterns we see in the real world."

By developing a new display system capable of a 100,000-to-1 luminance range, the team could probe the brain's computations under more realistic conditions and begin building that biological resilience into sensors, Harrison said.

Current vision algorithms are based on human and animal studies with computer monitors, which have a limited range in luminance of about 100-to-1, the ratio between the brightest and darkest pixels. In the real world, that variation could be a ratio of 100,000-to-1, a condition called high dynamic range, or HDR.
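For concreteness, dynamic range is simply the ratio of the brightest to the darkest luminance a system can represent, and it is commonly restated in photographic stops or decibels. A short sketch using the standard conversions (the specific values are illustrative):

```python
import math

def dynamic_range(l_max: float, l_min: float) -> dict:
    """Express a luminance range as a ratio, photographic stops, and dB."""
    ratio = l_max / l_min
    return {
        "ratio": ratio,
        "stops": math.log2(ratio),     # each stop doubles the luminance
        "dB": 20 * math.log10(ratio),  # convention commonly used for sensors
    }

print(dynamic_range(100, 1))      # typical monitor: ~6.6 stops, 40 dB
print(dynamic_range(100_000, 1))  # real-world HDR:  ~16.6 stops, 100 dB
```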

"Changes and significant variations in light can challenge Army systems - drones flying under a forest canopy could be confused by reflectance changes when wind blows through the leaves, or autonomous vehicles driving on rough terrain might not recognize potholes or other obstacles because the lighting conditions are slightly different from those on which their vision algorithms were trained," said Army researcher Dr. Chou Po Hung.

The research team sought to understand how the brain automatically takes the 100,000-to-1 input from the real world and compresses it to a narrower range, which enables humans to interpret shape. The team studied early visual processing under HDR, examining how simple features like HDR luminance and edges interact, as a way to uncover the underlying brain mechanisms.
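The study's specific mechanisms are in the paper itself; as a generic sketch of the kind of compressive nonlinearity early vision is often modeled with, the textbook Naka-Rushton response function maps an enormous input range into a bounded response (the function choice and parameters here are illustrative, not drawn from the study):

```python
import numpy as np

def naka_rushton(luminance: np.ndarray, semi_saturation: float = 1.0,
                 exponent: float = 0.9) -> np.ndarray:
    """Naka-Rushton response: R = L^n / (L^n + s^n).

    A standard model of how early visual neurons compress a huge
    luminance range into a bounded output; parameters are illustrative.
    """
    ln = np.power(luminance, exponent)
    return ln / (ln + semi_saturation ** exponent)

# A 100,000-to-1 input range (0.01 to 1,000 cd/m^2) maps into 0..1
luminance = np.logspace(-2, 3, 6)
print(naka_rushton(luminance))
```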

"The brain has more than 30 visual areas, and we still have only a rudimentary understanding of how these areas process the eye's image into an understanding of 3D shape," Hung said.

"Our results with HDR luminance studies, based on human behavior and scalp recordings, show just how little we truly know about how to bridge the gap from laboratory to real-world environments. But, these findings break us out of that box, showing that our previous assumptions from standard computer monitors have limited ability to generalize to the real world, and they reveal principles that can guide our modeling toward the correct mechanisms."

The Journal of Vision published the team's research findings, "Abrupt darkening under high dynamic range (HDR) luminance invokes facilitation for high contrast targets and grouping by luminance similarity."

Researchers said the discovery of how light and contrast edges interact in the brain's visual representation will help improve the effectiveness of algorithms for reconstructing the true 3D world under real-world luminance, by correcting for ambiguities that are unavoidable when estimating 3D shape from 2D information.

"Through millions of years of evolution, our brains have evolved effective shortcuts for reconstructing 3D from 2D information," Hung said. "It's a decades-old problem that continues to challenge machine vision scientists, even with the recent advances in AI."

In addition to vision for autonomy, the discovery could also help in developing other AI-enabled sensing capabilities, such as radar and remote speech understanding, that depend on sensing across wide dynamic ranges.

With their results, the researchers are working with partners in academia to develop computational models, specifically with spiking neurons that may have advantages for both HDR computation and for more power-efficient vision processing - both important considerations for low-powered drones.
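The article does not detail the partners' models, but a leaky integrate-and-fire unit is the simplest spiking neuron such models typically start from. A minimal sketch (all parameters are illustrative assumptions):

```python
import numpy as np

def lif_spikes(input_current: np.ndarray, dt: float = 1e-3,
               tau: float = 0.02, v_thresh: float = 1.0,
               v_reset: float = 0.0) -> np.ndarray:
    """Leaky integrate-and-fire neuron: spike when the membrane potential
    crosses threshold, then reset. All parameters are illustrative."""
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)  # leaky integration of the input
        if v >= v_thresh:
            spikes[t] = 1.0
            v = v_reset
    return spikes

# Stronger input drives a higher spike rate; the output is event-driven and
# sparse, one reason spiking hardware can be power efficient.
print(lif_spikes(np.full(1000, 1.2)).sum(), lif_spikes(np.full(1000, 3.0)).sum())
```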

"The issue of dynamic range is not just a sensing problem," Hung said.

"It may also be a more general problem in brain computation because individual neurons have tens of thousands of inputs. How do you build algorithms and architectures that can listen to the right inputs across different contexts? We hope that, by working on this problem at a sensory level, we can confirm that we are on the right track, so that we can have the right tools when we build more complex AIs."

Research paper


Related Links
US Army Research Laboratory
All about the robots on Earth and beyond!



