24/7 Space News
TECH SPACE
Rice, Baylor team sets new mark for 'deep learning'
by Staff Writers
Houston TX (SPX) Dec 20, 2016


From left, Richard Baraniuk, Tan Nguyen and Ankit Patel. Image courtesy Jeff Fitlow/Rice University.

Neuroscience and artificial intelligence experts from Rice University and Baylor College of Medicine have taken inspiration from the human brain in creating a new "deep learning" method that enables computers to learn about the visual world largely on their own, much as human babies do.

In tests, the group's "deep rendering mixture model" largely taught itself how to distinguish handwritten digits using a standard dataset of 10,000 digits written by federal employees and high school students.

In results presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, the researchers described how they trained their algorithm by giving it just 10 correct examples of each handwritten digit between zero and nine and then presenting it with several thousand more examples that it used to further teach itself. In tests, the algorithm was more accurate at correctly distinguishing handwritten digits than almost all previous algorithms that were trained with thousands of correct examples of each digit.
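The training setup described above can be sketched in a few lines. This is purely illustrative and not the team's deep rendering mixture model: it only shows how a dataset would be split into a tiny labeled pool (10 correct examples per digit) and a large unlabeled pool, which is the starting point for any semi-supervised method. The function name and the toy label array are invented for the example.

```python
import numpy as np

def semisupervised_split(labels, per_class=10, seed=0):
    """Pick `per_class` labeled examples per digit; the rest stay unlabeled.

    Mirrors the article's setup: 10 correct examples of each digit zero
    through nine, with thousands more left for the model to learn from
    on its own.
    """
    rng = np.random.default_rng(seed)
    labeled_idx = []
    for digit in range(10):
        candidates = np.flatnonzero(labels == digit)
        labeled_idx.extend(rng.choice(candidates, size=per_class, replace=False))
    labeled_idx = np.array(sorted(labeled_idx))
    unlabeled_idx = np.setdiff1d(np.arange(len(labels)), labeled_idx)
    return labeled_idx, unlabeled_idx

# Toy stand-in for MNIST labels (the real training set has 60,000 of them).
labels = np.repeat(np.arange(10), 100)   # 100 examples of each digit
lab, unlab = semisupervised_split(labels)
print(len(lab), len(unlab))              # 100 labeled, 900 unlabeled
```

A supervised baseline would train on thousands of labeled indices; here the algorithm sees labels for only the `lab` pool and must exploit the structure of the `unlab` pool on its own.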

"In deep-learning parlance, our system uses a method known as semisupervised learning," said lead researcher Ankit Patel, an assistant professor with joint appointments in neuroscience at Baylor and electrical and computer engineering at Rice.

"The most successful efforts in this area have used a different technique called supervised learning, where the machine is trained with thousands of examples: This is a one. This is a two.

"Humans don't learn that way," Patel said. "When babies learn to see during their first year, they get very little input about what things are. Parents may label a few things: 'Bottle. Chair. Momma.' But the baby can't even understand spoken words at that point. It's learning mostly unsupervised via some interaction with the world."

Patel said he and graduate student Tan Nguyen, a co-author on the new study, set out to design a semisupervised learning system for visual data that didn't require much "hand-holding" in the form of training examples.

For instance, neural networks that use supervised learning would typically be given hundreds or even thousands of training examples of handwritten digits before they would be tested on the database of 10,000 handwritten digits in the Modified National Institute of Standards and Technology (MNIST) database.

The semisupervised Rice-Baylor algorithm is a "convolutional neural network," a piece of software made up of layers of artificial neurons whose design was inspired by biological neurons.

These artificial neurons, or processing units, are organized in layers, and the first layer scans an image and does simple tasks like searching for edges and color changes. The second layer examines the output from the first layer and searches for more complex patterns. Mathematically, this nested method of looking for patterns within patterns within patterns is referred to as a nonlinear process.
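A first-layer unit of the kind described here can be sketched with plain NumPy. This is a toy, not the Rice-Baylor network: it shows one hand-set kernel responding to a vertical edge, followed by the layer's nonlinearity, where in a real convolutional net the kernel weights would be learned.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

relu = lambda x: np.maximum(x, 0)   # the layer's nonlinearity

# A tiny image: dark on the left, bright on the right (a vertical edge).
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# A first-layer unit tuned to vertical edges.
vertical_edge = np.array([[-1.0, 1.0],
                          [-1.0, 1.0]])

response = relu(conv2d(img, vertical_edge))
print(response)   # strong response only along the dark-to-bright boundary
```

A second layer would apply the same convolve-and-rectify step to `response` rather than to the raw pixels, which is the nested "patterns within patterns" the article describes.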

"It's essentially a very simple visual cortex," Patel said of the convolutional neural net. "You give it an image, and each layer processes the image a little bit more and understands it in a deeper way, and by the last layer, you've got a really deep and abstract understanding of the image. Every self-driving car right now has convolutional neural nets in it because they are currently the best for vision."

Like human brains, neural networks start out as blank slates and become fully formed as they interact with the world. For example, each processing unit in a convolutional net starts the same and becomes specialized over time as it is exposed to visual stimuli.

"Edges are very important," Nguyen said. "Many of the lower layer neurons tend to become edge detectors. They're looking for patterns that are both very common and very important for visual interpretation, and each one trains itself to look for a specific pattern, like a 45-degree edge or a 30-degree red-to-blue transition.

"When they detect their particular pattern, they become excited and pass that on to the next layer up, which looks for patterns in their patterns, and so on," he said. "The number of times you do a nonlinear transformation is essentially the depth of the network, and depth governs power.

"The deeper a network is, the more stuff it's able to disentangle. At the deeper layers, units are looking for very abstract things like eyeballs or vertical grating patterns or a school bus."
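The "depth governs power" point can be illustrated with a classic toy example, which is not from the paper: composing a single ReLU "tent" function with itself doubles the number of linear pieces it carves out, so each extra layer of the same size multiplies the complexity the network can represent.

```python
import numpy as np

relu = lambda x: np.maximum(x, 0)

def hat(x):
    """Depth-1 ReLU unit: a single 'tent' that rises to 1 at x=0.5."""
    return relu(2 * x) - 2 * relu(2 * x - 1)

x = np.linspace(0, 1, 5)
once  = hat(x)        # depth 1: one tent, [0, 0.5, 1, 0.5, 0]
twice = hat(hat(x))   # depth 2: composing doubles the oscillations
print(once)
print(twice)
```

Here one extra nonlinear transformation turns a single bump into two; a network of the same width but twice the depth can express functions the shallow one cannot.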

Nguyen began working with Patel in January as the latter began his tenure-track academic career at Rice and Baylor. Patel had already spent more than a decade studying and applying machine learning in jobs ranging from high-volume commodities trading to strategic missile defense, and he'd just wrapped up a four-year postdoctoral stint in the lab of Rice's Richard Baraniuk, another co-author on the new study.

In late 2015, Baraniuk, Patel and Nguyen published the first theoretical framework that could both derive the exact structure of convolutional neural networks and provide principled solutions to alleviate some of their limitations.

Baraniuk said a solid theoretical understanding is vital for designing convolutional nets that go beyond today's state of the art.

"Understanding video images is a great example," Baraniuk said. "If I am looking at a video, frame by frame by frame, and I want to understand all the objects and how they're moving and so on, that is a huge challenge.

"Imagine how long it would take to label every object in every frame of a video. No one has time for that. And in order for a machine to understand what it's seeing in a video, it has to understand what objects are, the concept of three-dimensional space and a whole bunch of other really complicated stuff.

"We humans learn those things on our own and take them for granted, but they are totally missing in today's artificial neural networks."

Patel said the theory of artificial neural networks, which was refined in the NIPS paper, could ultimately help neuroscientists better understand the workings of the human brain.

"There seem to be some similarities about how the visual cortex represents the world and how convolutional nets represent the world, but they also differ greatly," Patel said. "What the brain is doing may be related, but it's still very different. And the key thing we know about the brain is that it mostly learns unsupervised.

"What I and my neuroscientist colleagues are trying to figure out is, What is the semisupervised learning algorithm that's being implemented by the neural circuits in the visual cortex? and How is that related to our theory of deep learning?" he said. "Can we use our theory to help elucidate what the brain is doing? Because the way the brain is doing it is far superior to any neural network that we've designed."




Related Links
Rice University
Space Technology News - Applications and Research






