Scientists teach machines to learn like humans
by Staff Writers
New York NY (SPX) Dec 18, 2015

A team of scientists has developed an algorithm that captures our learning abilities, enabling computers to recognize and draw simple visual concepts that are mostly indistinguishable from those created by humans. The work, which appears in the latest issue of the journal Science, marks a significant advance in the field - one that dramatically shortens the time it takes computers to 'learn' new concepts and broadens their application to more creative tasks.

"Our results show that by reverse engineering how people think about a problem, we can develop better algorithms," explains Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the paper's lead author. "Moreover, this work points to promising methods to narrow the gap for other machine learning tasks."

The paper's other authors were Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto, and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines.

When humans are exposed to a new concept - such as a new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet - they often need only a few examples to understand its make-up and recognize new instances.

While machines can now replicate some pattern-recognition tasks previously done only by humans - ATMs reading the numbers written on a check, for instance - machines typically need to be given hundreds or thousands of examples to perform with similar accuracy.

"It has been very difficult to build machines that require as little data as humans when learning a new concept," observes Salakhutdinov. "Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science."

Salakhutdinov helped launch recent interest in learning with 'deep neural networks' in a paper published in Science almost 10 years ago with his doctoral advisor, Geoffrey Hinton. Their algorithm learned the structure of 10 handwritten character concepts - the digits 0-9 - from 6,000 examples each, a total of 60,000 training examples.

In the work appearing in Science this week, the researchers sought to shorten the learning process and make it more akin to the way humans acquire and apply new knowledge - i.e., learning from a small number of examples and performing a range of tasks, such as generating new examples of a concept or generating whole new concepts.

To do so, they developed a 'Bayesian Program Learning' (BPL) framework, where concepts are represented as simple computer programs. For instance, the letter 'A' is represented by computer code - resembling the work of a computer programmer - that generates examples of that letter when the code is run.

Yet no programmer is required during the learning process: the algorithm programs itself by constructing code to produce the letter it sees. Also, unlike standard computer programs that produce the same output every time they run, these probabilistic programs produce different outputs at each execution. This allows them to capture the way instances of a concept vary, such as the differences between how two people draw the letter 'A.'
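The idea of a probabilistic program can be sketched in a few lines of ordinary Python. This is a purely illustrative toy, not the authors' actual BPL code: the function name, the three-stroke decomposition, and the stroke parameters are all invented for the example. The key property it demonstrates is that the 'program' for a letter samples its parameters from distributions, so each run yields a different instance of the same concept.

```python
import random

def draw_letter_A(rng):
    """Toy probabilistic 'program' for the letter A.

    Stroke endpoints are sampled from distributions rather than fixed,
    so every execution produces a slightly different rendering of the
    concept -- much like two people writing the same letter.
    """
    apex_x = rng.gauss(0.5, 0.05)   # apex drifts around the centre
    spread = rng.gauss(0.4, 0.05)   # how wide the two legs splay
    bar_h = rng.gauss(0.45, 0.05)   # height of the crossbar

    left_leg = ((apex_x, 1.0), (apex_x - spread, 0.0))
    right_leg = ((apex_x, 1.0), (apex_x + spread, 0.0))
    crossbar = ((apex_x - spread * bar_h, bar_h),
                (apex_x + spread * bar_h, bar_h))
    return [left_leg, right_leg, crossbar]

rng = random.Random(0)
sample1 = draw_letter_A(rng)
sample2 = draw_letter_A(rng)
# Both samples share the same three-stroke structure, but their exact
# coordinates differ from run to run.
```

In the learning direction, BPL inverts this process: given one example character, it searches for a program of strokes and sub-strokes whose varied outputs best explain what it sees.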

While standard pattern recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns "generative models" of processes in the world, making learning a matter of 'model building' or 'explaining' the data provided to the algorithm. In the case of writing and recognizing letters, BPL is designed to capture both the causal and compositional properties of real-world processes, allowing the algorithm to use data more efficiently.

The model also "learns to learn" by using knowledge from previous concepts to speed learning on new concepts - e.g., using knowledge of the Latin alphabet to learn letters in the Greek alphabet. The authors applied their model to over 1,600 types of handwritten characters in 50 of the world's writing systems, including Sanskrit, Tibetan, Gujarati, Glagolitic - and even invented characters such as those from the television series Futurama.

In addition to testing the algorithm's ability to recognize new instances of a concept, the authors asked both humans and computers to reproduce a series of handwritten characters after being shown a single example of each character, or in some cases, to create new characters in the style of those they had been shown.

The scientists then compared the outputs from both humans and machines through 'visual Turing tests.' Here, human judges were given paired examples of both the human and machine output, along with the original prompt, and asked to identify which of the symbols were produced by the computer.

While judges' correct responses varied across characters, for each visual Turing test, fewer than 25 percent of judges performed significantly better than chance in assessing whether a machine or a human produced a given set of symbols.
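The phrase "significantly better than chance" can be made concrete with a standard one-sided binomial test. This is an illustrative back-of-the-envelope calculation, not the paper's exact statistical analysis, and the 50-trial count is an assumption chosen for the example: a judge guessing blindly on human/machine pairs is right half the time, so the question is how many correct answers it takes before guessing becomes an implausible explanation.

```python
from math import comb

def binom_p_value(successes, trials, p=0.5):
    """One-sided p-value: probability of >= successes under pure chance."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# A judge sees 50 human/machine pairs and must pick out the machine each
# time. Pure guessing averages 25 correct; find the smallest score that
# is 'significantly better than chance' at the 5% level.
trials = 50
threshold = next(k for k in range(trials + 1)
                 if binom_p_value(k, trials) < 0.05)
```

Under these assumptions the cutoff works out to 32 correct out of 50 (64 percent), so judges whose accuracy hovered near 50 percent fell well short of it, which is what makes the machine's drawings hard to tell apart from human ones.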

"Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven't seen," notes Tenenbaum.

"I've wanted to build models of these remarkable abilities since my own doctoral work in the late nineties. We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts - even simple visual concepts such as handwritten characters - in ways that are hard to tell apart from humans."


Related Links
New York University