by Brooks Hays
Washington (UPI) Jun 1, 2017
Computer scientists at Rice University have developed a new technique for cutting the amount of computation required for deep learning.
The simplification technique resembles the hashing-based indexing methods commonly used to reduce the math required for large-scale data analysis.
"This applies to any deep-learning architecture, and the technique scales sublinearly, which means that the larger the deep neural network to which this is applied, the more the savings in computations there will be," lead researcher Anshumali Shrivastava, an assistant professor of computer science, said in a news release.
Deep-learning networks hold tremendous potential in a variety of fields, from healthcare to communications, but they remain cumbersome, requiring significant amounts of computing power.
Scientists regularly use a data-indexing method called "hashing" to slim down large amounts of computation. Hashing converts large pieces of data into short numerical codes and then organizes those codes into indexes for fast lookup.
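The idea can be sketched in a few lines. This is a generic illustration of hash-based indexing, not the researchers' code; the items and bucket count are made up for the example.

```python
import hashlib

def bucket_of(item: str, num_buckets: int = 16) -> int:
    """Map an arbitrary item to one of a fixed number of buckets.
    Lookups then touch one bucket instead of scanning every item."""
    digest = hashlib.md5(item.encode()).hexdigest()
    return int(digest, 16) % num_buckets

# Build an index: items that hash to the same bucket are stored together.
index = {}
for item in ["dog.jpg", "cat.jpg", "car.jpg", "tree.jpg"]:
    index.setdefault(bucket_of(item), []).append(item)

# A lookup now inspects a single bucket rather than the whole collection.
candidates = index.get(bucket_of("dog.jpg"), [])
```

Because the hash is deterministic, a query item always lands in the same bucket where matching items were stored, which is what makes the index useful.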
"Our approach blends two techniques -- a clever variant of locality-sensitive hashing and sparse backpropagation -- to reduce computational requirements without significant loss of accuracy," said Rice graduate student Ryan Spring. "For example, in small-scale tests we found we could reduce computation by as much as 95 percent and still be within 1 percent of the accuracy obtained with standard approaches."
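A rough sketch of how locality-sensitive hashing can limit which neurons get evaluated follows. This is not the authors' implementation: it uses SimHash (signed random projections), one common LSH family, and the layer sizes and bit counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim, n_bits = 1000, 64, 8   # illustrative sizes

W = rng.standard_normal((n_neurons, dim))    # one weight vector per neuron
planes = rng.standard_normal((n_bits, dim))  # random hyperplanes for SimHash

def simhash(v):
    """Signed-random-projection hash: one bit per hyperplane,
    set by which side of the hyperplane the vector falls on."""
    bits = (planes @ v) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

# Pre-index every neuron by the hash of its weight vector.
table = {}
for i in range(n_neurons):
    table.setdefault(simhash(W[i]), []).append(i)

# At run time, hash the input and evaluate only the neurons in its bucket.
x = rng.standard_normal(dim)
active = table.get(simhash(x), [])
outputs = {i: float(W[i] @ x) for i in active}  # the rest are skipped
```

Since SimHash collisions favor vectors pointing in similar directions, the bucketed neurons are roughly those most likely to respond strongly to the input, which is why skipping the others costs little accuracy.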
Deep learning relies on artificial neurons, mathematical functions that turn one or more received inputs into an output. Neurons in deep learning networks begin as blank slates but become more specialized over time as they learn from the inflow of data. As neurons become more specialized, they form a hierarchy of functions.
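A single artificial neuron of the kind described above can be written as a small function. This is a textbook sketch, not code from the study; the weights and inputs are arbitrary example values.

```python
import math

def neuron(inputs, weights, bias):
    """An artificial neuron: a weighted sum of inputs passed through
    a nonlinear activation (here, the logistic sigmoid)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Two inputs, example weights; the output is a single number in (0, 1).
y = neuron([0.5, -1.2], weights=[0.8, 0.3], bias=0.1)
```

Learning adjusts the weights and bias over time, which is how a neuron "specializes" to particular patterns in the incoming data.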
Low-level neurons perform simple functions, while higher-level neurons are responsible for more sophisticated computations. More layers can yield more sophisticated and powerful results. But more layers require more time, space and energy.
"With 'big data,' there are fundamental limits on resources like compute cycles, energy and memory. Our lab focuses on addressing those limitations," Shrivastava said.
Researchers believe their latest efforts will allow computer scientists to more efficiently deploy large deep learning networks.
"The savings increase with scale because we are exploiting the inherent sparsity in big data," Spring said. "For instance, let's say a deep net has a billion neurons. For any given input -- like a picture of a dog -- only a few of those will become excited. In data parlance, we refer to that as sparsity, and because of sparsity our method will save more as the network grows in size. So while we've shown a 95 percent savings with 1,000 neurons, the mathematics suggests we can save more than 99 percent with a billion neurons."
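The scaling claim in the quote is simple arithmetic: if the hash tables return roughly a fixed number of candidate neurons per input, the fraction of work skipped grows with layer size. The per-lookup count below is a made-up illustrative number, not a figure from the study.

```python
# Hypothetical fixed number of neurons retrieved per hash lookup.
active_per_input = 50

# Fraction of computation skipped at three layer sizes.
savings = {n: 1 - active_per_input / n
           for n in (1_000, 1_000_000, 1_000_000_000)}
# e.g. savings[1_000] == 0.95, and savings approaches 1.0 as the layer grows
```

With 1,000 neurons this gives the quoted 95 percent savings, and at a billion neurons the same fixed lookup cost amounts to well over 99 percent.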
Shrivastava and Spring are scheduled to share their work at the SIGKDD Conference on Knowledge Discovery and Data Mining, to be held in August in Nova Scotia.