Parallelizing common algorithms
by Larry Hardesty, MIT News Office
Boston MA (SPX) Feb 05, 2015


Image courtesy Christine Daniloff/MIT.

Every undergraduate computer-science major takes a course on data structures, which describes different ways of organizing data in a computer's memory. Every data structure has its own advantages: Some are good for fast retrieval, some for efficient search, some for quick insertions and deletions, and so on.

Today, hardware manufacturers are making computer chips faster by giving them more cores, or processing units. But while some data structures are well adapted to multicore computing, others are not. In principle, doubling the number of cores should double the speed of a computation. With algorithms that use a common data structure called a priority queue, that holds true up to about eight cores -- but adding more cores beyond that actually causes performance to plummet.

At the Association for Computing Machinery's Symposium on Principles and Practice of Parallel Programming in February, researchers from MIT's Computer Science and Artificial Intelligence Laboratory will describe a new way of implementing priority queues that lets them keep pace with the addition of new cores. In simulations, algorithms using their data structure continued to demonstrate performance improvement with the addition of new cores, up to a total of 80 cores.

A priority queue is a data structure that, as its name might suggest, sequences data items according to priorities assigned them when they're stored. At any given time, only the item at the front of the queue -- the highest-priority item -- can be retrieved. Priority queues are central to the standard algorithms for finding the shortest path across a network and for simulating events, and they've been used for a host of other applications, from data compression to network scheduling.
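To make the behavior concrete, here is a minimal Python sketch using the standard library's heapq module as the priority queue; the items and priority values are invented for illustration.

```python
import heapq

# A priority queue: items come out lowest priority number first.
queue = []
heapq.heappush(queue, (2, "send packet B"))
heapq.heappush(queue, (1, "send packet A"))
heapq.heappush(queue, (3, "send packet C"))

# Only the front of the queue -- the highest-priority item -- is retrievable.
priority, task = heapq.heappop(queue)
print(priority, task)  # 1 send packet A
```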

With multicore systems, however, conflicts arise when multiple cores try to access the front of a priority queue at the same time. The problem is compounded by modern chips' reliance on caches -- high-speed memory banks where cores store local copies of frequently used data.

"As you're reading the front of the queue, the whole front of the queue will be in your cache," says Justin Kopinsky, an MIT graduate student in electrical engineering and computer science and one of the new paper's co-authors.

"All of these guys try to put the first element in their cache and then do a bunch of stuff with it, but then somebody writes to it, and it invalidates everybody else's cache. And this is like an order-of-magnitude slowdown -- maybe multiple orders of magnitude."

Loosening up
To avoid this problem, Kopinsky; fellow graduate student Jerry Li; their advisor, professor of computer science and engineering Nir Shavit; and Microsoft Research's Dan Alistarh, a former student of Shavit's, relaxed the requirement that each core has to access the first item in the queue. If the items at the front of the queue can be processed in parallel -- which must be the case for multicore computing to work, anyway -- they can simply be assigned to cores at random.
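In miniature, the relaxation looks something like the following toy (and deliberately sequential) Python sketch: rather than every core insisting on item zero, each caller grabs a random item from a small window at the front. This is only an illustration of the idea, not the authors' concurrent code.

```python
import random

def relaxed_pop(front_window):
    """Toy illustration: take any of the first few items rather than
    strictly the first, so concurrent callers rarely collide."""
    index = random.randrange(len(front_window))
    return front_window.pop(index)
```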

But a core has to know where to find the data item it's been assigned, which is harder than it sounds. Data structures generally trade ease of insertion and deletion for ease of addressability. You could, for instance, assign every position in a queue its own memory address: To find the fifth item, you would simply go to the fifth address.

But then, if you wanted to insert a new item between, say, items four and five, you'd have to copy the last item in the queue into the first empty address, then copy the second-to-last item into the address you just vacated, and so on, until you'd vacated address five. Priority queues are constantly being updated, so this approach is woefully impractical.
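A short Python sketch makes the cost visible; the queue contents and insertion point are hypothetical.

```python
# Array-backed queue: addressing is easy (queue[4] is the fifth item) ...
queue = [10, 20, 30, 40, 50, 60]

# ... but inserting between items four and five means shifting every
# later element one slot to the right -- linear work per insertion.
def insert_at(arr, index, value):
    arr.append(None)                 # open one empty slot at the end
    for i in range(len(arr) - 1, index, -1):
        arr[i] = arr[i - 1]          # copy each element one position right
    arr[index] = value

insert_at(queue, 4, 45)
print(queue)  # [10, 20, 30, 40, 45, 50, 60]
```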

An alternative is to use what's known as a linked list. Each element of a linked list consists of a data item and a "pointer" to the memory address of the next element. Inserting a new element between elements four and five is then just a matter of updating two pointers.
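Here is a minimal Python sketch of that pointer update; the class and values are illustrative, not drawn from the paper.

```python
class Node:
    def __init__(self, value, next_node=None):
        self.value = value        # the data item
        self.next = next_node     # pointer to the next element in the list

def insert_after(node, value):
    # Splice in a new element by updating just two pointers:
    # the predecessor's and the new node's.
    node.next = Node(value, node.next)

# Build 10 -> 20 -> 30, then insert 25 between the second and third elements.
head = Node(10, Node(20, Node(30)))
insert_after(head.next, 25)       # list is now 10 -> 20 -> 25 -> 30
```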

Road less traveled
The only way to find a particular item in a linked list, however, is to start with item one and follow the ensuing sequence of pointers. This is a problem if multiple cores are trying to modify data items simultaneously. Say that a core has been assigned element five. It goes to the head of the list and starts working its way down. But another core is already in the process of modifying element three, so the first core has to sit and wait until it's done.

The MIT researchers break this type of logjam by repurposing yet another data structure, called a skip list. The skip list begins with a linked list and builds a hierarchy of linked lists on top of it. Only, say, half the elements in the root list are included in the list one layer up the hierarchy. Only half the elements in the second layer are included in the third, and so on.

The skip list was designed to make moving through a linked list more efficient. To find a given item in the root list, you follow the pointers through the top list until you identify the gap into which it falls, then move down one layer and repeat the process.
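A minimal Python sketch of that descent, assuming each node keeps one forward pointer per layer of the hierarchy; the node layout is an assumption for illustration, not the researchers' implementation.

```python
class SkipNode:
    def __init__(self, key, height):
        self.key = key
        self.forward = [None] * height   # one "next" pointer per layer

def skiplist_search(head, key):
    """Scan right along the top layer until the next node would overshoot,
    drop down one layer, and repeat until the root (bottom) list is reached."""
    node = head
    for level in reversed(range(len(head.forward))):
        while node.forward[level] is not None and node.forward[level].key < key:
            node = node.forward[level]
    return node.forward[0]   # first root-list node whose key is >= key, if any
```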

But the MIT researchers' algorithm starts farther down the hierarchy; how far down depends on how many cores are trying to access the root list. Each core then moves some random number of steps and jumps down to the next layer of the hierarchy. It repeats the process until it reaches the root list. Collisions can still happen, particularly when a core is modifying a data item that appears at multiple levels of the hierarchy, but they become much rarer.
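Below is a loose Python sketch of that traversal, reusing the SkipNode layout from the previous sketch. The starting level and random step lengths are placeholder guesses chosen for illustration; the authors' published data structure tunes these parameters carefully.

```python
import math
import random

def spray_walk(head, num_cores):
    """Illustrative relaxed removal: rather than contending for the front of
    the list, a core starts partway down the hierarchy, takes a random number
    of horizontal steps, drops a layer, and repeats until it reaches the root
    list, landing on some element near (but rarely at) the front."""
    # Start higher in the hierarchy as more cores compete (placeholder formula).
    level = min(len(head.forward) - 1, int(math.log2(max(num_cores, 2))))
    node = head
    while level >= 0:
        for _ in range(random.randint(0, 2)):   # random hop length (illustrative)
            nxt = node.forward[level]
            if nxt is None:
                break
            node = nxt
        level -= 1                               # descend one layer
    return node   # candidate near-front element; the caller then tries to claim it
```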

