24/7 Space News
New method accelerates data retrieval in huge databases
by Adam Zewe for MIT News
Boston MA (SPX) Mar 14, 2023

Hashing is a core operation in most online databases, like a library catalogue or an e-commerce website. A hash function generates a code that directly determines the location where a piece of data will be stored, so those codes make the data easy to find and retrieve.

However, because traditional hash functions generate codes randomly, two pieces of data can sometimes be hashed to the same value. This causes collisions: a search for one item points the user to many pieces of data that share a hash value, so it takes much longer to find the right one, resulting in slower searches and reduced performance.

Certain types of hash functions, known as perfect hash functions, are designed to place the data in a way that prevents collisions. But they are time-consuming to construct for each dataset and take more time to compute than traditional hash functions.

Since hashing is used in so many applications, from database indexing to data compression to cryptography, fast and efficient hash functions are critical. So, researchers from MIT and elsewhere set out to see if they could use machine learning to build better hash functions.

They found that, in certain situations, using learned models instead of traditional hash functions could result in half as many collisions. These learned models are created by running a machine-learning algorithm on a dataset to capture specific characteristics. The team's experiments also showed that learned models were often more computationally efficient than perfect hash functions.

"What we found in this work is that in some situations we can come up with a better tradeoff between the computation of the hash function and the collisions we will face. In these situations, the computation time for the hash function can be increased a bit, but at the same time its collisions can be reduced very significantly," says Ibrahim Sabek, a postdoc in the MIT Data Systems Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Their research, which will be presented at the 2023 International Conference on Very Large Databases, demonstrates how a hash function can be designed to significantly speed up searches in a huge database. For instance, their technique could accelerate computational systems that scientists use to store and analyze DNA, amino acid sequences, or other biological information.

Sabek is the co-lead author of the paper with Department of Electrical Engineering and Computer Science (EECS) graduate student Kapil Vaidya. They are joined by co-authors Dominick Horn, a graduate student at the Technical University of Munich; Andreas Kipf, an MIT postdoc; Michael Mitzenmacher, professor of computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences; and senior author Tim Kraska, associate professor of EECS at MIT and co-director of the Data, Systems, and AI Lab.

Hashing it out
Given a data input, or key, a traditional hash function generates a random number, or code, that corresponds to the slot where that key will be stored. To use a simple example, if there are 10 keys to be put into 10 slots, the function would generate a random integer between 1 and 10 for each input. It is highly probable that two keys will end up in the same slot, causing collisions.
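The toy scenario above can be sketched in a few lines of Python. The key names and the use of MD5 as a stand-in for a generic hash function are illustrative, not from the paper:

```python
import hashlib

def traditional_hash(key: str, num_slots: int) -> int:
    """Map a key to a slot using a generic (non-learned) hash function.
    MD5 here is just a convenient deterministic stand-in."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_slots

keys = [f"key_{i}" for i in range(10)]
slots = [traditional_hash(k, 10) for k in keys]

# With 10 keys and 10 slots, a collision is almost certain: the chance that
# all 10 keys land in distinct slots is 10!/10**10, under 0.04 percent.
collisions = len(slots) - len(set(slots))
print(f"slots: {slots}")
print(f"number of collisions: {collisions}")
```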

Perfect hash functions provide a collision-free alternative. Researchers give the function some extra knowledge, such as the number of slots the data are to be placed into. Then it can perform additional computations to figure out where to put each key to avoid collisions. However, these added computations make the function harder to create and less efficient.
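The construction cost can be seen in a deliberately naive sketch of a perfect hash: brute-force searching for a "salt" that happens to map every key in a known set to a distinct slot. Real perfect-hashing algorithms are far more efficient, but the principle (extra up-front computation buys a collision-free layout) is the same:

```python
import hashlib

def find_perfect_salt(keys, num_slots):
    """Search for a salt under which the salted hash is collision-free for
    this specific key set. A toy stand-in for real perfect-hash construction,
    illustrating why building one is expensive."""
    for salt in range(1_000_000):
        slots = {int(hashlib.md5(f"{salt}:{k}".encode()).hexdigest(), 16) % num_slots
                 for k in keys}
        if len(slots) == len(keys):
            return salt
    raise RuntimeError("no collision-free salt found")

keys = [f"key_{i}" for i in range(8)]
salt = find_perfect_salt(keys, num_slots=8)  # costly to construct...

def lookup(k):
    # ...but guaranteed collision-free for these keys.
    return int(hashlib.md5(f"{salt}:{k}".encode()).hexdigest(), 16) % 8

assert len({lookup(k) for k in keys}) == len(keys)
```

Note that the resulting function only works for the key set it was built for: add a new key and the whole search may need to be redone.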

"We were wondering, if we know more about the data - that it will come from a particular distribution - can we use learned models to build a hash function that can actually reduce collisions?" Vaidya says.

A data distribution shows all possible values in a dataset, and how often each value occurs. The distribution can be used to calculate the probability that a particular value is in a data sample.

The researchers took a small sample from a dataset and used machine learning to approximate the shape of the data's distribution, or how the data are spread out. The learned model then uses the approximation to predict the location of a key in the dataset.
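A minimal sketch of this idea, assuming (as the article describes, with illustrative details of my own) that the "model" is an interpolated empirical cumulative distribution function (CDF) built from a sample: a key's approximate CDF value, scaled by the number of slots, becomes its predicted slot.

```python
import bisect
import random

# "Learn" the distribution of 10,000 Gaussian keys from a 200-key sample,
# then hash each key to a slot via its approximate CDF position.
random.seed(42)
data = [random.gauss(0, 1) for _ in range(10_000)]
sample = sorted(random.sample(data, 200))

def learned_hash(key: float, num_slots: int) -> int:
    # Interpolated empirical CDF of the sample estimates where the key falls
    # in the overall distribution; scaling by num_slots spreads keys across
    # slots in proportion to their density.
    i = bisect.bisect_left(sample, key)
    if i == 0:
        cdf = 0.0
    elif i == len(sample):
        cdf = 1.0
    else:
        lo, hi = sample[i - 1], sample[i]
        frac = (key - lo) / (hi - lo) if hi > lo else 0.0
        cdf = (i - 1 + frac) / (len(sample) - 1)
    return min(int(cdf * num_slots), num_slots - 1)

num_slots = 10_000
slots = [learned_hash(x, num_slots) for x in data]
collision_ratio = 1 - len(set(slots)) / len(slots)
print(f"fraction of keys colliding: {collision_ratio:.2f}")
```

Because the mapping follows the data's density rather than assigning slots at random, keys from a predictable distribution end up spread more evenly across slots.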

They found that learned models were easier to build and faster to run than perfect hash functions and that they led to fewer collisions than traditional hash functions if data are distributed in a predictable way. But if the data are not predictably distributed because gaps between data points vary too widely, using learned models might cause more collisions.

"We may have a huge number of data inputs, and the gaps between consecutive inputs are very different, so learning a model to capture the data distribution of these inputs is quite difficult," Sabek explains.

Fewer collisions, faster results
When data were predictably distributed, learned models could reduce the ratio of colliding keys in a dataset from 30 percent to 15 percent, compared with traditional hash functions. They were also able to achieve better throughput than perfect hash functions. In the best cases, learned models reduced the runtime by nearly 30 percent.

As they explored the use of learned models for hashing, the researchers also found that throughput was impacted most by the number of sub-models. Each learned model is composed of smaller linear models that approximate the data distribution for different parts of the data. With more sub-models, the learned model produces a more accurate approximation, but it takes more time.
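The accuracy-versus-work tradeoff can be illustrated with a hypothetical piecewise model, where each sub-model is a straight-line fit of rank against key over one chunk of a sorted sample. This is a sketch of the idea, not the authors' architecture:

```python
import bisect
import random

random.seed(7)
sample = sorted(random.gauss(0, 1) for _ in range(1_000))

def fit_submodels(sorted_sample, num_submodels):
    """Split the sorted sample into equal chunks and fit one linear model
    (rank ~ a*key + b) per chunk."""
    n = len(sorted_sample)
    chunk = n // num_submodels
    models = []
    for i in range(num_submodels):
        xs = sorted_sample[i * chunk:(i + 1) * chunk]
        lo, hi = xs[0], xs[-1]
        a = (len(xs) - 1) / (hi - lo) if hi > lo else 0.0
        models.append((lo, a, i * chunk - a * lo))  # (segment start, slope, intercept)
    return models

def predicted_rank(models, key, n):
    # Pick the sub-model whose segment contains the key, then apply it.
    i = max(bisect.bisect_right([m[0] for m in models], key) - 1, 0)
    lo, a, b = models[i]
    return min(max(int(a * key + b), 0), n - 1)

# More sub-models give a tighter fit to the distribution, at the cost of
# more construction work and a slightly more expensive lookup.
for k in (1, 4, 16):
    models = fit_submodels(sample, k)
    err = sum(abs(predicted_rank(models, x, len(sample)) - r)
              for r, x in enumerate(sample)) / len(sample)
    print(f"{k:>2} sub-models: mean rank error = {err:.1f}")
```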

"At a certain threshold of sub-models, you get enough information to build the approximation that you need for the hash function. But after that, it won't lead to more improvement in collision reduction," Sabek says.

Building off this analysis, the researchers want to use learned models to design hash functions for other types of data. They also plan to explore learned hashing for databases in which data can be inserted or deleted. When data are updated in this way, the model needs to change accordingly, but changing the model while maintaining accuracy is a difficult problem.

"We want to encourage the community to use machine learning inside more fundamental data structures and algorithms. Any kind of core data structure presents us with an opportunity to use machine learning to capture data properties and get better performance. There is still a lot we can explore," Sabek says.

"Hashing and indexing functions are core to a lot of database functionality. Given the variety of users and use cases, there is no one size fits all hashing, and learned models help adapt the database to a specific user. This paper is a great balanced analysis of the feasibility of these new techniques and does a good job of talking rigorously about the pros and cons, and helps us build our understanding of when such methods can be expected to work well," says Murali Narayanaswamy, a principal machine learning scientist at Amazon, who was not involved with this work. "Exploring these kinds of enhancements is an exciting area of research both in academia and industry, and the kind of rigor shown in this work is critical for these methods to have large impact."

Research Report: "Can Learned Models Replace Hash Functions?"

Related Links
Data Systems and AI Lab
