Shrinking AI memory improves LLM accuracy

by Sophie Jenkins
London, UK (SPX) Dec 26, 2025

Researchers have developed a new way to compress the memory used by AI models, either increasing their accuracy on complex tasks or reducing the energy needed to run them.

Experts from the University of Edinburgh and NVIDIA found that large language models using memory eight times smaller than that of an uncompressed system scored better on maths, science, and coding tests while spending the same amount of time reasoning. The method can also be configured so that models respond to more user queries simultaneously, lowering the power required per task.

The approach focuses on the models' key-value cache, or KV cache, which stores the intermediate attention data computed for every token the model has generated, including the step-by-step reasoning sequences known as reasoning threads. As models generate more threads or extend them, the KV cache grows and becomes slower to read, creating a bottleneck during inference, when the system answers prompts.
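The memory cost described above grows linearly with the number of cached tokens, since keys and values are stored for every layer, head, and token. A back-of-the-envelope sketch (the layer, head, and dimension counts below are invented for illustration, not taken from the models in the study):

```python
# Illustrative sketch, NOT the authors' code: how a KV cache grows
# during autoregressive decoding. Model dimensions are assumptions.

n_layers, n_heads, head_dim = 4, 8, 64

def cache_bytes(seq_len, dtype_bytes=2):
    # Keys AND values (factor of 2) are stored per layer, per head,
    # per token, in a dtype such as 16-bit floats (2 bytes).
    return 2 * n_layers * n_heads * head_dim * seq_len * dtype_bytes

for tokens in (1_000, 8_000, 32_000):
    print(f"{tokens:>6} tokens -> {cache_bytes(tokens) / 2**20:.1f} MiB")
```

Because the cost scales with sequence length, compressing the cache to one eighth of its size directly multiplies how many, or how long, reasoning threads fit in the same memory budget.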

To address this, the team developed Dynamic Memory Sparsification (DMS), a technique that compresses the KV cache by deciding which tokens to retain and which to delete. Instead of keeping every token, DMS selects those judged most important so the model keeps useful context while reducing memory use.

There is a short delay between deciding to delete tokens and actually removing them, which gives the model time to transfer valuable information from tokens that will be evicted into those that remain. By managing token eviction in this way, DMS allows the AI model to explore more possible solutions or reason in greater depth without extra compute.
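The delayed-eviction idea can be sketched in a few lines. The toy cache below, with invented importance scores and a fixed grace window, illustrates the concept only and is not the paper's DMS implementation: tokens flagged for removal stay readable for a few steps before being dropped, and only the lowest-scoring tokens are ever evicted.

```python
# Toy sketch of delayed eviction, NOT the DMS implementation.
# Importance scores, budget, and window length are invented.
from collections import deque

class SparseKVCache:
    """Entries over budget are flagged, then removed only `delay` steps
    later, leaving a grace window in which they remain readable."""

    def __init__(self, budget, delay=4):
        self.budget = budget        # tokens to retain long-term
        self.delay = delay          # steps between flagging and removal
        self.scores = {}            # token_id -> importance score
        self.flagged = set()        # flagged but not yet removed
        self.pending = deque()      # (step_flagged, token_id)
        self.step = 0

    def append(self, token_id, score):
        self.step += 1
        self.scores[token_id] = score
        # Over budget: flag the least-important unflagged token.
        live = len(self.scores) - len(self.flagged)
        if live > self.budget:
            victim = min((t for t in self.scores if t not in self.flagged),
                         key=self.scores.get)
            self.flagged.add(victim)
            self.pending.append((self.step, victim))
        # Evict only tokens whose grace window has elapsed.
        while self.pending and self.step - self.pending[0][0] >= self.delay:
            _, tid = self.pending.popleft()
            del self.scores[tid]
            self.flagged.discard(tid)

    def retained(self):
        return set(self.scores)

cache = SparseKVCache(budget=4, delay=2)
for tid, score in enumerate([0.9, 0.1, 0.8, 0.2, 0.7, 0.6, 0.5]):
    cache.append(tid, score)
# The lowest-scoring early token (id 1) is gone after its grace window;
# more recently flagged tokens (ids 3 and 6) are still readable.
```

During the grace window, a real system can move useful information out of a doomed token before it disappears, which is what lets the compressed cache keep the context the model needs.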

The researchers tested DMS on different versions of the Llama and Qwen model families and compared their performance with non-compressed baselines. Even when memory was compressed to one eighth of its original size, large language models maintained their accuracy on difficult tasks and produced results faster than non-compressed systems.

In the AIME 24 mathematics test, which serves as a qualifier for the United States Mathematical Olympiad, compressed models performed twelve points better on average while using the same number of KV cache reads per answer. On GPQA Diamond, a set of complex questions in biology, chemistry, and physics authored by PhD-level experts, the compressed models scored more than eight points higher.

The models were also evaluated with LiveCode Bench, which measures how well AI systems write code. In these tests, compressed models scored about ten points better on average than non-compressed models, indicating that KV cache compression can preserve and enhance reasoning quality while operating with much smaller memory budgets.

The findings were peer reviewed and presented at the NeurIPS 2025 conference. The paper, titled "Inference-Time Hyper-Scaling with KV Cache Compression," is available at https://openreview.net/pdf?id=8ZiElzQxf1.

Dr Edoardo Ponti, GAIL Fellow and Lecturer in Natural Language Processing at the University's School of Informatics, said: "In a nutshell, our models can reason faster but with the same quality. Hence, for an equivalent time budget for reasoning, they can explore more and longer reasoning threads. This improves their ability to solve complex problems in maths, science, and coding."

Dr Ponti and his team will continue to study how large AI systems represent and remember information as part of AToM-FM, a 1.5 million euro project funded by the European Research Council that aims to make such systems more efficient and sustainable.

Research Report: Inference-Time Hyper-Scaling with KV Cache Compression

Related Links
University of Edinburgh
Space Technology News - Applications and Research
