TECH SPACE
Unpacking black-box models
by Adam Zewe for MIT News
Boston MA (SPX) May 08, 2022

Researchers create a mathematical framework to evaluate explanations of machine-learning models and quantify how well people understand them.

Modern machine-learning models, such as neural networks, are often referred to as "black boxes" because they are so complex that even the researchers who design them can't fully understand how they make predictions.

To provide some insights, researchers use explanation methods that seek to describe individual model decisions. For example, they may highlight words in a movie review that influenced the model's decision that the review was positive.

But these explanation methods don't do any good if humans can't easily understand them, or if they misunderstand them outright. So, MIT researchers created a mathematical framework to formally quantify and evaluate the understandability of explanations for machine-learning models. This can help pinpoint insights about model behavior that might otherwise be missed if a researcher evaluates only a handful of individual explanations in an attempt to understand the entire model.

"With this framework, we can have a very clear picture of not only what we know about the model from these local explanations, but more importantly what we don't know about it," says Yilun Zhou, an electrical engineering and computer science graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of a paper presenting this framework.

Zhou's co-authors include Marco Tulio Ribeiro, a senior researcher at Microsoft Research, and senior author Julie Shah, a professor of aeronautics and astronautics and the director of the Interactive Robotics Group in CSAIL. The research will be presented at the Conference of the North American Chapter of the Association for Computational Linguistics.

Understanding local explanations
One way to understand a machine-learning model is to find another model that mimics its predictions but uses transparent reasoning patterns. However, recent neural network models are so complex that this technique usually fails. Instead, researchers resort to using local explanations that focus on individual inputs. Often, these explanations highlight words in the text to signify their importance to one prediction made by the model.
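As a concrete illustration, here is a minimal sketch of one common feature-attribution approach, occlusion: score each word by how much the model's positive-sentiment probability drops when that word is removed. The function, the toy lexicon "model," and all names below are hypothetical stand-ins for illustration; the paper's experiments use established attribution methods rather than this simplified one.

```python
from typing import Callable, List, Tuple

def word_saliency(review: str,
                  positive_prob: Callable[[str], float]) -> List[Tuple[str, float]]:
    """Return (word, saliency) pairs for a single review.

    Saliency > 0 means the word pushed the prediction toward 'positive';
    saliency < 0 means it pushed toward 'negative'.
    """
    words = review.split()
    baseline = positive_prob(review)                  # score on the full text
    saliencies = []
    for i, word in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])  # drop one word
        saliencies.append((word, baseline - positive_prob(ablated)))
    return saliencies

if __name__ == "__main__":
    # A toy lexicon-based "classifier", purely illustrative.
    LEXICON = {"memorable": 0.3, "flawless": 0.3, "charming": 0.3, "dull": -0.4}
    toy_model = lambda text: min(1.0, max(0.0, 0.5 + sum(
        LEXICON.get(w, 0.0) for w in text.split())))
    print(word_saliency("a memorable but dull film", toy_model))
```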

Implicitly, people then generalize these local explanations to overall model behavior. Someone may see that a local explanation method highlighted positive words (like "memorable," "flawless," or "charming") as being the most influential when the model decided a movie review had a positive sentiment. They are then likely to assume that all positive words make positive contributions to a model's predictions, but that might not always be the case, Zhou says.

The researchers developed a framework, known as ExSum (short for explanation summary), that formalizes those types of claims into rules that can be tested using quantifiable metrics. ExSum evaluates a rule on an entire dataset, rather than just the single instance for which it is constructed.

Using a graphical user interface, an individual writes rules that can then be tweaked, tuned, and evaluated. For example, when studying a model that learns to classify movie reviews as positive or negative, one might write a rule that says "negation words have negative saliency," which means that words like "not," "no," and "nothing" contribute negatively to the sentiment of movie reviews.

Using ExSum, the user can see if that rule holds up using three specific metrics: coverage, validity, and sharpness. Coverage measures how broadly applicable the rule is across the entire dataset. Validity measures the percentage of individual examples that agree with the rule. Sharpness describes how precise the rule is; a highly valid rule could be so generic that it isn't useful for understanding the model.
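The sketch below shows how such a rule might be checked over a dataset of per-word saliency scores (for instance, the output of the attribution sketch above). The rule representation and the three metrics are simplified analogues for illustration only; ExSum's formal definitions of coverage, validity, and sharpness differ in detail, and evaluate_rule and the example data are assumptions, not the paper's implementation.

```python
from typing import Callable, Dict, List, Tuple

Attribution = List[Tuple[str, float]]   # (word, saliency) pairs for one review

NEGATION_WORDS = {"not", "no", "nothing", "never"}

def evaluate_rule(dataset: List[Attribution],
                  applies: Callable[[str], bool],
                  satisfied: Callable[[float], bool]) -> Dict[str, float]:
    """Evaluate a rule of the form 'words where `applies` holds have
    saliency where `satisfied` holds' across an attributed dataset."""
    applicable = [(w, s) for attribution in dataset
                  for (w, s) in attribution if applies(w)]
    total_words = sum(len(a) for a in dataset)
    # Coverage: how much of the dataset the rule speaks about at all.
    coverage = len(applicable) / max(total_words, 1)
    # Validity: how often the behaviour claim holds where the rule applies.
    validity = (sum(satisfied(s) for _, s in applicable)
                / max(len(applicable), 1))
    # Sharpness, crudely proxied: a claim that almost any word would satisfy
    # is uninformative, so score 1 minus the fraction of all words that
    # happen to satisfy it. Only a rough analogue of the paper's definition.
    all_scores = [s for attribution in dataset for _, s in attribution]
    trivially_true = sum(satisfied(s) for s in all_scores) / max(len(all_scores), 1)
    sharpness = 1.0 - trivially_true
    return {"coverage": coverage, "validity": validity, "sharpness": sharpness}

if __name__ == "__main__":
    # Tiny attributed dataset with made-up saliency numbers.
    data = [
        [("not", -0.4), ("memorable", 0.3), ("at", 0.0), ("all", -0.1)],
        [("charming", 0.35), ("and", 0.0), ("flawless", 0.3)],
        [("nothing", -0.5), ("works", -0.2), ("here", 0.0)],
    ]
    # Rule: "negation words have negative saliency".
    print(evaluate_rule(data,
                        applies=lambda w: w in NEGATION_WORDS,
                        satisfied=lambda s: s < 0))
```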

Testing assumptions
If a researcher seeks a deeper understanding of how her model is behaving, she can use ExSum to test specific assumptions, Zhou says.

If she suspects her model discriminates by gender, she could create rules saying that male pronouns have a positive contribution and female pronouns have a negative contribution. If these rules have high validity, it means they hold across the dataset and the model is likely biased.
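Reusing the hypothetical evaluate_rule function and example data from the sketch above (again purely as an assumed illustration, not the actual ExSum interface), that bias check might be expressed as two rules:

```python
# Two rules probing the gender-bias concern; high validity on both would
# suggest the model systematically treats the pronoun groups differently.
male_metrics = evaluate_rule(
    data,
    applies=lambda w: w in {"he", "him", "his"},
    satisfied=lambda s: s > 0,   # "male pronouns contribute positively"
)
female_metrics = evaluate_rule(
    data,
    applies=lambda w: w in {"she", "her", "hers"},
    satisfied=lambda s: s < 0,   # "female pronouns contribute negatively"
)
```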

ExSum can also reveal unexpected information about a model's behavior. For example, when evaluating the movie review classifier, the researchers were surprised to find that negative words tend to have more pointed and sharper contributions to the model's decisions than positive words. This could be due to review writers trying to be polite and less blunt when criticizing a film, Zhou explains.

"To really confirm your understanding, you need to evaluate these claims much more rigorously on a lot of instances. This kind of understanding at this fine-grained level, to the best of our knowledge, has never been uncovered in previous works," he says.

"Going from local explanations to global understanding was a big gap in the literature. ExSum is a good first step at filling that gap," adds Ribeiro.

Extending the framework
In the future, Zhou hopes to build upon this work by extending the notion of understandability to other criteria and explanation forms, like counterfactual explanations (which indicate how to modify an input to change the model prediction). For now, the work focuses on feature-attribution methods, which describe the individual features a model used to make a decision (like the words in a movie review).

In addition, he wants to further enhance the framework and user interface so people can create rules faster. Writing rules can require hours of human effort - and some human involvement is crucial, because people must ultimately be able to grasp the explanations - but AI assistance could streamline the process.

As he ponders the future of ExSum, Zhou hopes the work highlights a need to shift the way researchers think about machine-learning model explanations.

"Before this work, if you have a correct local explanation, you are done. You have achieved the holy grail of explaining your model. We are proposing this additional dimension of making sure these explanations are understandable. Understandability needs to be another metric for evaluating our explanations," says Zhou.

Research Report: "ExSum: The Explanation Summary Framework for Deriving Generalized Model Understandings from Local Explanations"


Related Links
Computer Science and Artificial Intelligence Laboratory (CSAIL)
