by Staff Writers Bielefeld, Germany (SPX) Jun 24, 2022
Currently, a key question in AI research is how to arrive at comprehensible explanations of underlying machine processes: Should humans be able to explain how machines work, or should machines learn to explain themselves? 'The double meaning of the name of our conference, "Explaining Machines," expresses these various possibilities: machines explaining themselves, humans explaining machines - or maybe both at the same time,' says Professor Dr. Elena Esposito. The Bielefeld sociologist is heading a subproject at TRR 318 and is organising the conference together with her colleague Professor Dr. Tobias Matzner from Paderborn University. Dr. Matzner is a media studies researcher who is also heading a Transregio subproject. 'If explanations from machines are to have an impact socially and politically, it's not enough that explanations are comprehensible to computer scientists,' says Matzner. 'Different socially situated people must be included in explanatory processes - from doctors to retirees and schoolchildren.'
The technical and social challenges of AI projects
Previously, the 'explainability' of artificial intelligence was largely the domain of computer scientists. 'In this research approach, the main view is that explainability and comprehension arise from transparency - that is, having as much information available as possible. An alternative view to this is that of co-construction,' says Professor Dr. Katharina Rohlfing, a linguist at Paderborn University and the spokesperson of Transregio. 'In our research, we do not consider humans to be passive partners who simply receive explanations. Instead, explanations emerge at the interface between the explainer and the explainee. Both actively shape the process of explanation and work towards achieving agreement on common ideas and conceptions. Cross-disciplinary collaboration is therefore essential to the study of explainability.'
Conference talks from media studies, philosophy, law, and sociology
The conference 'Explaining Machines' is Transregio 318's first major scientific event. Information on the program can be found here. In the next three funding years, further conferences are planned that will focus on the concept of explainability from the perspectives of different scientific disciplines. Members of the press are welcome to report on the conference: registration is required in advance by sending an email to [email protected]. The conference organizers and the Transregio spokesperson will be available during the conference to answer any questions from the press.
Collaborative Research Centre/Transregio (TRR) 318