by Staff Writers
Pittsburgh PA (SPX) Jun 18, 2020
For safety's sake, a self-driving car must accurately track the movement of pedestrians, bicycles and other vehicles around it. Training those tracking systems may now be more effective thanks to a new method developed at Carnegie Mellon University.

Generally speaking, the more road and traffic data available for training tracking systems, the better the results. And the CMU researchers have found a way to unlock a mountain of autonomous driving data for this purpose.

"Our method is much more robust than previous methods because we can train on much larger datasets," said Himangi Mittal, a research intern working with David Held, assistant professor in CMU's Robotics Institute.

Most autonomous vehicles navigate primarily based on a sensor called a lidar, a laser device that generates 3D information about the world surrounding the car. This 3D information isn't images, but a cloud of points. One way the vehicle makes sense of this data is by using a technique known as scene flow, which involves calculating the speed and trajectory of each 3D point. Groups of points moving together are interpreted via scene flow as vehicles, pedestrians or other moving objects.

In the past, state-of-the-art methods for training such a system have required labeled datasets: sensor data that has been annotated to track each 3D point over time. Manually labeling these datasets is laborious and expensive, so, not surprisingly, little labeled data exists. As a result, scene flow training is instead often performed with simulated data, which is less effective, and then fine-tuned with the small amount of labeled real-world data that exists.

Mittal, Held and robotics Ph.D. student Brian Okorn took a different approach, using unlabeled data to perform scene flow training. Because unlabeled data is relatively easy to generate by mounting a lidar on a car and driving around, there's no shortage of it.

The key to their approach was to develop a way for the system to detect its own errors in scene flow. At each instant, the system tries to predict where each 3D point is going and how fast it's moving. In the next instant, it measures the distance between the point's predicted location and the actual location of the point nearest that predicted location. This distance forms the first type of error to be minimized.

The system then reverses the process, starting with the predicted point location and working backward to map back to where the point originated. At this point, it measures the distance between the predicted position and the actual origination point, and the resulting distance forms the second type of error. The system then works to correct both errors.

"It turns out that to eliminate both of those errors, the system actually needs to learn to do the right thing, without ever being told what the right thing is," Held said.

As convoluted as that might sound, Okorn found that it worked well. The researchers measured scene flow accuracy at only 25% when training on synthetic data alone. When the synthetic training was fine-tuned with a small amount of real-world labeled data, accuracy increased to 31%. When they added a large amount of unlabeled data to train the system using their approach, scene flow accuracy jumped to 46%.
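To make the two error terms concrete, here is a minimal sketch in Python with NumPy of how such self-supervised losses can be computed from two raw lidar sweeps. This is not the CMU team's code: the function names, the random point clouds and the zero-output `predict_flow` stand-in for the learned network are all illustrative assumptions, and a real scene flow model would take both sweeps as input and run in the reverse temporal direction for the backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two consecutive, unlabeled lidar sweeps (random stand-ins for real scans):
# N points observed at time t, M points observed at time t+1.
cloud_t = rng.normal(size=(1024, 3))
cloud_t1 = rng.normal(size=(1024, 3))

def nearest_neighbor_loss(advanced, next_sweep):
    """Error type 1: for each point pushed forward by the predicted flow,
    the distance to the closest point actually observed in the next sweep."""
    # Full pairwise-distance matrix (N x M); fine for a sketch, but a real
    # pipeline would use a KD-tree or GPU nearest-neighbor search.
    d = np.linalg.norm(advanced[:, None, :] - next_sweep[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def cycle_consistency_loss(original, cycled_back):
    """Error type 2: after flowing each point forward and then backward,
    the distance between the round-trip result and the starting point."""
    return np.linalg.norm(cycled_back - original, axis=-1).mean()

def predict_flow(points):
    # Hypothetical stand-in for the learned scene flow network, which would
    # output one 3D motion vector per input point. Zeros keep this runnable.
    return np.zeros_like(points)

advanced = cloud_t + predict_flow(cloud_t)    # forward: where each point should be at t+1
cycled = advanced + predict_flow(advanced)    # backward: map the prediction back toward t

total = nearest_neighbor_loss(advanced, cloud_t1) + cycle_consistency_loss(cloud_t, cycled)
print(f"self-supervised loss: {total:.3f}")
```

Note that no ground-truth annotation appears in either term: both losses are computed entirely from the raw sweeps and the network's own predictions, which is what lets the method train on unlimited unlabeled driving data.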