Want computers to see better in the real world? Train them in a virtual reality by Staff Writers Beijing, China (SPX) Apr 13, 2018
Scientists have developed a new way to improve how computers "see" and "understand" objects in the real world by training the computers' vision systems in a virtual environment. The research team published their findings in IEEE/CAA Journal of Automatica Sinica, a joint publication of the IEEE and the Chinese Association of Automation.

For computers to learn to recognize objects accurately - a building, a street, a pedestrian - the machines must process huge amounts of labeled data: images of objects with accurate annotations. A self-driving car, for instance, needs thousands of annotated images of roads and cars to learn from. Datasets therefore play a crucial role in the training and testing of computer vision systems. Using manually labeled training datasets, a computer vision system compares its current situation to known situations and takes the best action it can "think" of - whatever that happens to be.

"However, collecting and annotating images from the real world is too demanding in terms of labor and money investments," wrote Kunfeng Wang, an associate professor at China's State Key Laboratory for Management and Control for Complex Systems and the lead author on the paper. Wang says the goal of their research is specifically to tackle the problem that real-world image datasets are not sufficient for training and testing computer vision systems.

To solve this issue, Wang and his colleagues created a dataset called ParallelEye. ParallelEye was generated virtually using commercially available software, primarily the Unity3D game engine. Using a map of Zhongguancun, one of the busiest urban areas in Beijing, China, as their reference, they recreated the urban setting virtually, adding various buildings, cars, and even different weather conditions. They then placed a virtual "camera" on a virtual car. The car drove around the virtual Zhongguancun and produced datasets representative of the real world.
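The appeal of a virtual environment is that ground-truth annotations come essentially for free: because the engine places every object, it can emit a pixel-perfect label map alongside each rendered frame, with no manual annotation. A minimal sketch of that idea, using NumPy to "render" rectangles into an image while producing its segmentation mask in the same pass (the class names and scene layout here are illustrative, not taken from ParallelEye):

```python
import numpy as np

# Illustrative class labels, not ParallelEye's actual label set
CLASSES = {"background": 0, "building": 1, "car": 2}

def render_scene(objects, height=64, width=64):
    """Rasterize axis-aligned boxes; return (image, label_mask).

    Each object is (class_name, x0, y0, x1, y1, grey_value).
    The label mask is filled in the same loop that draws the image,
    so the annotation costs nothing extra - the core appeal of
    synthetic datasets.
    """
    image = np.zeros((height, width), dtype=np.uint8)
    labels = np.zeros((height, width), dtype=np.uint8)
    for cls, x0, y0, x1, y1, grey in objects:
        image[y0:y1, x0:x1] = grey        # draw the object
        labels[y0:y1, x0:x1] = CLASSES[cls]  # record its class, pixel-perfectly
    return image, labels

scene = [("building", 5, 5, 30, 40, 120), ("car", 35, 45, 55, 55, 200)]
img, mask = render_scene(scene)
```

A real engine like Unity3D renders full 3D geometry, lighting, and weather, but the principle is the same: the renderer already knows what every pixel is, so labels are a by-product of rendering.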
Through their "complete control" of the virtual environment, Wang's team was able to create extremely specific, usable data for their object-detection system - a simulated autonomous vehicle. The results were impressive: a marked increase in performance on nearly every tested metric. Because datasets can be custom-designed this way, a greater variety of autonomous systems becomes practical to train.

While their greatest performance gains came from combining ParallelEye datasets with real-world datasets, Wang's team also demonstrated that their method can easily create diverse sets of images. "Using the ParallelEye vision framework, massive and diversified images can be synthesized flexibly and this can help build more robust computer vision systems," says Wang. The team's proposed approach can be applied to many visual computing scenarios, including visual surveillance, medical image processing, and biometrics.

Next, the team will create an even larger set of virtual images, improve the realism of virtual images, and explore the utility of virtual images for other computer vision tasks. Wang says: "Our ultimate goal is to build a systematic theory of Parallel Vision, which is able to train, test, understand and optimize computer vision models with virtual images and make the models work well in complex scenes."
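The team's strongest results reportedly came from combining synthetic and real-world data rather than using either alone. One generic way to do that is to draw each training batch partly from the scarcer real set and partly from the abundant synthetic set; the sketch below illustrates that mixing strategy in plain Python (the batch composition and ratio are an assumption for illustration, not the authors' exact recipe):

```python
import random

def mixed_batches(real, synthetic, batch_size=8, real_fraction=0.5, seed=0):
    """Yield training batches drawn from both domains.

    real_fraction sets how much of each batch comes from the (usually
    smaller) real-world set; the remainder is synthetic. Mixing the two
    exposes the model to real appearance statistics while benefiting
    from the volume and diversity of synthetic images.
    """
    rng = random.Random(seed)
    n_real = max(1, int(batch_size * real_fraction))
    while True:
        batch = rng.sample(real, min(n_real, len(real)))
        batch += rng.sample(synthetic, batch_size - len(batch))
        rng.shuffle(batch)  # interleave domains within the batch
        yield batch

# Toy example: 4 real images, 100 synthetic ones
real_imgs = [f"real_{i}" for i in range(4)]
syn_imgs = [f"syn_{i}" for i in range(100)]
batch = next(mixed_batches(real_imgs, syn_imgs))
```

In practice the same idea applies whether the items are file paths, tensors, or dataset indices; the point is that each gradient step sees both domains.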