Faster optimization
by Staff Writers
Boston MA (SPX) Oct 28, 2015
Optimization problems are everywhere in engineering: Balancing design tradeoffs is an optimization problem, as are scheduling and logistical planning. The theory - and sometimes the implementation - of control systems relies heavily on optimization, and so does machine learning, which has been the basis of most recent advances in artificial intelligence.

This week, at the IEEE Symposium on Foundations of Computer Science, a trio of present and past MIT graduate students won a best-student-paper award for a new "cutting-plane" algorithm, a general-purpose algorithm for solving optimization problems. The algorithm improves on the running time of its most efficient predecessor, and the researchers offer some reason to think that they may have reached the theoretical limit. But they also present a new method for applying their general algorithm to specific problems, which yields huge efficiency gains - several orders of magnitude.

"What we are trying to do is revive people's interest in the general problem the algorithm solves," says Yin-Tat Lee, an MIT graduate student in mathematics and one of the paper's co-authors. "Previously, people needed to devise different algorithms for each problem, and then they needed to optimize them for a long time. Now we are saying, if for many problems, you have one algorithm, then, in practice, we can try to optimize over one algorithm instead of many algorithms, and we may have a better chance to get faster algorithms for many problems."

Lee is joined on the paper by Aaron Sidford, who was an MIT graduate student in electrical engineering and computer science when the work was done but is now at Microsoft Research New England, and by Sam Wong, who earned bachelor's and master's degrees in math and electrical engineering and computer science at MIT before moving to the University of California at Berkeley for his PhD.
Inner circle
At a very general level, finding the minimum of a cost function can be described as trying to find a small cluster of values amid a much larger set of possibilities. Suppose that the total range of possible values for a cost function is represented by the interior of a circle. In a standard optimization problem, the values clustered around the minimum value would then be represented by a much smaller circle inside the first one. But you don't know where it is.

Now pick a point at random inside the bigger circle. In standard optimization problems, it's generally possible to determine whether that point lies within the smaller circle. If it doesn't, it's also possible to draw a line that falls between it and the smaller circle. Drawing that line cuts off a chunk of the circle, eliminating a range of possibilities. With each new random point you pick, you chop off another section of the circle, until you converge on the solution.

If you represent the range of possibilities as a sphere rather than a circle, then you use a plane, rather than a line, to cut some of them off. Hence the name for the technique: the cutting-plane method. In most real optimization problems, you need a higher-dimensional object than either a circle or a sphere: You need a hypersphere, which you cut with a hyperplane. But the principle remains the same.
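To make the analogy concrete, here is a minimal sketch of the idea in Python. It is not the Lee-Sidford-Wong algorithm; it is the classic ellipsoid method, the textbook cutting-plane variant, applied to a made-up quadratic. At each step, the gradient at the current test point supplies the "line" (in higher dimensions, the hyperplane) that slices away the region that cannot contain the minimum, and the enclosing ellipsoid shrinks accordingly. The function, starting point, and radius below are illustrative choices, not anything from the paper.

    # Sketch of a cutting-plane scheme: the classic ellipsoid method.
    # Not the algorithm from the paper - just the idea the circle
    # analogy describes: test a point, then use a (sub)gradient to
    # cut away the half of the region that cannot hold the minimum.
    import numpy as np

    def ellipsoid_minimize(f, grad, x0, radius, iters=200):
        """Minimize a convex f over a ball of given radius around x0."""
        n = len(x0)                        # needs n >= 2
        x = np.asarray(x0, dtype=float)
        P = (radius ** 2) * np.eye(n)      # ellipsoid holding the minimizer
        best_x, best_f = x.copy(), f(x)
        for _ in range(iters):
            g = grad(x)                    # normal of the cutting plane
            gPg = g @ P @ g
            if gPg <= 0:                   # ellipsoid has collapsed
                break
            g_tilde = g / np.sqrt(gPg)
            x = x - (P @ g_tilde) / (n + 1)          # step toward the cut
            P = (n * n / (n * n - 1.0)) * (
                P - (2.0 / (n + 1)) * np.outer(P @ g_tilde, g_tilde @ P))
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        return best_x, best_f

    # Toy example: a quadratic whose minimum sits at (1, -2).
    f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
    grad = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 2)])
    print(ellipsoid_minimize(f, grad, x0=[0.0, 0.0], radius=10.0))

Each iteration discards a fixed fraction of the remaining volume, which is why the method converges; the research described here is about making the work done per cut, and the choice of test point, much cheaper.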
A matter of time
With cutting-plane methods, the number of elements is the number of variables in the cost function - in a car-design problem, say, the weight of the car, the cost of its materials, drag, legroom, and so on. That's also the dimension of the hypersphere.

With the best previous general-purpose cutting-plane method, the time required to select each new point to test was proportional to the number of variables raised to the power 3.373. Sidford, Lee, and Wong get that exponent down to 3.

But they also describe a new way to adapt cutting-plane methods to particular types of optimization problems, with names like submodular minimization, submodular flow, matroid intersection, and semidefinite programming. In many of those cases, they report dramatic improvements in efficiency, from running times that scale with the fifth or sixth power of the number of variables (n^5 or n^6, in computer science parlance) down to the second or third power (n^2 or n^3).
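A quick back-of-the-envelope calculation shows what lowering an exponent like that is worth. The numbers below are illustrative only, not benchmarks from the paper: dropping from n^6 to n^3 saves a factor of n^3, which for a problem with a thousand variables is a billion-fold reduction in operations.

    # Illustrative arithmetic only (not measurements from the paper):
    # the speedup from an n^6 algorithm to an n^3 one is a factor of n^3.
    for n in (100, 1_000, 10_000):
        print(f"n = {n:>6}:  n^6 / n^3 = {n**6 / n**3:.0e}x fewer operations")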