Control Seminars @ UCI

October 17, 2017
Bin Hu
University of Wisconsin, Madison

Dissipativity theory for optimization and machine learning research

Time and place: October 17, 2017, 11:30am
McDonnell Douglas Engineering Auditorium (MDEA)

Abstract: Empirical risk minimization (ERM) is a central topic in machine learning research, and it is typically solved using first-order optimization methods whose convergence proofs are derived in a case-by-case manner. In this talk, we will present a simple routine that unifies the analysis of such optimization methods, including the gradient descent method, Nesterov's accelerated method, stochastic gradient descent (SGD), stochastic average gradient (SAG), SAGA, Finito, stochastic dual coordinate ascent (SDCA), stochastic variance reduced gradient (SVRG), and SGD with momentum. Specifically, we will view all of these optimization methods as dynamical systems and then use a unified dissipativity approach to derive sufficient conditions for certifying the convergence rates of such dynamical systems. The derived conditions are all in the form of linear matrix inequalities (LMIs). We solve the resultant LMIs and obtain analytical proofs of new convergence rates for various optimization methods (with or without individual convexity). Our proposed analysis can be automated for a large class of first-order optimization methods under various assumptions. In addition, the derived LMIs can always be solved numerically to provide clues for the construction of analytical proofs.
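As a rough illustration of the LMI route described in the abstract (my own sketch, not code from the talk), the snippet below certifies a linear convergence rate for plain gradient descent on an L-smooth, m-strongly convex function by bisecting over the candidate rate and checking a small dissipation-inequality LMI with CVXPY. The constants, step size, and variable names are illustrative assumptions; the same recipe extends to other methods by enlarging the system matrices and the supply rate.

```python
# Minimal sketch of LMI-based rate certification for gradient descent
#   x_{k+1} = x_k - alpha * grad f(x_k),  f L-smooth and m-strongly convex.
# Assumed/illustrative: m, L, alpha, solver choice, and the bisection wrapper.
import numpy as np
import cvxpy as cp

m, L, alpha = 1.0, 10.0, 1.0 / 10.0   # strong convexity, smoothness, step size

# After a standard dimensionality reduction the method is the scalar system
#   xi_{k+1} = A*xi_k + B*w_k,  with  xi_k = x_k - x*,  w_k = grad f(x_k).
A, B = np.array([[1.0]]), np.array([[-alpha]])

# Quadratic supply rate built from the sector inequality for grad f:
#   [xi; w]^T X0 [xi; w] <= 0 along trajectories of the method.
X0 = np.array([[2 * m * L, -(m + L)],
               [-(m + L),  2.0]])

def lmi_feasible(rho):
    """Check the dissipation-inequality LMI for a candidate rate rho."""
    P = cp.Variable((1, 1), PSD=True)   # storage function V(xi) = xi^T P xi
    lam = cp.Variable(nonneg=True)      # nonnegative multiplier on the supply rate
    M = cp.bmat([[A.T @ P @ A - rho**2 * P, A.T @ P @ B],
                 [B.T @ P @ A,              B.T @ P @ B]]) - lam * X0
    Ms = 0.5 * (M + M.T)                # symmetrize before the PSD constraint
    prob = cp.Problem(cp.Minimize(0), [Ms << 0, P >> 1e-6 * np.eye(1)])
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

# Bisect on rho to find the smallest certified convergence rate.
lo, hi = 0.0, 1.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if lmi_feasible(mid) else (mid, hi)
print("certified rate ~", hi, " (cf. 1 - m/L =", 1 - m / L, ")")
```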

Bio: Bin Hu received his B.Sc. in Theoretical and Applied Mechanics from the University of Science and Technology of China and his M.S. in Computational Mechanics from Carnegie Mellon University. He received his Ph.D. in Aerospace Engineering and Mechanics from the University of Minnesota, advised by Peter Seiler. He is currently a postdoctoral researcher in the optimization group of the Wisconsin Institute for Discovery at the University of Wisconsin-Madison, where he works with Laurent Lessard and collaborates closely with Stephen Wright. He is interested in building connections between control theory and machine learning research. His current research focuses on tailoring robust control theory (integral quadratic constraints, dissipation inequalities, jump system theory, etc.) to unify the study of stochastic optimization methods (stochastic gradient, stochastic average gradient, SAGA, SVRG, Katyusha momentum, etc.) and their applications in related machine learning problems (logistic regression, deep neural networks, matrix completion, etc.).

May 19, 2017
Mihailo Jovanovic
Ming Hsieh Department of Electrical Engineering
Director, Center for Systems and Control
Viterbi School of Engineering
University of Southern California

Controller Architectures: Tradeoffs between Performance and Complexity

This talk describes the design of controller architectures that achieve a desired tradeoff between the performance of distributed systems and controller complexity. Our methodology consists of two steps: first, we design the controller architecture by incorporating regularization functions into the optimal control problem; second, we optimize the controller over the identified architecture. For large-scale networks of dynamical systems, the desired structural property is captured by limited information exchange between the physical and controller layers, and the regularization term penalizes the number of communication links. In the first step, the controller architecture is designed using a customized proximal augmented Lagrangian algorithm. This method exploits the separability of the sparsity-promoting regularization terms and transforms the augmented Lagrangian into a form that is continuously differentiable and can be efficiently minimized using a variety of methods. Although structured optimal control problems are, in general, nonconvex, we identify classes of convex problems that arise in the design of symmetric systems, undirected consensus and synchronization networks, optimal selection of sensors and actuators, and decentralized control of positive systems. Examples are provided to demonstrate the effectiveness of the framework.
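To make the separability remark concrete, here is a minimal sketch (my own illustration, not code from the talk) of the proximal operator of an elementwise-weighted l1 penalty, a common convex surrogate for penalizing the number of communication links in a feedback gain. Because the penalty is separable, its prox reduces to entrywise soft-thresholding, which is what a proximal augmented Lagrangian method can exploit.

```python
# Sketch: entrywise soft-thresholding as the prox of a weighted l1 penalty.
# The matrices, weights, and parameter mu below are illustrative assumptions.
import numpy as np

def prox_weighted_l1(F, W, mu):
    """prox_{mu*g}(F) with g(F) = sum_ij W_ij * |F_ij|, applied entrywise."""
    return np.sign(F) * np.maximum(np.abs(F) - mu * W, 0.0)

# Example: shrinking a dense controller gain toward a sparse architecture.
rng = np.random.default_rng(0)
F = rng.normal(size=(4, 6))          # dense feedback gain
W = np.ones_like(F)                  # uniform link weights
F_sparse = prox_weighted_l1(F, W, mu=0.8)
print("nonzero entries:", np.count_nonzero(F_sparse), "of", F.size)
```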

Time and place: May 19, 2017, 2:00pm
McDonnell Douglas Engineering Auditorium (MDEA)

May 10, 2017
Christian Grussler
Lund University

Low-Rank Inducing Norms with Optimality Interpretations

This talk is on optimization problems that are convex apart from a sparsity/rank constraint. Such problems often arise in compressed sensing, linear regression, matrix completion, low-rank approximation, and many other settings. Today, one of the most widely used methods for solving these problems is so-called nuclear norm regularization. Despite its nice probabilistic guarantees, this approach often fails for problems with structural constraints.

In this talk, we will present an alternative by introducing the family of so-called low-rank inducing norms as convexifiers. Each norm is the convex envelope of a unitarily invariant norm plus a rank constraint (written out in symbols after the list below). As a result, these norms have several interesting properties, which will be discussed throughout the talk. They:

i. Give a simple deterministic test if the solution to the convexified problem is a solution to a specific non-convex problem.

ii. Often find solutions where the nuclear norm fails to give low-rank solutions.

iii. Allow us to analyze the convergence of non-convex proximal splitting algorithms with convex analysis tools.

iv. Provide a more efficient regularization than the traditional scalar multiplication of the nuclear norm.

v. Lead to a different interpretation of the nuclear norm than the one that is traditionally presented.

In particular, all the results can be generalized to so-called atomic norms.
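Written out (the notation is mine, only a symbolic restatement of the construction described above), the norms in question take the form:

```latex
% Sketch, notation mine: the low-rank inducing norm obtained from a unitarily
% invariant norm \|\cdot\|_g and a target rank r is the convex envelope
% (biconjugate) of the norm restricted to matrices of rank at most r.
\[
  \|X\|_{g,r*}
  \;=\;
  \bigl( \|\cdot\|_g + \iota_{\{\operatorname{rank} \le r\}} \bigr)^{**}(X),
  \qquad
  \iota_{\mathcal{S}}(X) =
  \begin{cases}
    0, & X \in \mathcal{S},\\
    +\infty, & \text{otherwise.}
  \end{cases}
\]
```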

Time and place: May 10, 2017, 11:00am
EG 3161

January 19, 2017
Dr. Ge Chen (chenge@amss.ac.cn)
Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, P.R. China

Analysis and control on collective behavior of some random complex systems

Abstract: Complex systems exist almost everywhere in nature and in human social and economic systems, and so have generated considerable interest from researchers in many fields. A central issue in the study of complex systems is understanding how local interactions among the elements lead to collective behavior of the whole group. However, the analysis of complex systems is usually difficult, and essentially the only existing method is to construct a Lyapunov function. Building on the Lyapunov method, we first put forward quantitative analysis and optimization methods that we applied to some biological and engineering systems. We then moved beyond the Lyapunov method and proposed a new analysis method that we applied to some biological and social systems.

Time and place: January 19, 2017, 2:00pm
McDonnell Douglas Engineering Auditorium (MDEA)

November 4, 2016
Professor Brad Paden
University of California, Santa Barbara, and LaunchPoint Technologies Inc.

Adventures in Mechatronics

This talk aims to illustrate the creativity, challenge, and professional enjoyment associated with the invention, design, and control of mechatronic systems. Example systems include a life-saving locomotive bumper, a magnetically-coupled MEMS sensor, an electromagnetic launch system, an energy storage system, a magnetic bearing system, a pediatric maglev artificial heart, a guided-catheter system, and a high-speed switching mechanism. While modeling, control, and optimization are essential ingredients, the large design and application spaces of mechatronic systems compel us to place a high value on innovation at the level of system architectures; this point is illustrated throughout the talk.

Time and place: November 4, 2016, 10:30am
McDonnell Douglas Engineering Auditorium (MDEA)

September 13, 2016
Professor Na Li
Harvard University

Distributed Energy Management with Limited Communication

A major issue in future power grids is how intelligent devices and independent producers can adjust their power consumption and production, respectively, to achieve near-maximum efficiency for the power network. Limited communication between devices and producers necessitates an approach in which the elements of the network act autonomously with limited information exchange, yet achieve near-optimal performance. In this talk, I will present our recent work on distributed energy management with limited communication. In particular, I will show how we can extract information from physical measurements and recover information from local computation. We will also investigate the minimum amount of communication required to achieve optimal energy management and study how limited communication affects the convergence rate of the distributed algorithms. We will conclude the talk with a discussion of challenges and opportunities in distributed optimization and control for future grids.

Time and place: Tuesday, September 13, 2016, 2:00pm-3:00pm
McDonnell Douglas Engineering Auditorium (MDEA)