Control Seminars @ UCI

Time and place:
Friday, October 27, 2023 –

Ardalan Vahidi
Clemson University

Efficient Driving with Connected and Automated Vehicles: Optimal Control Under the Hood


Connected and automated vehicles (CAVs) are marketed for their increased safety, driving comfort, and time-saving potential. With much easier access to information, increased processing power, and precision control, CAVs also offer unprecedented opportunities for energy-efficient driving. This talk highlights the energy-saving potential of connected and automated vehicles based on first principles of motion, optimal control theory, and practical examples from our previous and ongoing research. Connectivity to other vehicles and infrastructure allows better anticipation of upcoming events, such as hills, curves, the state of traffic signals, and the movement of neighboring vehicles. Automation allows vehicles to adjust their motion more precisely in anticipation of upcoming events and save energy. Opportunities for cooperative driving could further increase the energy efficiency of a group of vehicles by allowing them to move in a coordinated manner. The energy-efficient motion of connected and automated vehicles could also have a harmonizing effect on mixed traffic, leading to additional energy savings for neighboring vehicles. The latest analytical and experimental results on the energy and traffic-flow benefits attained through anticipation and coordination will be shown, both in simulated scenarios and in experiments on a test track where urban and highway conditions are emulated.
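As a back-of-the-envelope illustration of why signal anticipation saves energy (all numbers below are my own illustrative assumptions, not figures from the talk), compare braking to a stop at a red light against gliding so as to arrive just as it turns green:

```python
# Hypothetical comparison: a connected vehicle that knows a signal turns green
# in t_green seconds can glide at reduced speed instead of braking to a stop
# and re-accelerating. All parameter values are illustrative assumptions.
m = 1500.0        # vehicle mass, kg
v = 15.0          # cruising speed, m/s
d = 150.0         # distance to the signal, m
t_green = 12.0    # time until the light turns green, s

# Strategy A (no anticipation): brake to a stop at the light, then re-accelerate
# back to v. The kinetic energy is dissipated in the brakes and must be re-supplied.
energy_stop_and_go = 0.5 * m * v**2

# Strategy B (anticipation): glide so as to arrive exactly at the green.
v_glide = d / t_green                        # 12.5 m/s -- still moving at arrival
# Only the kinetic-energy difference between v and v_glide must be re-supplied.
energy_glide = 0.5 * m * (v**2 - v_glide**2)

print(f"stop-and-go loss: {energy_stop_and_go/1e3:.1f} kJ")
print(f"glide loss:       {energy_glide/1e3:.1f} kJ")
```

Even this crude model (no rolling or aerodynamic losses) shows the anticipating strategy dissipating well under half the kinetic energy of stop-and-go.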

Time and place:
Thursday, October 26, 2023 – 10am-11am at EG 4-211

Liviu Aolaritei
ETH Zürich

Capture, Propagate, and Control Distributional Uncertainty


In this talk I will challenge the standard uncertainty models, i.e., robust (norm-bounded) and stochastic (one fixed distribution, e.g., Gaussian), and propose to model uncertainty in dynamical systems via Optimal Transport (OT) ambiguity sets. These constitute a very rich uncertainty model, which enjoys many desirable geometrical, statistical, and computational properties, and which: (1) naturally generalizes both robust and stochastic models, and (2) captures many additional real-world uncertainty phenomena (e.g., black swan events). I will then show that OT ambiguity sets are analytically tractable: they propagate easily and intuitively through linear and nonlinear (possibly corrupted by noise) transformations, and the result of the propagation is again an OT ambiguity set or can be tightly upper bounded by one. In the context of dynamical systems, this allows one to consider multiple sources of uncertainty (e.g., initial condition, additive noise, multiplicative noise) and to capture in closed form, via an OT ambiguity set, the resulting uncertainty in the state at any future time. The resulting OT ambiguity sets are also computationally tractable, and can be directly employed in various distributionally robust control formulations that can optimally trade off safety and performance.
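As a minimal illustration of the propagation claim (restricted to scalar Gaussians, with illustrative numbers of my own choosing): the 2-Wasserstein distance between scalar Gaussians has a closed form, and pushing an OT ambiguity ball through a linear map x -> a*x + b yields again a ball, with radius scaled by |a|:

```python
import math

def w2_gauss(m1, s1, m2, s2):
    """2-Wasserstein distance between scalar Gaussians N(m1, s1^2), N(m2, s2^2)."""
    return math.hypot(m1 - m2, s1 - s2)

# Nominal distribution and an ambiguity-ball radius (illustrative numbers).
m0, s0, eps = 0.0, 1.0, 0.5

# A few members of the OT ambiguity ball {P : W2(P, N(m0, s0^2)) <= eps}.
members = [(0.3, 1.4), (-0.5, 1.0), (0.0, 0.5)]
for m, s in members:
    assert w2_gauss(m, s, m0, s0) <= eps + 1e-12

# Push everything through the linear map x -> a*x + b, under which
# N(m, s^2) becomes N(a*m + b, a^2 s^2).
a, b = 2.0, 1.0
for m, s in members:
    d_before = w2_gauss(m, s, m0, s0)
    d_after = w2_gauss(a * m + b, abs(a) * s, a * m0 + b, abs(a) * s0)
    # The pushforward of the ball is again a ball, with radius scaled by |a|.
    assert abs(d_after - abs(a) * d_before) < 1e-12
print("pushforward ambiguity radius:", abs(a) * eps)
```

In this scalar Gaussian case the scaling is exact; the general vector/nonlinear statements in the talk are the nontrivial analogues of this picture.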


Liviu Aolaritei is a PhD student in the Automatic Control Lab at ETH Zürich. Prior to his PhD, he received a M.Sc. Degree in Robotics, Systems, and Control from ETH Zürich, Switzerland, and a B.Sc. Degree in Information Engineering from the University of Padova, Italy. During his PhD, he was a visiting researcher in the Operations Research Center at the Massachusetts Institute of Technology and in the IEOR Department at Columbia University. His current research interests are centered around distributionally robust optimization and optimal transport, as well as their application in automatic control, machine learning, and energy systems.

Time and place: EG 2132
Wednesday, May 26, 2023 – 2pm-3pm

Jared Miller
Robust Systems Lab, Northeastern University, Boston, MA

Quantifying Safety under Uncertainty using Occupation Measures


Safety quantification attaches interpretable numbers to safe trajectories of dynamical systems. Examples of such quantifications include finding the maximum value of a state function along system trajectories, or finding the minimum distance of closest approach to an unsafe set. A safe trajectory with a large distance of closest approach may be acceptable, but an agent that sees a small forecasted distance of closest approach may want to perform actuation to increase this distance. This work represents the peak and distance estimation problems as infinite-dimensional linear programs (LPs) in occupation measures, based on existing work in optimal control and reachable set estimation. These modular LPs can be extended towards modifications in dynamics, such as the analysis of systems with hybrid transitions and/or adverse uncertainty processes. The infinite-dimensional LPs are solved using the moment Sum-of-Squares (SOS) hierarchy, which finds a convergent sequence of outer approximations (under compactness and regularity assumptions) to the true safety quantification task. The LP framework can also be extended towards the safety analysis of stochastic processes, in which the Value-at-Risk of the state function is supremized along the trajectories. Joint work with Mario Sznaier (Northeastern University).
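A toy sketch of the peak-estimation problem (my own example, not from the talk): sampling initial conditions and simulating yields a lower bound on the peak value, complementary to the certified upper bounds that the moment-SOS hierarchy would produce:

```python
import math

# Toy peak-estimation problem: maximize p(x, y) = y along trajectories of the
# harmonic oscillator  x' = y,  y' = -x,  started anywhere in the unit disk X0.
# Trajectories rotate on circles, so the true peak equals the largest radius in
# X0, i.e. 1. Sampling + simulation gives a (numerically approximate) lower
# bound; the moment-SOS LP relaxations described above give upper bounds.

def simulate_peak(x0, y0, T=2 * math.pi, h=1e-3):
    x, y, peak = x0, y0, y0
    for _ in range(int(T / h)):
        x, y = x + h * y, y - h * x   # explicit Euler step (slight energy drift)
        peak = max(peak, y)
    return peak

# Sample initial conditions on rings inside X0.
best = -float("inf")
for r in (0.25, 0.5, 0.75, 1.0):
    for k in range(8):
        th = 2 * math.pi * k / 8
        best = max(best, simulate_peak(r * math.cos(th), r * math.sin(th)))

print(f"sampled lower bound on peak of p = y: {best:.3f}  (true value: 1)")
```

When the sampled lower bound and an SOS upper bound coincide, the peak value is certified; the gap between them measures the conservatism of the relaxation.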


Jared Miller is a postdoctoral researcher at the Robust Systems Lab at Northeastern University, advised by Mario Sznaier. He received his B.S. and M.S. degrees in Electrical Engineering from Northeastern University in 2018 and his Ph.D. in Electrical Engineering from Northeastern University in 2023. He is a recipient of the 2020 Chateaubriand Fellowship from the Office for Science and Technology of the Embassy of France in the United States. He was given an Outstanding Student Paper award at the IEEE Conference on Decision and Control in 2021 and in 2022. His current research topics include safety verification and data-driven control. His interests include large-scale convex optimization, nonlinear systems, semi-algebraic geometry, and measure theory.

Time and place: EG 4211
Wednesday, May 3, 2023 – 11:00 am - noon

Izchak Lewkowicz
Electrical and Computer Engineering Department
Ben-Gurion University Of the Negev, Israel

Quantitatively Hyper-Positive Real Rational Functions

Abstract Colloquially, quantitatively Hyper-Positive real functions form a sub-family of the "Strictly Positive Real" functions for which, in addition, a state-space realization always exists and the limit at infinity is non-singular. In scalar terms, such a function maps the right half of the complex plane into a sub-region of the right half-plane that can be contained in a finite disk; moreover, under inversion this disk is mapped onto itself. These functions appear in "absolute stability" Lurie-type problems. As time permits, in this talk we present properties of these functions, in particular characterizations via (i) rational functions, (ii) state-space (KYP) realizations, and (iii) kernels. In passing, the Hyper-Lyapunov matrix inequalities are discussed.

Time and place: McDonnell Douglas Auditorium
Friday, April 14, 2023 – 10:30 am

Sonia Martinez
Department of Mechanical and Aerospace Engineering
University of California, San Diego

Measuring and enhancing network resilience; performance metrics and defense strategies

Abstract Resilience, understood as the ability of a network to carry out its goals under adversarial attacks and unexpected failures, is critical for autonomy. Despite important advances in the design of distributed coordination and decision-making algorithms, multi-agent networks have proven fragile to targeted attacks. Novel theories and tools are therefore needed to guarantee the resiliency of these systems, starting with notions and techniques that characterize network resilience. However, obtaining such characterizations is difficult, as resilience and performance are a complex function of the network's and the adversary's capabilities, knowledge, and resources, and of the network interconnection structure. At the same time, we also need novel design methodologies that can protect multi-agent networks and adaptively manage their interconnections over time to achieve performance guarantees. In this talk, we present our recent progress in these directions.

Short Bio Sonia Martínez is a Full Professor at the Department of Mechanical and Aerospace Engineering at the University of California, San Diego and a Jacobs Faculty Scholar. Prof. Martínez received her B.S. degree from the Universidad de Zaragoza, Spain in 1997, and her Ph.D. degree in Engineering Mathematics from the Universidad Carlos III de Madrid, Spain, in May 2002. Following a year as a Visiting Assistant Professor of Applied Mathematics at the Technical University of Catalonia, Spain, she obtained a Postdoctoral Fulbright Fellowship and held appointments at the Coordinated Science Laboratory of the University of Illinois, Urbana-Champaign during 2004, and at the Center for Control, Dynamical Systems and Computation (CCDC) of the University of California, Santa Barbara during 2005. From January 2006 to June 2010 she was an Assistant Professor, and from July 2010 to June 2014 an Associate Professor, with the Department of Mechanical and Aerospace Engineering at the University of California, San Diego. Dr. Martínez's research interests include networked control systems, multi-agent systems, and nonlinear control theory with applications to robotics, cyber-physical systems, and natural/social networks. In particular, she has focused on the modeling and control of robotic sensor networks, the development of distributed coordination algorithms for groups of autonomous vehicles, and the geometric control of mechanical systems. For her work on the control of underactuated mechanical systems she received the Best Student Paper award at the 2002 IEEE Conference on Decision and Control. She was the recipient of an NSF CAREER Award in 2007.
For the co-authored papers "Motion coordination with Distributed Information" and "Tutorial on dynamic average consensus: The problem, its applications, and the algorithms", she received the 2008 and 2021 Control Systems Magazine Outstanding Paper Awards, respectively. She is a Senior Editor of Automatica and an IEEE Fellow. Recently, she was named the inaugural Editor-in-Chief of a new Control Systems Society publication, the IEEE Open Journal of Control Systems (IEEE OJCS).

Time and place: McDonnell Douglas Auditorium
Time: September 23, 2022, 10:30am-11:30am

Mehran Mesbahi
Department of Aeronautics & Astronautics
University of Washington, Seattle

Aerospace Networks, Autonomy, and Control

Abstract The talk will center on emerging problems in aerospace engineering that pertain to autonomy and control of networked systems, and focus on three specific angles of research by my group. First, I will discuss networked space systems, bringing out the perspective of "form and function" and highlighting the role of network structure for synchronization, coverage, formation flight, and global broadband connectivity. Next, I will give an overview of ongoing projects in my group on real-time computational guidance for Lunar and Martian autonomous pinpoint landing. Lastly, I will discuss our work on system synthesis using first-order methods and data-guided control, aimed at efficiently interfacing guidance and control, and expand on outstanding problems at the intersection of control, optimization, and learning. I will close with highlights of our efforts on space systems research, education, and outreach at the University of Washington.

Short Bio Mehran Mesbahi is the J. Ray Bowen Endowed Professor of Aeronautics and Astronautics, Adjunct Professor of Electrical and Computer Engineering and Mathematics, and Executive Director of Joint Center for Aerospace Technology Innovation at the University of Washington. He is a Fellow of IEEE and recipient of NASA Space Act Award, University of Washington Distinguished Teaching Award, and University of Washington College of Engineering Innovator Award. He is the co-author of the book “Graph Theoretic Methods in Multiagent Networks” published by Princeton University Press. His research interests are distributed and networked space systems, autonomy, control theory, and learning.

Time and place: Wednesday May 18, 2022, Harut Barsamian Colloquia Room (EH 2430)
Time: 1:30pm-2:30pm

Geir Dullerud
Department of Mechanical Science and Engineering
University of Illinois at Urbana-Champaign

Learning for Safety and Control in Dynamical Systems

Abstract The presentation will focus on two distinct topics involving the application of learning techniques to the analysis of dynamical systems. First, we present an algorithm and a tool for statistical model checking (SMC) of continuous state space Markov chains initialized to a prescribed set of states. This model checking problem requires maximization of probabilities of sets of executions over all choices of initial states. We observe that it can be formulated as an X-armed bandit problem, and therefore, can be solved using hierarchical optimistic optimization. We propose a new algorithm (HOO-MB) and provide a regret bound on its sample efficiency which relies on the smoothness and the near-optimality dimension of the objective function as well as the sampling batch size. The batch size parameter enables us to strike a balance between the sample efficiency and the memory usage of the algorithm. Our experiments, using the tool HooVer, suggest that the approach scales to realistic-sized problems and is often more sample-efficient compared to other existing tools. Second, we present recent results on the global convergence of policy gradient methods for quadratic optimal control of discrete-time Markovian jump linear systems (MJLS); switching is a common feature in systems that are comprised of interacting software and physical processes, and MJLS are models in which discrete states evolve according to a finite Markov chain and continuous states evolve according to linear dynamics specified by these discrete states. We study the optimization landscape of direct policy optimization for MJLS. Despite the non-convexity of the resultant problem, we are able to identify several useful properties such as coercivity, gradient dominance, and smoothness. Based on these properties, we demonstrate global convergence of three types of flows: the Gauss-Newton flow, the natural gradient flow, and the gradient flow.
Then we discretize these flows as the Gauss-Newton method, the natural policy gradient method, and the policy gradient method, and prove that all three methods converge to the optimal state feedback controller for MJLS at a linear rate if initialized at a controller which stabilizes the closed-loop dynamics in the mean square sense. Finally, numerical examples are presented to support our theory. This work brings new insights for understanding the performance of policy gradient methods on the Markovian jump linear quadratic control problem. Also presented will be the HoTDeC multi-vehicle testbed, which consists of indoor airborne and ground-based vehicles.
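A minimal sketch of the policy-gradient landscape result, reduced to the scalar LTI special case with no jumps and assumed parameter values (the MJLS analysis in the talk is far more general): despite non-convexity, gradient dominance makes plain gradient descent on the LQR cost converge to the Riccati-optimal gain.

```python
# Scalar discrete-time LQR: x_{t+1} = (a - b k) x_t, u_t = -k x_t,
# J(k) = sum_t (q x_t^2 + r u_t^2) with x_0 = 1, available in closed form
# whenever |a - b k| < 1 (k stabilizing). Parameter values are illustrative.
a, b, q, r = 1.2, 1.0, 1.0, 1.0

def cost(k):
    cl = a - b * k
    assert abs(cl) < 1, "gain must be stabilizing"
    return (q + r * k * k) / (1 - cl * cl)

# Gradient descent with a central finite-difference gradient.
k, lr, h = 1.0, 1e-2, 1e-6
for _ in range(5000):
    grad = (cost(k + h) - cost(k - h)) / (2 * h)
    k -= lr * grad

# Reference solution from the scalar discrete-time Riccati equation.
P = 1.0
for _ in range(1000):
    P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
k_star = a * b * P / (r + b * b * P)
print(f"policy gradient: k = {k:.4f}, Riccati: k* = {k_star:.4f}")
```

The MJLS setting of the talk replaces the single gain with one gain per Markov mode and the Riccati equation with coupled Riccati equations, but the gradient-dominance mechanism enabling global convergence is the same.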

Short Bio Geir E. Dullerud is the W. Grafton and Lillian B. Wilkins Professor in Mechanical Engineering at the University of Illinois at Urbana-Champaign, where he is the Founding Director of the Illinois Center for Autonomy. He is also a member of the Coordinated Science Laboratory, and is an Affiliate Professor of both Computer Science, and Electrical and Computer Engineering. He has held visiting positions in Electrical Engineering at KTH, Stockholm (2013), and in Aeronautics and Astronautics at Stanford University (2005-2006). Earlier he was on faculty in Applied Mathematics at the University of Waterloo (1996-1998), after being a Research Fellow at the California Institute of Technology (1994-1995), in the Control and Dynamical Systems Department. He holds a PhD in Engineering from Cambridge University. He has published two books: "A Course in Robust Control Theory", Texts in Applied Mathematics, Springer, and "Control of Uncertain Sampled-data Systems", Birkhauser. His areas of current research interest include autonomy and cooperative robotics, convex optimization in control, cyber-physical system security, stochastic simulation, and hybrid dynamical systems. In 1999 he received the CAREER Award from the National Science Foundation, and in 2005 the Xerox Faculty Research Award at UIUC. In 2018 he was awarded the UIUC Engineering Council Award for Excellence in Advising. He is a Fellow of both IEEE (2008) and ASME (2011). He was the General Chair of the recent IFAC workshop Distributed Estimation and Control in Networked Systems (NECSYS 2019).

Time and place: Thursday, March 31, 2022, EG 4211
Time: 10:00am-11:00am

Takashi Tanaka
Department of Aerospace Engineering and Engineering Mechanics
The University of Texas at Austin

Minimum-Information Kalman-Bucy Filtering and Fundamental Limitation of Continuous-Time Data Compression


Motivated by a practical scenario where a continuous-time source signal is encoded, compressed, and transmitted to a remote user where the signal is reproduced in real time (e.g., streaming of neuromorphic camera data), we study the fundamental trade-off between the encoding data rate and the best achievable data quality (distortion). After briefly reviewing the "causal" rate-distortion theory in discrete time, in this talk we consider the problem of estimating a continuous-time Gauss-Markov source process observed through a vector Gaussian channel with an adjustable channel gain matrix. For a given (generally time-varying) channel gain matrix, we provide formulas to compute (i) the mean-square estimation error attainable by the classical Kalman-Bucy filter, and (ii) the mutual information between the source process and its Kalman-Bucy estimate. We then formulate a novel "optimal channel gain control problem" where the objective is to control the channel gain matrix strategically to minimize the weighted sum of these two performance metrics. To develop insights into the optimal solution, we first consider the problem of controlling a time-varying channel gain over a finite time interval. A necessary optimality condition is derived based on Pontryagin's minimum principle. For a scalar system, we show that the optimal channel gain is a piecewise-constant signal with at most two discontinuities. We also consider the problem of designing the optimal time-invariant gain to minimize the average cost over an infinite time horizon. A novel semidefinite programming (SDP) heuristic is proposed to compute the optimal solution.
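A scalar sketch of the Kalman-Bucy side of the story (the scalar reduction and all parameter values are my own assumptions): the error covariance obeys a Riccati ODE whose steady state is available in closed form, and a larger channel gain buys a smaller estimation error, which is the quantity traded off against the information cost in the talk.

```python
import math

# Illustrative scalar model: source dx = a x dt + dw (noise intensity W),
# observed through a channel with gain g, dy = g x dt + dv (intensity V).
# The Kalman-Bucy error covariance P(t) obeys the Riccati ODE
#   dP/dt = 2 a P + W - (g^2 / V) P^2.
a, W, V = 1.0, 1.0, 1.0

def steady_state_P(g, T=20.0, h=1e-3):
    P = 0.0
    for _ in range(int(T / h)):          # explicit Euler integration
        P += h * (2 * a * P + W - (g * g / V) * P * P)
    return P

def algebraic_P(g):
    # Positive root of (g^2/V) P^2 - 2 a P - W = 0.
    c = g * g / V
    return (a + math.sqrt(a * a + c * W)) / c

for g in (1.0, 2.0):
    print(f"gain {g}: simulated P = {steady_state_P(g):.4f}, "
          f"algebraic P = {algebraic_P(g):.4f}")
```

Doubling the gain here roughly triples the precision; the optimal gain control problem in the talk balances exactly this error reduction against the mutual-information (data-rate) cost of a stronger channel.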

Short Bio Takashi Tanaka has been an Assistant Professor in the Department of Aerospace Engineering and Engineering Mechanics at the University of Texas at Austin since 2017. He received his B.S. degree from the University of Tokyo in 2006, and M.S. and Ph.D. degrees from UIUC in 2009 and 2012, all in Aerospace Engineering. Prior to joining UT Austin, he held postdoctoral researcher positions at MIT and KTH Royal Institute of Technology. His research interest is broad in control, optimization, games, and information theory; most recently their applications to networked control systems, real-time data sharing, and strategic perception. He is the recipient of the DARPA Young Faculty Award, the AFOSR Young Investigator Program award, and the NSF CAREER award.

Time and place: Tuesday February 22, 2022, 1:00-2:00pm

John Baras
Professor, Lockheed Martin Chair in Systems Engineering
Distinguished University Professor
University of Maryland

From Robust Control to Robust Machine and Reinforcement Learning: A Unifying Theory via Performance and Risk Tradeoff

Abstract Robustness is a fundamental concept in systems science and engineering. It is a critical consideration in all inference and decision making problems, both single-agent and multi-agent ones. It has surfaced again in recent years in the context of machine learning (ML), reinforcement learning (RL) and artificial intelligence (AI). We describe a novel and unifying theory of robustness for all these problems emanating from the fundamental results we obtained in my research group some 25 years ago on robust output feedback control for general systems (including nonlinear, HMM and set-valued). In the first part of this lecture I will summarize this theory and the universal solution it provides, consisting of two coupled HJB equations. These results are a sweeping generalization of the transformational control theory results on the linear quadratic Gaussian problem obtained by Jacobson, Speyer, Doyle, Glover, Khargonekar, and Francis in the 1970s and 1980s. Our results rigorously established the equivalence of three seemingly unrelated problems: the robust output feedback control problem, a partially observed differential game, and a partially observed risk-sensitive stochastic control problem. In the second part of this lecture I will start by showing the "four block" view of this problem and show for the first time a similar formulation of the so-called robust (or adversarially robust) ML problem. Thus we have a rigorous path to analyze robustness and attack resiliency in ML. I will show several examples. I will also describe how using an exponential criterion in deep learning explains the convergence of stochastic gradients despite over-parametrization (Poggio 2020). Then I will describe our most recent results on robust and risk-sensitive reinforcement learning (RL). Here the emergence of the exponential-of-an-integral criterion from our earlier theory is essential.
We show how all forms of regularized RL can be derived from our theory, including KL and entropy regularization, relations to probabilistic graphical models, and distributional robustness. The deeper reason for this unification emerges: it is the fundamental tradeoff between performance optimization and risk minimization in decision making, via duality. This connects to Prospect Theory. I will close with open problems and future research.
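A discrete toy analogue of the regularization connection (my own two-state example, not from the lecture): entropy-regularized value iteration replaces the Bellman max with a temperature-weighted log-sum-exp, a discrete cousin of the exponential-of-an-integral criterion, and the resulting soft values dominate the standard ones:

```python
import math

# Two-state, two-action deterministic MDP (illustrative values).
gamma, tau = 0.9, 0.5
nxt = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}   # transitions
rew = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.5, (1, 1): 2.0}

def value_iteration(soft):
    V = [0.0, 0.0]
    for _ in range(500):
        Vn = []
        for s in (0, 1):
            qs = [rew[s, a] + gamma * V[nxt[s, a]] for a in (0, 1)]
            if soft:  # log-sum-exp backup: tau * log sum_a exp(q_a / tau)
                Vn.append(tau * math.log(sum(math.exp(q / tau) for q in qs)))
            else:     # standard hard-max backup
                Vn.append(max(qs))
        V = Vn
    return V

V_hard, V_soft = value_iteration(False), value_iteration(True)
print("hard:", [f"{v:.3f}" for v in V_hard])
print("soft:", [f"{v:.3f}" for v in V_soft])
```

As the temperature tau tends to 0 the log-sum-exp collapses to the max and the two value functions coincide; the duality discussed in the talk interprets the gap as a risk/regularization premium.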

Short Bio John S. Baras is a Distinguished University Professor, holding the Lockheed Martin Chair in Systems Engineering and a Permanent Joint Appointment with the Institute for Systems Research (ISR) and the ECE Department at the University of Maryland College Park (UMD). He received his Ph.D. degree in Applied Mathematics from Harvard University, in 1973, and he has been with UMD since then. From 1985 to 1991, he was the Founding Director of the ISR. Since 1992, he has been the Director of the Maryland Center for Hybrid Networks (HYNET), which he co-founded. He is a Fellow of IEEE (Life), SIAM, AAAS, NAI, IFAC, AMS, AIAA, Member of the National Academy of Inventors (NAI) and a Foreign Member of the Royal Swedish Academy of Engineering Sciences (IVA). Major honors and awards include the 1980 George Axelby Award from the IEEE Control Systems Society, the 2006 Leonard Abraham Prize from the IEEE Communications Society, the 2017 IEEE Simon Ramo Medal, the 2017 AACC Richard E. Bellman Control Heritage Award, and the 2018 AIAA Aerospace Communications Award. In 2016 he was inducted in the University of Maryland A. J. Clark School of Engineering Innovation Hall of Fame. In June 2018 he was awarded a Doctorate Honoris Causa by his alma mater the National Technical University of Athens, Greece. His research interests include systems, control, optimization, autonomy, machine learning, artificial intelligence, communication networks, applied mathematics, signal processing and understanding, robotics, computing systems, formal methods and logic, network security and trust, systems biology, healthcare management, model-based systems engineering. He has been awarded twenty patents and honored with many awards as innovator and leader of economic development.

Time and place: Friday, October 1, 2021 - 4:00-5:00pm
via Zoom

Izchak Lewkowicz
Professor, School of Electrical and Computer Engineering
Ben-Gurion University of the Negev, Beer-Sheva, Israel

Passive Linear Time-invariant Systems - Characterization through Structure

Abstract Passivity is a basic physical property. We show here that the family of linear time-invariant passive systems may be characterized by the structure of the whole set, which turns out to be matrix-convex.

In the continuous-time case it is, in addition, a cone that is closed under inversion and maximal non-singular/analytic. In the discrete-time framework, this matrix-convex family is, in addition, a maximal set closed under products of its elements. A future application will be presented.

This talk is aimed at students as well.

Short Bio B.Sc., M.Sc., and D.Sc. in Electrical Engineering, Technion, Haifa (1979, 1986, and 1990, respectively). Post-doctoral positions: Institute of Mathematics and Applications, Minneapolis, MN, 1992; Imperial College, London, UK, 1993-1996. Since 1997, a faculty member in the School of Electrical and Computer Engineering, Ben-Gurion University, Israel. Associate editor of "Complex Analysis and Operator Theory" (Birkhauser).

Time: Friday, April 16, 2021 - 10:30 a.m PST
Place: https://uci.zoom.us/j/96009906336?pwd=T0ZxR1J1eG51U0NRNXpLbjVjZ2RWQT09

Yannis Kevrekidis
Johns Hopkins University

Reconcilable Differences

Abstract I will discuss several old and new examples of extracting dynamic models from data using techniques from manifold learning/machine learning. I will then focus on the problem of matching different models of the same data/phenomenon: the construction of data-driven diffeomorphisms that map different realizations of the same "truth" to each other. I will discuss several different cases: matching models across scales and across fidelities, matching physical models with ML ones, and matching different neural network models. I will also describe a useful tool for the data-driven construction of such "mirrors," matching systems to each other: a local conformal auto-encoder.

Short Bio

Yannis Kevrekidis is the Bloomberg Distinguished Professor in the Departments of Chemical and Biomolecular Engineering, and Applied Mathematics and Statistics and in the Johns Hopkins University School of Medicine’s Department of Urology.

Kevrekidis is a member of the American Academy of Arts and Sciences and has been a Packard Fellow, an NSF Presidential Young Investigator and a Guggenheim Fellow. He holds the Colburn, the Wilhelm, and the Computing in Chemical Engineering awards of the AIChE; the Crawford Prize and the W.T. and Idalia Reid Prize of SIAM; and a Senior Humboldt Prize. He has been the Gutzwiller Fellow at the Max Planck Institute for the Physics of Complex Systems in Dresden and a Rothschild Distinguished Visitor at the Newton Institute at Cambridge University. He is currently a senior Hans Fischer Fellow at IAS-TUM in Munich and an Einstein Visiting Fellow at FU/Zuse Institut Berlin. In 2015, he was elected a corresponding member of the Academy of Athens. He also holds a career Teaching Award from the school of engineering at Princeton University.

Kevrekidis earned a bachelor's degree in chemical engineering at the National Technical University in Athens and a doctorate from the University of Minnesota's Department of Chemical Engineering and Materials Science. He arrived at Johns Hopkins in 2017 after serving as the Pomeroy and Betty Perry Smith Professor in Engineering at Princeton University, where he was professor of chemical and biological engineering, senior faculty in applied and computational mathematics and associate faculty member in mathematics.

For more info, see https://engineering.jhu.edu/chembe/faculty/yannis-kevrekidis.

Time: Wednesday, January 27 from 9:00-10:00am PST
Place: https://uci.zoom.us/j/95409976083?pwd=UlNia2xOQ1pNaytiWGlSVjRaUk4vQT09

Melanie Zeilinger
ETH Zurich

Safe learning-based control using Model Predictive Control

Abstract Learning has seen great interest in the domain of automatic control and various demonstrations show the potential of learning-based control paradigms. The question of safety has been recognized as a central challenge for the widespread success of these promising techniques in real-life and industrial settings. While different notions of safety exist, I will focus on the satisfaction of critical safety constraints in this talk, a common and intuitive form of specifying safety in many applications. Model predictive control (MPC) is an established control technique for addressing constraint satisfaction with demonstrated success in various industries. However, it requires a sufficiently descriptive system model as well as a suitable formulation of the control objective to provide the desired guarantees and solve the problem via numerical optimization. Reinforcement learning, in contrast, has demonstrated its success for complex problems where a mathematical problem representation is not available, by directly interacting with the system, but usually at the cost of safety guarantees.

In this talk, I will present different options for how learning and MPC can be combined to overcome some of the individual difficulties of both MPC and available reinforcement learning methods. I will in particular discuss learning for inferring a model of the system dynamics, or for tuning the objective in MPC, as well as the use of MPC as a safety filter, providing a modular approach for augmenting high-performance learning-based controllers with constraint satisfaction properties. The results will be highlighted using examples from robotics.
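The safety-filter idea can be sketched in one dimension (my own toy example, not from the talk): a learned policy proposes an input, and the filter solves the minimal one-step "MPC" problem of modifying that input as little as possible so the next state stays in the safe set. In this scalar case the optimal modification reduces to a clamp.

```python
# Toy safety filter for x_{t+1} = x_t + u_t with safe set X = [0, 10] and
# input set U = [-1, 1]. All numbers are illustrative assumptions.
X_LO, X_HI, U_MAX = 0.0, 10.0, 1.0

def safety_filter(x, u_learn):
    lo = max(-U_MAX, X_LO - x)        # tightest lower bound on u
    hi = min(U_MAX, X_HI - x)         # tightest upper bound on u
    return min(max(u_learn, lo), hi)  # closest admissible input to u_learn

# An aggressive "learned" policy that always pushes upward.
x, traj = 5.0, []
for _ in range(50):
    u = safety_filter(x, u_learn=1.0)
    x += u
    traj.append(x)
print("max state reached:", max(traj))
```

The filter is inactive while the proposal is safe and intervenes only at the boundary, which is exactly the modularity property that lets it wrap around any learning-based controller; a real MPC safety filter replaces the clamp with a constrained optimization over a prediction horizon.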

Short Bio

Melanie Zeilinger is an Assistant Professor at the Department of Mechanical and Process Engineering at ETH Zurich, Switzerland where she leads the Intelligent Control Systems group. She received the Diploma degree in engineering cybernetics from the University of Stuttgart, Germany, in 2006, and the Ph.D. degree with honors in electrical engineering from ETH Zurich, Switzerland, in 2011. From 2011 to 2012 she was a Postdoctoral Fellow with the Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. She was a Marie Curie fellow and Postdoctoral Researcher with the Max Planck Institute for Intelligent Systems, Tübingen, Germany until 2015 and with the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley, CA, USA, from 2012 to 2014. From 2018 to 2019 she was a professor at the University of Freiburg, Germany. Her awards include the ETH medal for her PhD thesis and an SNF Professorship grant. She is one of the organizers of the new Conference on Learning for Dynamics and Control (L4DC). Her research interests include safe learning-based control, as well as distributed control and optimization, with applications to robotics and human-in-the-loop control.

Time and place: Friday, May 29, 2020
Time: 10:30am - noon
via zoom

Jeff S. Shamma
King Abdullah University of Science and Technology (KAUST)

Game Theory and Self-organizing Decision Systems


The language of game theory naturally lends itself to distributed decision architectures, where individual, yet interconnected, actors take decisions based on local information and interactions. From the perspective of a system planner, a goal is to incentivize individual behaviors to induce desirable global outcomes. From the perspective of a system modeler, a goal is to understand possible emergent behaviors stemming from local interactions. This talk presents how game theory — more specifically game-theoretic learning — can address these issues in both natural and artificial settings, with examples drawn from distributed matching, multi-agent reinforcement learning and swarm robotics.
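As a tiny instance of the game-theoretic learning the talk surveys (my own example): alternating best-response dynamics in a 2x2 coordination game, where each player revises in turn using only local payoff information, converge to a Nash equilibrium.

```python
# 2x2 symmetric coordination game (illustrative payoffs): both players prefer
# to match, and matching on action 1 is the efficient equilibrium.
payoff = [[(2, 2), (0, 0)],   # payoff[a1][a2] = (u1, u2)
          [(0, 0), (3, 3)]]

def best_response(player, other_action):
    if player == 0:
        return max((0, 1), key=lambda a: payoff[a][other_action][0])
    return max((0, 1), key=lambda a: payoff[other_action][a][1])

a1, a2 = 0, 1                       # start mis-coordinated
for _ in range(10):                 # players revise in turn
    a1 = best_response(0, a2)
    a2 = best_response(1, a1)

# Verify the limit is a Nash equilibrium: no unilateral deviation helps.
assert payoff[a1][a2][0] >= payoff[1 - a1][a2][0]
assert payoff[a1][a2][1] >= payoff[a1][1 - a2][1]
print("converged profile:", (a1, a2))
```

Coordination games are potential games, so best-response dynamics cannot cycle; the richer learning rules in the talk (e.g., stochastic, payoff-based variants) extend this convergence logic to settings where deterministic best response is unavailable or can get stuck in inefficient equilibria.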

Short Bio

Jeff S. Shamma is a professor of electrical engineering and director for the Center of Excellence for NEOM Research at the King Abdullah University of Science and Technology, Saudi Arabia (KAUST). Prior to joining KAUST, he was the Julian T. Hightower Chair in Systems & Control at the Georgia Institute of Technology. Shamma received a doctorate in systems science and engineering from MIT in 1988. He is the recipient of an NSF Young Investigator Award, the AACC Donald P. Eckman Award, and the IFAC High Impact Paper Award, and he is a fellow of IEEE and IFAC. Shamma is currently serving as the editor-in-chief for the IEEE Transactions on Control of Network Systems and an associate editor of the IEEE Transactions on Robotics.

Time and place: Friday, May 22, 2020
Time: 10:30am - noon
via zoom

Cybergenetics - Theory and Methods for Building Genetic Control Systems in Living Cells


Humans have been influencing the DNA of plants and animals for thousands of years through selective breeding. Yet, it is only over the last three decades or so that we have gained the ability to manipulate the DNA itself and directly alter its sequences through the modern tools of genetic engineering. This has revolutionized biotechnology and ushered in the era of synthetic biology. Among the possible applications enabled by synthetic biology is the design and engineering of feedback control systems that act at the molecular scale in real-time to steer the dynamic behavior of living cells. Here, I will present our theoretical framework for the design and synthesis of such control systems and will discuss the main challenges in their practical implementation. I will then present the first designer gene network that attains integral feedback in a living cell and demonstrate its tunability and disturbance rejection properties. A growth control application shows the inherent capacity of this integral feedback control system to deliver robustness and highlights its potential use as a universal controller for regulation of biological variables in arbitrary networks. Finally, I will discuss the potential impact of biomolecular control systems in industrial biotechnology and medical therapy and bring attention to the opportunities that exist for control theorists to advance this young area of research.
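As a control-theoretic toy (not the molecular circuit presented in the talk), the disturbance-rejection property of integral feedback can be sketched in a few lines; the first-order process model, gain, and setpoint below are illustrative assumptions:

```python
# Toy illustration (not the molecular implementation): an integral
# controller rejects a constant disturbance d, driving a first-order
# process back to the setpoint. All numbers here are assumptions.
def simulate(d, setpoint=1.0, ki=2.0, T=40.0, dt=1e-3):
    x, z = 0.0, 0.0                  # process output, integrator state
    for _ in range(int(T / dt)):
        u = ki * z                   # actuation from the integrated error
        x += dt * (-x + u + d)       # first-order process with disturbance
        z += dt * (setpoint - x)     # integrate the tracking error
    return x

print(simulate(d=0.0), simulate(d=0.5))  # both settle at the setpoint 1.0
```

The steady state satisfies the integrator condition x = setpoint regardless of d, which is the robustness property the abstract highlights.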

Short Bio

Mustafa Khammash is the professor for control theory and systems biology at the Department of Biosystems Science and Engineering at ETH Zurich, Switzerland. He works in the areas of control theory, systems biology and synthetic biology. His lab develops theoretical, computational and experimental methods aimed at understanding the role of dynamics, feedback and randomness in biology. He is currently developing new theoretical and experimental approaches for the design of biomolecular control systems and for their realization in living cells. Khammash received his bachelor's degree from Texas A&M University in 1986 and his doctorate from Rice University in 1990, both in electrical engineering. In 1990, he joined the engineering faculty of Iowa State University, where he created the Dynamics and Control Program and led the control group until 2002. He then joined the engineering faculty at the UC Santa Barbara, where he was director of the Center for Control, Dynamical Systems and Computation until 2011, when he joined ETH Zurich. He is a fellow of the IEEE, IFAC and the Japan Society for the Promotion of Science.

Time and place: Wednesday, April 1, 2020 – POSTPONED due to Covid-19
Time: 11am-12noon
Room 4211 in the EG Building

New Results on Old Problems from Adaptive Least-Squares and Dynamic Games


Our talk covers recent results on achieving adaptive performance in least-squares algorithms and on computing feedback Nash equilibria in LQG dynamic games.

First, we will discuss results on recursive least-squares and related problems. Recursive least-squares algorithms use forgetting factors to adapt to non-stationary data. However, the forgetting factors are typically set heuristically. We show how to tune the forgetting factor to achieve near-optimal adaptive performance under a metric known as dynamic regret. Our results apply to a collection of forgetting-factor algorithms that includes recursive least-squares as well as variants of Newton's method and gradient descent.
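A minimal sketch of exponentially-weighted recursive least-squares may help fix ideas; the heuristically chosen forgetting factor and the drifting-parameter data below are made-up illustrations, not the tuning rule from the talk:

```python
import numpy as np

# Minimal exponentially-weighted RLS sketch. The forgetting factor lam is
# set heuristically here, which is exactly the practice that dynamic-regret
# tuning aims to replace. Data and parameters are made up.
def rls_forgetting(X, y, lam=0.95, delta=1e3):
    n = X.shape[1]
    theta = np.zeros(n)            # parameter estimate
    P = delta * np.eye(n)          # (scaled) inverse sample covariance
    for x, yt in zip(X, y):
        k = P @ x / (lam + x @ P @ x)         # gain vector
        theta = theta + k * (yt - x @ theta)  # correct with prediction error
        P = (P - np.outer(k, x @ P)) / lam    # discount old information
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
# non-stationary data: the true parameter jumps halfway through
theta_true = np.where(np.arange(400)[:, None] < 200, [1.0, -2.0], [3.0, 0.5])
y = np.sum(X * theta_true, axis=1) + 0.01 * rng.normal(size=400)

est = rls_forgetting(X, y, lam=0.9)
print(est)  # close to the post-change parameter [3.0, 0.5]
```

With lam = 1 the algorithm weighs all data equally and would average the two regimes; lam < 1 discounts the past and tracks the change.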

Next, we will describe recent work on the computation of feedback Nash equilibria of non-zero-sum linear quadratic Gaussian (LQG) dynamic games. Dynamic games arise widely in controls and economics. However, characterizing and computing the feedback Nash equilibria for non-zero-sum dynamic games is challenging, even with full state feedback. We will describe new sufficient conditions for the existence of a unique feedback Nash equilibrium based on classical Riccati equations. Our conditions apply to both state-feedback and output-feedback problems. When the sufficient conditions hold, we show how the equilibrium can be approximated via a projected gradient algorithm.
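For a flavor of the computation, here is a hedged sketch of the textbook backward recursion for feedback Nash gains in a finite-horizon, two-player, nonzero-sum LQ game (the deterministic version; this is not the talk's sufficient conditions or its projected-gradient algorithm, and all matrices are made-up examples):

```python
import numpy as np

# Textbook backward recursion for feedback Nash gains of a two-player,
# nonzero-sum, finite-horizon LQ game (deterministic version).
def lq_game_feedback_nash(A, B1, B2, Q1, Q2, R1, R2, horizon=50):
    m1 = B1.shape[1]
    P1, P2 = Q1.copy(), Q2.copy()
    for _ in range(horizon):
        # stagewise coupled conditions: each gain is optimal given the
        # other's; stack both first-order conditions into one linear system
        top = np.hstack([R1 + B1.T @ P1 @ B1, B1.T @ P1 @ B2])
        bot = np.hstack([B2.T @ P2 @ B1, R2 + B2.T @ P2 @ B2])
        rhs = np.vstack([B1.T @ P1 @ A, B2.T @ P2 @ A])
        K = np.linalg.solve(np.vstack([top, bot]), rhs)
        K1, K2 = K[:m1], K[m1:]
        Acl = A - B1 @ K1 - B2 @ K2
        P1 = Q1 + K1.T @ R1 @ K1 + Acl.T @ P1 @ Acl
        P2 = Q2 + K2.T @ R2 @ K2 + Acl.T @ P2 @ Acl
    return K1, K2

A = np.array([[1.0, 0.2], [0.0, 1.0]])   # made-up marginally unstable plant
B1 = np.array([[0.0], [1.0]])            # player 1 actuates the second state
B2 = np.array([[1.0], [0.0]])            # player 2 actuates the first state
K1, K2 = lq_game_feedback_nash(A, B1, B2, np.eye(2), np.eye(2),
                               np.eye(1), np.eye(1))
rho = max(abs(np.linalg.eigvals(A - B1 @ K1 - B2 @ K2)))
print(rho)  # spectral radius of the closed loop, well inside the unit circle
```

Existence and uniqueness of a solution to the stacked linear system at every stage is precisely the delicate point that sufficient conditions of the kind described in the talk address.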

Short Bio

Andrew Lamperski received the B.S. degree in biomedical engineering and mathematics in 2004 from the Johns Hopkins University, Baltimore, MD, and the Ph.D. degree in control and dynamical systems in 2011 from the California Institute of Technology. He held postdoctoral positions in control and dynamical systems at the California Institute of Technology from 2011 to 2012 and in mechanical engineering at The Johns Hopkins University in 2012. From 2012 to 2014, he did postdoctoral work in the Department of Engineering, University of Cambridge, on a scholarship from the Whitaker International Program. In 2014, he joined the Department of Electrical and Computer Engineering, University of Minnesota as an Assistant Professor. His research interests include optimal control and machine learning, with applications to neuroscience and robotics.

Time and place: Friday, November 1, 2019
Time: 4pm-5:30pm
Room 3008 in the Calit2 Building

Combinatorial Motion Planning Algorithms for Mobile Robots


Mobile robots such as unmanned aerial, ground and underwater vehicles are widely used in civil and military applications for monitoring and surveillance. In this talk, I will provide a general introduction to a class of planning problems involving multiple robots that we have focused on over the last decade. Our goal is to develop computational tools that not only aim at finding feasible solutions to the mission planning problems but also provide theoretical guarantees on the quality of these solutions. A brief overview of the problem setup, approach and computational tools will be presented.

Short Bio

Dr. Sivakumar Rathinam is an Associate Professor in the Department of Mechanical Engineering at Texas A&M University. He received a Ph.D. degree from the University of California, Berkeley in 2007. He worked as a research scientist at the NASA Ames Research Center in California from 2007 to 2008. He has been at Texas A&M since 2009. His research interests include autonomous vehicles, motion planning, optimization, vision based control, and air traffic control. He is an Associate Editor of IEEE Transactions on Robotics and Automation Letters, and the ASME Journal on Dynamic Systems, Measurement, and Control. He was awarded the Air Force Faculty Fellowship in 2015 and received the best paper award in the 2015 International Conference on Unmanned Aircraft Systems. He is also a senior member of the IEEE.

Time and place: Thursday, May 9
Time: 11am-12noon
Engineering Gateway 4211

The feedback particle filter (FPF) algorithm

Amirhossein Taghvaei, UIUC


In this talk, I will give an overview of the feedback particle filter (FPF) algorithm. The FPF is a numerical algorithm, comprised of a system of controlled interacting particles, designed to approximate the solution to the nonlinear filtering problem. I will present and discuss three important questions about the FPF: (1) how to design the control law; (2) how to approximate the control law in terms of a finite number of particles; and (3) what the error and long-term stability properties of the system with a finite number of particles are.
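In the linear-Gaussian special case the FPF gain reduces to a Kalman-type gain computed from the particle ensemble, which makes a compact sketch possible; the scalar model and all parameters below are illustrative assumptions:

```python
import numpy as np

# Sketch of the linear(-Gaussian) special case of the FPF, where the gain
# reduces to a Kalman-type gain computed from the particle ensemble.
# Illustrative model: dX = a X dt + sig dB, dZ = h X dt + dW
# (unit observation-noise intensity); all numbers are assumptions.
rng = np.random.default_rng(1)
a, sig, h = -1.0, 0.5, 1.0
dt, T, N = 0.01, 10.0, 1000

x = 1.0                              # true (hidden) state
particles = rng.normal(1.0, 0.5, N)  # controlled interacting particles

for _ in range(int(T / dt)):
    x += a * x * dt + sig * np.sqrt(dt) * rng.normal()
    dZ = h * x * dt + np.sqrt(dt) * rng.normal()
    m, v = particles.mean(), particles.var()
    K = h * v                        # ensemble Kalman-type gain
    # feedback control with the symmetrized innovation term
    dI = dZ - h * (particles + m) / 2 * dt
    particles += (a * particles * dt
                  + sig * np.sqrt(dt) * rng.normal(size=N)
                  + K * dI)

print(particles.mean())  # approximates the conditional mean of x
```

The symmetrized innovation (average of the particle's own prediction and the ensemble prediction) is the structural feature that distinguishes the FPF update from a naive Kalman correction applied particle-by-particle.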

Short Bio

Amirhossein Taghvaei is a Ph.D. student in the Department of Mechanical Engineering at the University of Illinois at Urbana-Champaign (UIUC). He received his master's degree in Mathematics from UIUC, and two B.S. degrees, in Mechanical Engineering and in Physics, from Sharif University of Technology, Iran. He is a member of the Decision and Control group in the Coordinated Science Laboratory, advised by Prof. Prashant Mehta. His research interests lie at the intersection of control theory and machine learning.

Time and place: Friday, April 26
Time: 10:30-11:30am
McDonnell Douglas Engineering Auditorium

Secure state-estimation and control for dynamical systems under adversarial attacks

Paulo Tabuada, UCLA


Control systems work silently in the background to support much of the critical infrastructure we have grown used to. Water distribution networks, sewer networks, gas and oil networks, and the power grid are just a few examples of critical infrastructure that rely on control systems for their normal operation. These systems are becoming increasingly networked, both for distributed control and sensing and for remote monitoring and reconfiguration. Unfortunately, once these systems are connected to the internet, they become vulnerable to attacks that, although launched in the cyber domain, aim to manipulate the physical domain. In this talk I will discuss the problem of state estimation and control for linear dynamical systems when some of the sensor measurements are subject to an adversarial attack. I will show that a separation result holds, so that controlling physical systems under active adversaries can be reduced to a state-estimation problem under active adversaries. I will characterize the maximal number of attacked sensors under which state estimation is possible and propose computationally feasible estimation algorithms.
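A toy version of the attacked-sensor bound: with p redundant sensors of a scalar quantity, any q < p/2 arbitrarily corrupted measurements can be tolerated, for instance by taking the median (a simplified illustration of the counting argument, not the estimation algorithms from the talk):

```python
import numpy as np

# Simplified illustration of attack tolerance with redundant sensors:
# with p sensors measuring the same scalar and at most q < p/2 of them
# arbitrarily corrupted, the median still recovers the true value,
# because a majority of the sorted measurements is honest.
def secure_estimate(measurements):
    return float(np.median(measurements))

x_true = 3.7
honest = np.full(5, x_true)            # 5 honest sensors
attacked = np.array([100.0, -50.0])    # q = 2 corrupted sensors, 2 < 7/2
y = np.concatenate([honest, attacked])
print(secure_estimate(y))  # 3.7, regardless of the attack values
```

With q >= p/2 the adversary could make a fake value look like the majority, which is the intuition behind the maximal-attack characterization mentioned in the abstract.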

Short Bio

Paulo Tabuada was born in Lisbon, Portugal, one year after the Carnation Revolution. He received his “Licenciatura” degree in Aerospace Engineering from Instituto Superior Tecnico, Lisbon, Portugal in 1998 and his Ph.D. degree in Electrical and Computer Engineering in 2002 from the Institute for Systems and Robotics, a private research institute associated with Instituto Superior Tecnico. Between January 2002 and July 2003 he was a postdoctoral researcher at the University of Pennsylvania. After spending three years at the University of Notre Dame, as an Assistant Professor, he joined the Electrical and Computer Engineering Department at the University of California, Los Angeles, where he currently is the Vijay K. Dhir Professor of Engineering. Paulo Tabuada's contributions to cyber-physical systems have been recognized by multiple awards including the NSF CAREER award in 2005, the Donald P. Eckman award in 2009, the George S. Axelby award in 2011, the Antonio Ruberti Prize in 2015, and the grade of fellow awarded by IEEE in 2017. He has been program chair and general chair for several conferences in the areas of control and of cyber-physical systems such as NecSys, HSCC, and ICCPS. He currently serves as the chair of HSCC’s steering committee and he served on the editorial board of the IEEE Embedded Systems Letters and the IEEE Transactions on Automatic Control.

Time and place: Friday, April 19
Time: 2:00pm-3:00pm
Calit2 Conference Room 3008

A Decoupling Principle in Stochastic Optimal Control and Its Implications

Suman Chakravorty, Aerospace Engineering, Texas A&M University


The problem of Stochastic Optimal Control is ubiquitous in Robotics and Control since it is the fundamental formulation for decision-making under uncertainty. The answer to the problem can be computed by solving an associated Dynamic Programming (DP) problem. Unfortunately, the DP paradigm is also synonymous with the infamous “Curse of Dimensionality (COD)”, a phrase coined by the discoverer of the Dynamic Programming paradigm, Richard Bellman, nearly 60 years ago, to capture the fact that the computational complexity of solving a DP problem grows exponentially in the dimension of the state space of the problem.

In this talk, we will introduce a newly discovered paradigm in stochastic optimal control, called “Decoupling”, that allows us to separate the design of the open and closed loops of a stochastic optimal control problem with continuous control space. This decoupled solution allows us to break the COD inherent in DP problems while remaining near-optimal, to third order, with respect to the true stochastic optimal control. The implications of the decoupled design are examined in the context of Model Predictive Control (MPC) and Reinforcement Learning (RL). We shall introduce two algorithms, the Trajectory-Optimized Perturbation Feedback Control (T-PFC) and the Decoupled Data-based Control (D2C), for the MPC and RL problems, respectively. We shall also examine the consequences of the decoupling principle in partially observed/belief-space planning problems and present the Trajectory-Optimized Linear Quadratic Gaussian (T-LQG) algorithm.


Suman Chakravorty obtained his B.Tech in Mechanical Engineering in 1997 from the Indian Institute of Technology, Madras and his PhD in Aerospace Engineering from the University of Michigan, Ann Arbor in 2004. From August 2004- August 2010, he was an Assistant Professor with the Aerospace Engineering Department at Texas A&M University, College Station and since August 2010, he has been an Associate Professor in the department. Dr. Chakravorty’s broad research interests lie in the estimation and control of stochastic dynamical systems with application to autonomous, distributed robotic mapping and planning, and situational awareness problems. He is a member of AIAA, ASME and IEEE. He is an Associate Editor for the ASME Journal on Dynamical Systems, Measurement and Control and the IEEE Robotics and Automation Letters.

Time and place: Friday, October 26, 2018
Time: 10:30-11:30am
McDonnell Douglas Engineering Auditorium

Stochastic Vehicle Routing for Max Entropic Surveillance

Francesco Bullo, Mechanical Engineering, UCSB


This talk addresses the design of efficient surveillance and vehicle-routing strategies for robotic networks in dynamic environments. We focus on how to search an area in a persistent manner – with minimal average time to detection, with unpredictable trajectories and with optimally partitioned workload among multiple vehicles. The technical approach is based on Markov chains, optimization methods, convexity properties, relaxations and coordination strategies. Coauthors: Xiaoming Duan, Mishel George.
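One ingredient of unpredictable patrolling can be illustrated directly: the entropy rate of the patrol Markov chain, which is zero for a deterministic route and maximal for uniform transitions (a simplified illustration of the trade-off, not the algorithms from the talk; the three-site graph is a made-up example):

```python
import numpy as np

# Entropy rate of an irreducible Markov chain with transition matrix P:
# H = sum_i pi_i * H(P[i, :]), with pi the stationary distribution.
# A deterministic patrol route has H = 0 (perfectly predictable); uniform
# transitions maximize H at the cost of slower coverage.
def entropy_rate(P):
    # stationary distribution: left eigenvector of P for eigenvalue 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))

cycle = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)  # fixed route
uniform = np.full((3, 3), 1 / 3)                                  # max entropy
print(entropy_rate(cycle), entropy_rate(uniform))  # ~0 and log(3) ~ 1.0986
```

Maximizing entropy rate subject to constraints on mean detection time is the kind of trade-off the talk's Markov-chain design addresses.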


Francesco Bullo is a professor in the Department of Mechanical Engineering and the Center for Control, Dynamical Systems and Computation at UC Santa Barbara. His research interests focus on network systems and distributed control with application to robotic coordination, power grids and social networks. He is the coauthor of “Geometric Control of Mechanical Systems” (Springer, 2004) and “Distributed Control of Robotic Networks” (Princeton, 2009); his “Lectures on Network Systems” (CreateSpace, 2018) is available on his website. Bullo received best paper awards for his work in IEEE Control Systems, Automatica, SIAM Journal on Control and Optimization, IEEE Transactions on Circuits and Systems, and IEEE Transactions on Control of Network Systems. He is a fellow of the IEEE and IFAC. He has served on the editorial boards of IEEE, SIAM and ESAIM journals, and he serves as the 2018 IEEE CSS president.

Time and place: Friday, October 19, 2018
Time: 1:00-2:00pm
Engineering Gateway 4211

Stochastic Control of Finite and Infinite Dimensional Systems Under Uncertainty: Theory, Algorithms and Applications

Evangelos Theodorou, Aerospace Engineering, GaTech


In this talk, I will present an overview of projects related to stochastic control and machine learning methods and their applications to dynamical systems represented by stochastic differential and stochastic partial differential equations. These are typically systems that exist in autonomy and robotics, as well as in areas of applied physics such as fluid mechanics, plasma physics and turbulence. I will discuss different forms of uncertainty representation that span Gaussian Processes, Polynomial Chaos, Deep Probabilistic Neural Networks and Q-Wiener processes. Finally, I will show applications to robotic terrestrial agility, perceptual control, social networks, large-scale swarms and control of stochastic fields, and conclude with future directions.


Evangelos A. Theodorou is an assistant professor in the Guggenheim School of Aerospace Engineering at the Georgia Institute of Technology. He is also affiliated with the Institute for Robotics and Intelligent Machines. Evangelos Theodorou earned his Diploma in Electronic and Computer Engineering from the Technical University of Crete (TUC), Greece, in 2001. He also received an M.Sc. in Production Engineering from TUC in 2003, an M.Sc. in Computer Science and Engineering from the University of Minnesota in spring 2007, and an M.Sc. in Electrical Engineering on dynamics and controls from the University of Southern California (USC) in spring 2010. In May 2011 he graduated with his Ph.D. in Computer Science at USC. After his Ph.D., he was a Postdoctoral Research Fellow with the Department of Computer Science and Engineering, University of Washington, Seattle. Evangelos Theodorou is the recipient of the King-Sun Fu Best Paper Award of the IEEE Transactions on Robotics for the year 2012 and recipient of the Best Paper Award in Cognitive Robotics at the International Conference on Robotics and Automation 2011. He was also a finalist for the Best Paper Award at the International Conference on Humanoid Robotics 2010, the International Conference on Robotics and Automation 2017, and Robotics: Science and Systems 2018. His theoretical research spans the areas of stochastic optimal control theory, machine learning, information theory and statistical physics. Applications involve learning, planning and control in autonomous, robotic and aerospace systems.

Time and place: Friday, April 13, 2018
Time: 10:30-11:30am
McDonnell Douglas Engineering Auditorium

Wind Farm Modeling and Control for Power Grid Support

Dennice Gayme, Johns Hopkins University


Traditional wind farm modeling and control strategies have focused on layout design and maximizing wind power output. However, transitioning into the role of a major power system supplier necessitates new models and control designs that enable wind farms to provide the grid services that are often required of conventional generators. This talk introduces a model-based wind farm control approach for tracking a time-varying power signal, such as a power grid frequency regulation command. The underlying time-varying wake model extends commonly used static models to account for wake advection and lateral wake interactions. We perform numerical studies of the controlled wind farm using a large eddy simulation (LES) with actuator disks as a wind farm model with local turbine thrust coefficients (synthetic pitch) as the control actuation. Our results show that embedding this type of dynamic wake model within a model-based receding-horizon control framework leads to a controlled wind farm that qualifies to participate in markets for correcting short-term imbalances in active power generation and load on the power grid (frequency regulation). Accounting for the aerodynamic interactions between turbines within the proposed control strategy yields large increases in efficiency over prevailing approaches by achieving commensurate up-regulation with smaller derates (reductions in wind farm power set points). This potential for derate reduction has important economic implications because smaller derates directly correspond to reductions in the loss of bulk power revenue associated with participating in regulation markets.


Dennice F. Gayme is an assistant professor in mechanical engineering and the Carol Croft Linde Faculty Scholar at the Johns Hopkins University. She earned her bachelor's degree from McMaster University in 1997 and a master's degree from the UC Berkeley in 1998, both in mechanical engineering. She received her doctorate in control and dynamical systems in 2010 from the California Institute of Technology. Her research interests are in modeling, analysis and control for spatially distributed and large-scale networked systems in applications such as wall-bounded turbulent flows, wind farms, power grids and vehicular networks. She was a recipient of the JHU Catalyst Award in 2015, a 2017 ONR Young Investigator award, and an NSF CAREER award in 2017.

Time and place: Wednesday, March 14, 2018
Time: 11-12 noon
Seminar Room 3008 - Calit2

Towards Computationally Scalable Vision-Based Navigation

Patricio Vela, Georgia Tech


Navigation by autonomous vehicles or other forms of unmanned autonomous systems is a rapidly developing area within robotics. Advances in technology and manufacturing mean that it is possible to deploy robots that span a couple of orders of magnitude in size and available on-board computation. Our lab is interested in identifying a common vision-based navigation framework that can scale across the diversity of autonomous platforms envisioned by roboticists. Central to this vision is a means to minimally process visual information while maximally extracting task-relevant information. We propose to employ a perception-space representation, which aligns with Marr's 2.5D sketch, and to integrate it with best-practice solutions in the perceive-plan-act robotics pipeline. Furthermore, we explore how learning-based strategies can provide constant-time outputs compatible with this pipeline. In achieving both objectives we can approach the goal of realizing computationally scalable visual navigation.


Patricio A. Vela is an associate professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. Dr. Vela's research focuses on geometric perspectives to control theory and computer vision, particularly how concepts from control and dynamical systems theory can serve to improve computer vision algorithms used in the decision-loop. More recent efforts expanding his research program involve studying the role of machine learning in adaptive control and autonomous robotics, and investigating how modern advances in adaptive and optimal control theory may improve locomotion effectiveness for biologically-inspired robotics. These efforts support a broad program to understand important research challenges associated with autonomous robotic operation in uncertain environments. Dr. Vela received a B.S. and a Ph.D. from the California Institute of Technology.

Time and place: Wednesday, March 7, 2018, 1:30-2:30pm
EH 2430 (Colloquia Room)

Electronic Traps for Mechanical Waves: A framework for piezo-enabled tunability of elastic waves

Stefano Gonella, University of Minnesota


One of the main challenges in the design of versatile engineering devices is achieving tunability, i.e., the ability to tune a system's response to an evolving operating environment. In the context of vibration control, for example, this can lead to the design of semi-active mechanical filters. The opportunities are even broader in the realm of wave control, where one can engineer or boost the spectral and spatial wave manipulation capabilities of a mechanical system. The piezoelectric route to tunable structures relies on the use of piezoelectric elements to actively modify their inherent mechanical response. Of particular interest are methods involving shunts, whereby piezoelectric patches are passively connected to appropriately designed electric circuits, to yield a modification of the effective properties of the material and a correction of the global behavior of the host medium. This presentation focuses on the special class of resistive-inductive (RL) circuits, which de facto act as electro-mechanical resonators. When properly tuned, these resonators interact with a propagating wavefield by selectively distilling one or more frequencies from the signal and consequently attenuating and distorting the wave. Heterogeneous configurations involving multiple populations of resonators can be realized according to a plethora of (possibly random) spatial arrangements to achieve polychromatic and broadband effects, de facto behaving as tunable electromechanical rainbow materials. In two-dimensional lattice domains, the same paradigm can be used to actively manipulate the inherent frequency-selective spatial patterns exhibited by propagating wavefields. This approach ultimately results in the design of programmable structures and materials that can be used as tunable filters, mechanical signal jammers and directional actuators and sensors.
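The basic tuning rule behind a resonant shunt can be sketched numerically: the shunt inductance L and the patch's clamped capacitance C form an electrical resonator at f = 1/(2π√(LC)), so choosing L targets a wave frequency. The capacitance and target frequency below are hypothetical numbers for illustration only:

```python
import math

# Tuning sketch for a resonant (RL) piezoelectric shunt: the shunt
# inductance L and the patch's clamped capacitance Cp form an electrical
# resonator at f = 1 / (2*pi*sqrt(L*Cp)). Both numbers below are
# hypothetical, chosen only to illustrate the arithmetic.
Cp = 20e-9        # assumed clamped capacitance of the patch: 20 nF
f_target = 5e3    # assumed wave frequency to attenuate: 5 kHz

L = 1.0 / ((2 * math.pi * f_target) ** 2 * Cp)
print(L)          # ~0.051 H

f_check = 1.0 / (2 * math.pi * math.sqrt(L * Cp))
```

Inductances of this size are one reason synthetic (op-amp based) inductors are commonly used in shunt implementations; the resistive part of the RL circuit then sets the damping of the resonance.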


Stefano Gonella received Ph.D. and M.S. degrees in aerospace engineering from the Georgia Institute of Technology in 2007 and 2005, respectively. Previously, he received a Laurea, also in aerospace engineering, from Politecnico di Torino, Italy, in 2003. He joined the University of Minnesota in 2010, after 3 years of post-doctoral experience at Northwestern University. His research interests revolve around the modeling, simulation and experimental reconstruction of wave phenomena in complex structures and materials, with emphasis on smart cellular solids, phononic crystals and programmable acoustic metamaterials. He is also interested in the development of methodologies for structural and material diagnostics through the mechanistic adaptation of concepts of machine learning and computer vision.

Time and place: Friday, February 9, 2018 - 10:30 a.m.
McDonnell Douglas Engineering Auditorium (MDEA)

Active and passive control: from thought experiment to Formula One racing

Malcolm C. Smith, University of Cambridge


The talk will explain the origins and applications of a new mechanical device (the “inerter”) which rapidly became a standard component in Formula One Racing and IndyCars and is now being considered for applications from railway suspensions to buildings. The origins of the idea in a seemingly innocent question in mathematical control theory in the author's research will be described. The broader context of the idea - namely the close link between control and network synthesis, and the re-opening of research in classical circuits - will be described in a tutorial manner. The talk will conclude with an account of the manner in which the inerter entered the popular press by reason of a Formula One “spy scandal”.


Malcolm C. Smith is Professor of Control Engineering and Head of the Control Group in the Department of Engineering at the University of Cambridge. His research interests are in the areas of robust control, nonlinear systems, electrical and mechanical networks, and automotive applications. He is well-known for his invention of the inerter mechanical device currently used in Formula One motor racing and elsewhere. He received degrees in mathematics and control engineering from Cambridge University, England. He was subsequently a Research Fellow at the German Aerospace Centre, Oberpfaffenhofen, a Visiting Assistant Professor and Research Fellow with the Department of Electrical Engineering at McGill University, Montreal, and an Assistant Professor with the Department of Electrical Engineering, Ohio State University, Columbus, before returning to Cambridge in 1990 as a Lecturer in Engineering. Professor Smith is a Fellow of the IEEE and the Royal Academy of Engineering. He received the 1992 and 1999 George Axelby Best Paper Awards, in the IEEE Transactions on Automatic Control, both times for joint work with T.T. Georgiou. He received the 2009 Sir Harold Hartley Medal of the Institute of Measurement and Control for outstanding contributions to the technology of measurement and control.

Time and place: Friday, February 2, 2018 - 10:30 a.m.
McDonnell Douglas Engineering Auditorium (MDEA)

Scott Moura
Professor, Director of eCAL, University of California, Berkeley

Identification, Control and Fault Diagnostics of PDE Battery Electrochemistry Models

Abstract Batteries are ubiquitous. However, today’s batteries are expensive, range-limited, power-restricted, too quick to die, too slow to charge, and susceptible to safety issues. For this reason, model-based battery management systems (BMS) are of extreme interest. In this talk, we discuss eCAL’s recent research with electrochemical-based BMS, which are modeled by nonlinear partial differential equations (PDEs). Specifically, we discuss optimal experiment design for parameter identification, optimal safe-fast charging control, and fault diagnostics. Finally, we close with exciting new perspectives for next-generation battery systems.

Bio Scott Moura is an assistant professor in civil and environmental engineering at UC Berkeley and director of eCAL. He received his doctorate from the University of Michigan in 2011, a master's degree from the University of Michigan in 2008, and a bachelor's degree from UC Berkeley in 2006 - all in mechanical engineering. He was a postdoctoral scholar at UC San Diego in the Cymer Center for Control Systems and Dynamics and a visiting researcher in the Centre Automatique et Systèmes at MINES ParisTech in Paris, France. He is a recipient of the O. Hugo Schuck Best Paper Award, Carol D. Soc Distinguished Graduate Student Mentoring Award, Hellman Faculty Fellows Award, UC Presidential Postdoctoral Fellowship, National Science Foundation Graduate Research Fellowship, University of Michigan Distinguished ProQuest Dissertation Honorable Mention, University of Michigan Rackham Merit Fellowship and Distinguished Leadership Award. Moura has received multiple conference best paper awards, as both an advisor and a student. His research interests include control and estimation theory for PDEs, optimization, machine learning, batteries, electric vehicles and the smart grid.

Time and place: Friday, February 26, 2018 - 2:00-3:00pm
MAE Conference Hall, Engineering Gateway, 4th floor

Izchak Lewkowicz
Professor, Electrical Engineering
Ben Gurion University Be’er Sheva, Israel

Dissipative Systems & Convex Invertible Cones

Abstract This is ongoing research spanning quite a few years. It focuses on the fact that physical dissipativity (in continuous time) and the mathematical notion of Convex Invertible Cones are intimately related. In this talk, we try to substantiate this observation and, as time permits, to point at applications.

Tuesday, November 7, 2017
Jordan Berg
NSF, CMMI Program Director
Professor and Co-Director of Nano Tech Center
Mechanical Engineering
Texas Tech University

Vibrational Control, Stability Maps, and Averaging

Time and place: Tuesday, November 7, 2017 - 10:30 a.m.
CALIT2 Auditorium

Abstract Most control systems engineers would say that feedback is required to stabilize an unstable plant. However, it has long been known that an appropriately designed open-loop periodic input can also modify stability. This technique, sometimes called “vibrational control,” is a unique control method that may succeed where conventional feedback is infeasible. However, vibrational control has not had widespread success in applications since it was introduced in the 1980s. In this talk, we will attempt to show that this is in part because vibrational control has largely been studied by the control community in the context of averaging theory, where the frequency of the stabilizing signal is not known in advance but is required to be “sufficiently large.” The talk presents vibrational control in the context of the classical Hill's equation, which allows some important limitations of standard averaging methods to be clearly observed, and proposes an alternative framework for vibrational control design and analysis based on the stability map. This approach has been successful outside of the control community for the design of quadrupole mass filters and quadrupole ion traps.
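The classical example behind these ideas is the Kapitza pendulum, a special case of Hill's/Mathieu equation: fast vertical vibration of the pivot, with amplitude a and frequency ω satisfying roughly aω > √(2gl), stabilizes the inverted equilibrium without any feedback. A hedged simulation sketch (all parameters are illustrative):

```python
import math

def simulate(theta0, amp, w_f, g=9.8, l=0.5, T=2.0, dt=1e-5):
    """Linearized pendulum about the inverted position with a vertically
    vibrating pivot: theta'' = (g/l - (amp*w_f**2/l)*cos(w_f*t)) * theta.
    Returns the peak |theta| over the run."""
    th, w, t, peak = theta0, 0.0, 0.0, abs(theta0)
    for _ in range(int(T / dt)):
        acc = (g / l - (amp * w_f ** 2 / l) * math.cos(w_f * t)) * th
        w += dt * acc          # semi-implicit Euler step
        th += dt * w
        t += dt
        peak = max(peak, abs(th))
    return peak

# Without vibration the inverted equilibrium is exponentially unstable:
print(simulate(0.05, amp=0.0, w_f=0.0))     # grows very large
# Fast vibration with amp*w_f > sqrt(2*g*l) ~ 3.13 keeps it bounded:
print(simulate(0.05, amp=0.02, w_f=300.0))  # stays small
```

Note the classical averaging flavor of the example: the stabilizing condition holds only for a "sufficiently large" ω, which is exactly the limitation the stability-map viewpoint in the talk is meant to overcome.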

Bio: Jordan M. Berg received the BSE and MSE degrees in Mechanical and Aerospace Engineering from Princeton University in 1981 and 1984. He worked in the Attitude Control Analysis group at RCA Astro-Electronics in East Windsor, NJ, from 1983 to 1986. He received the PhD in Mechanical Engineering and Mechanics, and the MS in Mathematics and Computer Science from Drexel University in 1992. He held postdoctoral appointments at the Air Force Research Labs in Dayton, OH, and the Institute for Mathematics and Its Applications in Minneapolis, MN. Since 1996 he has been at Texas Tech University, where he is Professor of Mechanical Engineering and Co-Director of the Nano Tech Center. As a Fulbright Scholar in 2008 he held visiting faculty appointments at the University of Ruhuna and University of Peradeniya in Sri Lanka. He is a Professional Engineer in the State of Texas and a Fellow of the ASME. His research interests include nonlinear and geometric control, and the modeling, simulation, design, and control of nano- and microsystems. In 2014 he joined the Civil, Mechanical, and Manufacturing Innovation Division of the Engineering Directorate of the National Science Foundation under an IPA agreement. He currently serves as a Program Officer for the Dynamics, Control and Systems Diagnostics program, the National Robotics Initiative, and the C3 Soft Robotics EFRI topic.

Friday, November 3, 2017
Mario A. Rotea
Professor and Erik Jonsson Chair
Department Head of Mechanical Engineering
University of Texas, Dallas

Optimization and Control of Wind Energy Systems

Time and place: Friday, November 3, 2017 - 10:30 a.m.
McDonnell Douglas Engineering Auditorium (MDEA)

Abstract Wind technology is a major player in utility-scale renewable energy for the production of electricity around the globe. Many countries share the strategic goal of increasing the penetration of wind energy into the electric grids. In the U.S. alone, the goal is to increase from 82 gigawatts (GW) of wind power, supplying 5-6 percent of the electricity demand in 2016, to 400 GW of wind power contributing 35 percent of the electricity by 2050. Attaining this goal would require a continued decrease in the cost of wind power. Arguably, advanced modeling and simulation, flow monitoring, and advanced controls are key to reducing the cost of wind energy. This talk will provide an overview of the work being done at the University of Texas, Dallas, in these areas. It will show how the convergence of high-fidelity simulations, reduced-order models, field measurements (blending LiDAR technology with SCADA and met tower data) and advanced controls may yield increases in the annual energy production and reliability of wind turbines and wind farms, which are important factors in reducing the cost of wind energy.

Bio: Mario A. Rotea is the holder of the Erik Jonsson Chair in engineering and computer science at the University of Texas, Dallas, where he is also the department head of mechanical engineering. Rotea spent 17 years at Purdue University as a professor of aeronautics and astronautics, developing and teaching methods for the analysis and design of control systems. He also worked for the United Technologies Research Center as senior research engineer on advanced control systems for helicopters, gas turbines and machine tools. Rotea was the head of the Mechanical and Industrial Engineering Department at the University of Massachusetts, Amherst, where he expanded the department in the area of wind energy and applications of industrial engineering to the health care sector. His career includes terms as director of the control systems program and division director of engineering education and centers at the National Science Foundation. Rotea is cofounder of WindSTAR, an NSF Industry-University Cooperative Research Center aimed at bringing together academia and industry to advance wind energy through industry-relevant research and education. Rotea joined UT Dallas in 2009 to serve as professor and inaugural head of the then newly created mechanical engineering department. He directed the department’s rapid growth, increasing student enrollment from 10 students to more than 1,100 in 2017. Rotea is a fellow of the IEEE for contributions to robust and optimal control of multivariable systems. Rotea graduated with a degree in electronic engineering from the University of Rosario. He received a master’s degree in electrical engineering and his doctorate in control science and dynamical systems from the University of Minnesota.

October 17, 2017
Bin Hu
University of Wisconsin, Madison

Dissipativity theory for optimization and machine learning research

Time and place: October 17, 2017, 11:30am
McDonnell Douglas Engineering Auditorium (MDEA)

Abstract Empirical risk minimization (ERM) is a central topic in machine learning research, and is typically solved using first-order optimization methods whose convergence proofs are derived in a case-by-case manner. In this talk, we will present a simple routine that unifies the analysis of such optimization methods, including the gradient descent method, Nesterov's accelerated method, stochastic gradient descent (SGD), stochastic average gradient (SAG), SAGA, Finito, stochastic dual coordinate ascent (SDCA), stochastic variance reduction gradient (SVRG), and SGD with momentum. Specifically, we will view all of these optimization methods as dynamical systems and then use a unified dissipativity approach to derive sufficient conditions for convergence rate certifications of such dynamical systems. The derived conditions are all in the form of linear matrix inequalities (LMIs). We solve these LMIs and obtain analytical proofs of new convergence rates for various optimization methods (with or without individual convexity). Our proposed analysis can be automated for a large class of first-order optimization methods under various assumptions. In addition, the derived LMIs can always be solved numerically to provide clues for the construction of analytical proofs.
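The flavor of the LMI recipe can be seen in its smallest instance (a sketch, not code from the talk): for scalar gradient descent x+ = x - alpha*grad f(x) on an m-strongly convex, L-smooth f, the gradient's sector condition (u - mx)(Lx - u) >= 0 enters as an S-procedure multiplier, and feasibility of one 2x2 matrix inequality certifies a contraction rate rho. The search below numerically recovers the classical rate (L-m)/(L+m) at step size 2/(m+L); the grid and tolerance are illustrative choices.

```python
import numpy as np

def certifies(rho, alpha, m, L):
    """Search the S-procedure multiplier lam >= 0 for a certificate (with
    storage function P = 1) that x+ = x - alpha*grad f(x) contracts at
    rate rho for every m-strongly convex, L-smooth f."""
    for lam in np.linspace(0.0, 0.2, 20001):
        M = np.array([[1 - rho**2 - lam*m*L, -alpha + lam*(m + L)/2],
                      [-alpha + lam*(m + L)/2, alpha**2 - lam]])
        # LMI feasible (M negative semidefinite): the rate rho is certified
        if np.all(np.linalg.eigvalsh(M) <= 1e-9):
            return True
    return False

m, L = 1.0, 10.0
alpha = 2 / (m + L)              # classical step size
rho_star = (L - m) / (L + m)     # known convergence rate for this step
assert certifies(rho_star + 0.005, alpha, m, L)       # just above: feasible
assert not certifies(rho_star - 0.02, alpha, m, L)    # below: infeasible
```

The same pattern scales up: richer methods (momentum, variance reduction) add states to the dynamical system and constraints to the quadratic supply rate, but the certificate remains a small LMI.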

Bio: Bin Hu received his B.Sc. in Theoretical and Applied Mechanics from the University of Science and Technology of China, his M.S. in Computational Mechanics from Carnegie Mellon University, and his Ph.D. in Aerospace Engineering and Mechanics from the University of Minnesota, where he was advised by Peter Seiler. He is currently a postdoctoral researcher in the optimization group of the Wisconsin Institute for Discovery at the University of Wisconsin-Madison, working with Laurent Lessard and collaborating closely with Stephen Wright. He is interested in building connections between control theory and machine learning research. His current research focuses on tailoring robust control theory (integral quadratic constraints, dissipation inequalities, jump system theory, etc.) to unify the study of stochastic optimization methods (stochastic gradient, stochastic average gradient, SAGA, SVRG, Katyusha momentum, etc.) and their applications in related machine learning problems (logistic regression, deep neural networks, matrix completion, etc.).

May 19, 2017
Mihailo Jovanovic
Ming Hsieh Department of Electrical Engineering
Director, Center for Systems and Control
Viterbi School of Engineering
University of Southern California

Controller Architectures: Tradeoffs between Performance and Complexity

This talk describes the design of controller architectures that achieve a desired tradeoff between the performance of distributed systems and controller complexity. Our methodology consists of two steps: first, we design the controller architecture by incorporating regularization functions into the optimal control problem; second, we optimize the controller over the identified architecture. For large-scale networks of dynamical systems, the desired structural property is captured by limited information exchange between the physical and controller layers, and the regularization term penalizes the number of communication links. In the first step, the controller architecture is designed using a customized proximal augmented Lagrangian algorithm. This method exploits the separability of the sparsity-promoting regularization terms and transforms the augmented Lagrangian into a form that is continuously differentiable and can be efficiently minimized using a variety of methods. Although structured optimal control problems are, in general, nonconvex, we identify classes of convex problems that arise in the design of symmetric systems, undirected consensus and synchronization networks, optimal selection of sensors and actuators, and decentralized control of positive systems. Examples are provided to demonstrate the effectiveness of the framework.
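The separability that proximal methods exploit is easiest to see for the l1 (sparsity-promoting) penalty, whose proximal operator is elementwise soft-thresholding: small feedback gains are driven exactly to zero, which is what removes communication links. A minimal sketch (the gain matrix and weight below are illustrative values, not from the talk):

```python
import numpy as np

def prox_l1(V, gamma):
    """Proximal operator of gamma * ||.||_1: elementwise soft-thresholding.
    Separable across entries, which is the property the proximal augmented
    Lagrangian algorithm exploits."""
    return np.sign(V) * np.maximum(np.abs(V) - gamma, 0.0)

# hypothetical feedback gain matrix; each entry is one communication link
K = np.array([[0.90, -0.05,  0.00],
              [0.02,  1.10, -0.60]])
K_sparse = prox_l1(K, 0.1)
# entries with |K_ij| <= 0.1 are zeroed (link removed); others shrink by 0.1
```

Iterating this prox step against the closed-loop performance gradient, then polishing the surviving nonzero pattern, is the two-step design the abstract describes.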

Time and place: May 19, 2017, 2:00pm
McDonnell Douglas Engineering Auditorium (MDEA)

May 10, 2017
Christian Grussler
Lund University

Low-Rank Inducing Norms with Optimality Interpretations

This talk is on optimization problems which are convex apart from a sparsity/rank constraint. These problems are often found in the context of compressed sensing, linear regression, matrix completion, low-rank approximation, and many more. Today, one of the most widely used methods for solving these problems is so-called nuclear norm regularization. Despite its nice probabilistic guarantees, this approach often fails for problems with structural constraints.

In this talk, we will present an alternative by introducing the family of so-called low-rank inducing norms as convexifiers. Each norm is the convex envelope of a unitarily invariant norm plus a rank constraint. Therefore, they have several interesting properties, which will be discussed throughout the talk. They:

i. Give a simple deterministic test of whether the solution to the convexified problem is a solution to a specific non-convex problem.

ii. Often find solutions where the nuclear norm fails to give low-rank solutions.

iii. Allow us to analyze the convergence of non-convex proximal splitting algorithms with convex analysis tools.

iv. Provide a more efficient regularization than the traditional scalar multiplication of the nuclear norm.

v. Lead to a different interpretation of the nuclear norm than the one that is traditionally presented.

Moreover, all of the results can be generalized to so-called atomic norms.
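For reference, the nuclear-norm baseline the talk contrasts against is easy to state: the proximal operator of the nuclear norm is singular value thresholding, which shrinks all singular values uniformly and zeroes the small ones. A minimal sketch (the matrix and threshold below are illustrative assumptions, not from the talk):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# illustrative matrix with singular values 3, 1, and 0.2
X = np.diag([3.0, 1.0, 0.2, 0.0, 0.0])
X_hat = svt(X, 0.5)
# the 0.2 singular value is zeroed, leaving a rank-2 matrix;
# the surviving values 3 and 1 are also shrunk to 2.5 and 0.5
```

The uniform shrinkage of the large singular values is exactly the bias that low-rank inducing norms are designed to avoid: by convexifying "unitarily invariant norm plus rank constraint" directly, they can enforce a target rank without distorting the retained part of the spectrum.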

Time and place: May 10, 2017, 11:00am
EG 3161

January 19, 2017
Dr. Ge Chen
Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, P.R. China

Analysis and control on collective behavior of some random complex systems

Abstract: Complex systems are found almost everywhere in nature and in human social and economic systems, and so have generated considerable interest among researchers from various fields. A central issue in the study of complex systems is understanding how local interactions among the elements lead to collective behavior of the whole group. However, the analysis of complex systems is usually difficult, and essentially the only existing method is to construct a Lyapunov function. Based on the Lyapunov method, we first put forward quantitative analysis and optimization methods, which we applied to several biological and engineering systems. We then moved beyond the Lyapunov method and proposed a new analysis method, which we applied to several biological and social systems.

Time and place: January 19, 2017, 2:00pm
McDonnell Douglas Engineering Auditorium (MDEA)

November 4, 2016
Professor Brad Paden
University of California, Santa Barbara, and LaunchPoint Technologies Inc.

Adventures in Mechatronics

This talk aims to illustrate the creativity, challenge, and professional enjoyment associated with the invention, design, and control of mechatronic systems. Example systems include a life-saving locomotive bumper, a magnetically-coupled MEMS sensor, an electromagnetic launch system, an energy storage system, a magnetic bearing system, a pediatric maglev artificial heart, a guided-catheter system, and a high-speed switching mechanism. While modeling, control, and optimization are essential ingredients in mechatronic systems, the large design and application spaces of mechatronic systems compel us to place a high value on innovation at the level of system architectures; this point is illustrated throughout the talk.

Time and place: November 4, 2016, 10:30am
McDonnell Douglas Engineering Auditorium (MDEA)

September 13, 2016
Professor Na Li
Harvard University

Distributed Energy Management with Limited Communication

A major issue in future power grids is how intelligent devices and independent producers can adjust their power consumption and production to achieve near-maximum efficiency for the power network. Limited communication between devices and producers necessitates an approach in which the elements of the network act autonomously, with limited information and communication, yet achieve near-optimal performance. In this talk, I will present our recent work on distributed energy management with limited communication. In particular, I will show how we can extract information from physical measurements and recover information from local computation. We will also investigate the minimum amount of communication needed to achieve optimal energy management, and study how limited communication affects the convergence rate of the distributed algorithms. We will conclude the talk with a discussion of challenges and opportunities in distributed optimization and control for future grids.
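A classical instance of coordination with minimal communication is dual decomposition (a generic sketch under assumed quadratic device costs, not the speaker's algorithm): a coordinator broadcasts a single price, each device responds with its locally optimal power, and the price is updated from the aggregate imbalance alone. No device ever reveals its cost function.

```python
import numpy as np

# hypothetical quadratic costs a_i * p_i^2 for three producers (assumed values)
a = np.array([1.0, 2.0, 0.5])
D = 10.0      # total demand to be met
lam = 0.0     # broadcast price signal (the dual variable)

for _ in range(500):
    # each device i locally solves: min_p  a_i * p^2 - lam * p
    p = lam / (2 * a)
    # coordinator raises the price when supply falls short of demand
    lam += 0.1 * (D - p.sum())

# at the fixed point, total supply matches demand under one shared price
```

The only message each round is the scalar price and the scalar imbalance; the tradeoff the abstract raises is how shrinking or quantizing this exchange slows the convergence of such iterations.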

Time and place: Tuesday, September 13, 2016, 2:00pm-3:00pm
McDonnell Douglas Engineering Auditorium (MDEA)