Data-Driven Advanced Control of Nonlinear Systems: A Neural Network Approach

Silvio Simani, University of Ferrara, Department of Engineering


Summary

The proposed activity can be considered a specific module of a more general, advanced course on nonlinear control for complex systems: it proposes the use and design of advanced tools for the control of nonlinear dynamic processes, starting from their input and output measurements. The designed data-driven controllers are based on neural networks trained with the input-output data acquired from the closed-loop process.
The main goals of the course therefore consist of introducing advanced strategies and tools for the analysis and control of complex dynamic systems. As a by-product, the activity introduces the structure of artificial neural networks (both static and quasi-static), their basic elements, their learning capabilities, and their training procedures. The rationale for using them to design data-driven neural controllers within a closed-loop scheme is also addressed. The extensive use of hands-on and guided experiences will allow the students to explore the challenging features of neural network structures, their learning capabilities, and their automatic training from data in the MATLAB and Simulink environments.


Learning Goals

The main acquired knowledge will be:

  • knowledge related to the analysis of nonlinear dynamic systems in steady and transient states, and of their advanced simulation tools;
  • basic knowledge of software tools for the simulation of nonlinear dynamic systems;
  • fundamentals of neural networks and nonlinear methods for control, and the basics of optimization methods and tools;
  • the elements of a neural network, the linear adaptive neuron, and the training mechanisms of a single neuron and of neural networks;
  • fundamentals of simulation and control for nonlinear dynamic systems.

These topics will be delivered via online teaching, while the use of MATLAB and Simulink reinforces the learning of these theoretical issues.

The basic acquired skills (i.e., the capacity to apply the acquired knowledge) will be:

  • analysis of the behaviour of nonlinear systems in steady-state and dynamic conditions;
  • design of a nonlinear controller for a given nonlinear dynamic system in order to meet prescribed transient and steady-state constraints;
  • design of the most suitable control solutions by means of neural networks;
  • training of neural networks for control purposes;
  • identification of the most suitable nonlinear elements, as well as of the most suitable parameters, for a specific control design and its application;
  • use of numerical simulation programs to analyse nonlinear systems.

MATLAB represents the key point of this activity, as it supports the design of closed-loop control schemes (through Simulink). Moreover, the exploited control strategies, which use tools borrowed from the Artificial Intelligence framework, help the students understand the learning of a single neuron and its basic capabilities, as well as the training procedure of a neural network learning from input-output data. They also provide the tools for the complete implementation of a neural network controller in a closed-loop scheme. Therefore, this activity helps to develop and improve higher-order thinking skills such as computation, data analysis, and complex model development. Other 'soft skills' will be enhanced by the teaching strategy, in particular the use of flipped classes.

Context for Use

This activity is intended for MSc students (in the first or second year of their MSc degree). It is suggested to deliver this activity via flipped classes: the fundamentals of advanced control and their basic tools are addressed via online teaching, whilst hands-on and practical experiences with PCs are held in the lab, since these represent the key point in enabling the students to acquire fundamental skills and improve soft skills, such as discussion and critical capabilities.
The activity requires about 15 hours of classroom activity (online teaching) and 4 hours of laboratory experience (in-person classes) by means of hands-on and guided problem-solving projects.
It is assumed that the students are familiar with concepts already acquired in basic courses on Fundamentals of Informatics, Fundamentals of Automatic Control, Automatic Control, and Digital Control Systems. Moreover, the following background is required: mathematics (differential and integral calculus); physics; dynamic systems, their behaviour, and their practical applications; methods for analysing dynamic systems in steady and transient states; and the ability to analyse and design digital systems.
The proposed activity can be a specific module of a more general course, for example on Nonlinear Control or Adaptive Control, which could be suitable for Electronics, Informatics, and Mechanics Engineering. Moreover, the students should be familiar with the basic concepts of MATLAB and Simulink, which are assumed to have been acquired in fundamental courses during the second and third years of their BSc studies.

Description and Teaching Materials

The teaching material will cover the following issues:

  • Introduction to neural networks
  • Issues in neural networks
  • Simple neural networks: the perceptron and adaptive linear neurons
  • The multilayer perceptron: basics
  • Genetic algorithms
  • Radial basis function networks
  • Application examples

The following slides (in English) will be used during the online teaching:

http://www.silviosimani.it/lecture-NN-2010.pdf

Other bibliographical references for further readings are also suggested:

  • Neural Networks for Identification, Prediction, and Control, by Duc Truong Pham and Xing Liu. Springer Verlag, December 1995. ISBN: 3540199594.
  • Nonlinear Identification and Control: A Neural Network Approach, by G. P. Liu. Springer Verlag, October 2001. ISBN: 1852333421.
  • Multi-Objective Optimization Using Evolutionary Algorithms, by Kalyanmoy Deb. John Wiley & Sons, Ltd, Chichester, England, 2001.
  • Nonlinear Virtual Reference Feedback Tuning: Application of Neural Networks to Direct Controller Design. Document in PDF format (346 KB): http://www.silviosimani.it/neural_controller.pdf

Note that MATLAB is well suited for neural network training. Automatic learning of neural networks is a technique that is gaining a foothold across multiple disciplines, enabling, for example, self-driving cars, predictive fault monitoring and predictive maintenance of dynamic processes, as well as time series forecasting in economic markets and other use cases. MATLAB requires fewer lines of code to build machine learning and deep learning models, without the user needing to be a specialist in the underlying techniques, such as optimisation algorithms. MATLAB provides a suitable environment for neural network learning, through to model training and deployment; Simulink, on the other hand, allows for the design of the closed-loop scheme that contains the neural controller and the mathematical representation of the controlled process.

The following aspects represent the fundamental features of MATLAB that make it well suited to the design of neural controllers.

  • MATLAB provides domain-specific apps for actively pre-processing data sets from different domains. Users can visualize, check, and fix problems in the data before training the neural network, build complex network architectures, or modify trained networks for transfer learning.
  • MATLAB can deploy deep learning models everywhere, including parallel computing platforms, embedded C code, enterprise systems, and the cloud. It offers high performance: a user can generate code that exploits optimised libraries for different microprocessor architectures, in order to build deployable models with high-performance inference.
  • The Neural Network and Deep Learning Toolboxes implement a framework for composing and running different neural network architectures, with algorithms, pre-trained models, and apps. A user can apply static, quasi-static, and dynamic neural networks to perform classification and regression on different data sets. Apps and plots help users visualize activations, edit network architectures, and monitor training progress (a minimal sketch of this workflow is shown after this list).
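
As an illustration, the following minimal sketch shows the basic toolbox workflow for fitting a nonlinear function; the function names (feedforwardnet, train) are standard Neural Network / Deep Learning Toolbox calls, while the data set and the network size are illustrative assumptions, not part of the course files.

    % Fit a noisy nonlinear function with a one-hidden-layer network.
    x = linspace(-1, 1, 201);               % input samples (assumed data)
    t = sin(2*pi*x) + 0.05*randn(size(x));  % noisy nonlinear target
    net = feedforwardnet(10);               % 10 hidden neurons (assumed size)
    net = train(net, x, t);                 % Levenberg-Marquardt by default
    y = net(x);                             % simulate the trained network
    plot(x, t, '.', x, y, '-')              % compare data and network fit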

Once the neural network architecture is selected, back-propagation is the method implemented in artificial neural networks to calculate the gradient needed to update the weights of the network. Back-propagation is shorthand for the "backward propagation of errors", since an error is calculated at the output and distributed backwards throughout the network's layers. It is commonly used to train neural networks. The activity shows how the back-propagation strategy generalizes the delta rule to multi-layer feed-forward networks, which is made possible by applying the chain rule to iteratively compute the gradients for each layer. It is closely related to the Gauss-Newton algorithm and is the subject of continuing research on neural back-propagation. Therefore, the use of MATLAB will enable the students to learn these key features of artificial neural networks, and how to implement these tools as control solutions in closed-loop schemes.
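
To make the mechanism concrete, a minimal hand-written back-propagation sketch is reported below; the XOR data set, the network size, and the learning rate are illustrative assumptions, not taken from the course material.

    % One hidden layer, sigmoid activations, batch gradient descent on
    % the squared error for the XOR problem.
    X = [0 0 1 1; 0 1 0 1];                 % inputs (2 x 4)
    T = [0 1 1 0];                          % XOR targets
    rng(0);
    W1 = randn(3, 2); b1 = randn(3, 1);     % hidden layer: 3 neurons (assumed)
    W2 = randn(1, 3); b2 = randn(1, 1);     % output layer: 1 neuron
    sig = @(z) 1./(1 + exp(-z));
    eta = 0.5;                              % learning rate (assumed)
    for epoch = 1:10000
        % forward pass
        A1 = sig(W1*X + b1);                % hidden activations
        A2 = sig(W2*A1 + b2);               % network output
        % backward pass: chain rule applied layer by layer
        d2 = (A2 - T) .* A2 .* (1 - A2);    % output-layer delta
        d1 = (W2' * d2) .* A1 .* (1 - A1);  % hidden delta (generalized delta rule)
        % gradient descent weight updates
        W2 = W2 - eta * d2 * A1';  b2 = b2 - eta * sum(d2, 2);
        W1 = W1 - eta * d1 * X';   b1 = b1 - eta * sum(d1, 2);
    end
    disp(A2)                                % outputs should approach [0 1 1 0]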

Teaching Notes and Tips

It is assumed that the students are already familiar with linear control and basic control schemes. Therefore, the key aspect of this activity regards the training of neural networks and their use as neural controllers.

Fitting a neural network involves using a training dataset to update the model weights so as to create a good mapping from inputs to outputs. This training process is carried out by an optimization algorithm that searches through the space of possible values of the neural network weights for a set that results in good performance on the training dataset. The proposed activity frames the challenge of training a neural network as an optimization problem.

After the development of this activity, the students will know:

  • Training a neural network involves using an optimization algorithm to find a set of weights to best map inputs to outputs.
  • The problem is hard, not least because the error surface is non-convex, contains local minima and flat spots, and is highly multidimensional.
  • The stochastic gradient descent algorithm is, in practice, the most effective general algorithm for addressing this challenging problem.

Therefore, the key point is to present learning as an optimisation problem. This step can be performed once the students have a clear picture of the capabilities of a single neuron, the tasks that a single neuron can accomplish, and the organisation of single neurons into multi-layer structures.

Neural network models learn to map inputs to outputs given a training dataset of examples. The training process involves finding a set of weights in the network that proves to be good, or good enough, at solving the specific problem. This training process is iterative, meaning that it progresses step by step, with small updates to the model weights at each iteration and, in turn, a change in the performance of the model at each iteration. The iterative training process thus solves an optimization problem that searches for the parameters (model weights) resulting in a minimum error or loss when evaluating the examples in the training dataset. Optimization is a directed search procedure, and the optimization problem we wish to solve when training a neural network model is very challenging.

The best general algorithm known for solving this problem is stochastic gradient descent, where model weights are updated in each iteration using the back-propagation of error algorithm. An optimization process can be understood conceptually as a search through a landscape for a candidate solution that is sufficiently satisfactory. A point on the landscape is a specific set of weights for the model, and the elevation of that point is an evaluation of the set of weights, where valleys represent good models with small values of loss. This is a common conceptualisation of optimization problems and the landscape is referred to as an "error surface." The optimization algorithm iteratively steps across this landscape, updating the weights and seeking out good or low elevation areas. For simple optimization problems, the shape of the landscape is a big bowl and finding the bottom is easy, so easy that very efficient algorithms can be designed to find the best solution. These types of optimization problems are referred to mathematically as convex.

The error surface we wish to navigate when optimising the weights of a neural network is not a bowl shape. It is a landscape with many hills and valleys. These types of optimization problems are referred to mathematically as non-convex.
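
The basic update rule is w <- w - eta * dJ/dw, where J is the loss and eta the learning rate. The following minimal sketch illustrates, on an assumed 1-D non-convex loss (not taken from the course files), how gradient descent can land in different valleys depending on the starting point.

    % Gradient descent on a non-convex 1-D "error surface" with two valleys.
    J   = @(w) w.^4 - 3*w.^2 + w;           % illustrative loss function
    dJ  = @(w) 4*w.^3 - 6*w + 1;            % its gradient
    eta = 0.01;                             % learning rate (step size)
    for w0 = [-2, 2]                        % two different starting points
        w = w0;
        for k = 1:200
            w = w - eta*dJ(w);              % gradient descent update
        end
        fprintf('start %+d -> w = %+.3f, J(w) = %+.3f\n', w0, w, J(w));
    end
    % The two runs converge to different minima; only one is the global one.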

The challenging nature of the optimization problem to be solved when training neural networks has implications for training models in practice. In general, (stochastic) gradient descent is the best algorithm available, but this algorithm makes no guarantees. At the end of this activity, when the students have designed the neural network for controlling a dynamic process using the Simulink tool, the following aspects will be clear.

  • Possibly questionable solution quality. The optimization process may or may not find a good solution, and solutions can only be compared relatively, due to deceptive local minima.
  • Possibly long training time. The optimization process may take a long time to find a satisfactory solution, due to the iterative nature of the search.
  • Possible failure. The optimization process may fail to progress (get stuck) or fail to locate a viable solution, due to the presence of flat regions.

The task of effective training is to carefully configure, test, and tune the parameters of the model and the learning process itself to best address this challenge.

Thankfully, the MATLAB and Simulink environments can dramatically simplify the search space and accelerate the search process, often allowing the discovery of models that are much larger and perform better than previously thought possible.
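
As a deliberately simplified preview of the design task, the sketch below trains a network as an approximate inverse model of a nonlinear plant and then uses it as the controller in a discrete-time closed loop; the plant equation, the excitation signal, and the network size are illustrative assumptions, not the actual project files.

    % Assumed nonlinear plant: y(k+1) = 0.5*y(k) + u(k)/(1 + y(k)^2)
    plant = @(y, u) 0.5*y + u./(1 + y.^2);

    % Step 1: collect open-loop input-output data.
    N = 2000;
    u = 2*rand(1, N) - 1;                   % random excitation in [-1, 1]
    y = zeros(1, N+1);
    for k = 1:N
        y(k+1) = plant(y(k), u(k));
    end

    % Step 2: train an approximate inverse model: given the current output
    % and the desired next output, predict the control that produces it.
    X = [y(1:N); y(2:N+1)];                 % inputs: y(k) and y(k+1)
    T = u;                                  % target: the applied input u(k)
    net = feedforwardnet(10);
    net = train(net, X, T);

    % Step 3: close the loop with the network acting as controller.
    r  = ones(1, 100);                      % constant reference
    yc = zeros(1, 101);
    for k = 1:100
        uc = net([yc(k); r(k)]);            % network computes the control
        yc(k+1) = plant(yc(k), uc);
    end
    plot(yc), title('Closed-loop response with neural controller')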


Assessment

A final exam will verify to what extent the learning objectives described above have been achieved. The examination consists of two parts.

  • A Simulink project regarding the simulation and the control design for a nonlinear system using the MATLAB and Simulink environments, which aims at assessing whether the student has acquired skills in the analysis and synthesis of a complex process.
  • A test (with open and multiple-choice questions) on the basic concepts of the activity, with the aim of evaluating how deeply the student has studied the subject and how well he or she is able to understand the topics analysed.

References and Resources

The page that discusses the specific activity is available at the link:

http://www.silviosimani.it/lessons29.html

In particular, the following resources support students using the activity:

  • List of demos for the perceptron neural network example: "demop1", classification with a 2-input perceptron; "demop6", linearly non-separable input vectors; and, selected from "nndtoc": "nnd3pc", perceptron classification (fruit example); "nnd4db", perceptron decision boundary; and "nnd4pr", perceptron rule (a minimal perceptron sketch in the same spirit is shown after this list).
  • Examples taken from the MATLAB File Exchange website: (i) implementation of a two-layer, two-neuron network; (ii) multi-layer perceptron training with variable learning rate; (iii) character recognition GUI. (http://www.silviosimani.it/NN_examples01.zip)
  • Three examples of radial basis function (RBF) neural networks taken from the Neural Network Design Table of Contents ("nndtoc", Chapter 11): "demorb1", "demorb3", and "demorb4", with different types and numbers of radial basis functions.
  • Examples of nonlinear models and neural network training; zipped MATLAB and Simulink directories (http://www.silviosimani.it/nn_fuzzy_examples02.zip).
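
A minimal perceptron sketch in the same spirit as "demop1" is reported below; the data points are an illustrative assumption, not those used in the toolbox demo.

    % Train a perceptron on a small linearly separable problem.
    P = [-1 -1  1  1;                       % 2-input patterns
         -1  1 -1  1];
    T = [ 0  0  1  1];                      % class labels
    net = perceptron;                       % create the perceptron network
    net = train(net, P, T);                 % perceptron learning rule
    disp(net(P))                            % should reproduce T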

Moreover, hands-on MATLAB and Simulink exercises that should help the students learn the design phases are available at the link:

http://www.silviosimani.it/lessons42.html

In particular, the following examples are considered:

  • Design example of a radial basis function neural network, applied to the example from "Neural Networks for Pattern Recognition", C. M. Bishop, Oxford University Press, 1995 (a minimal RBF training sketch using the toolbox is shown after this list).
  • Design of MLP neural networks for dynamic model estimation and residual generation: example of nonlinear process model derivation, implementation of the MLP design, and NN training.
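
To complement the RBF design example, a minimal sketch using the toolbox function "newrb" is reported below; the 1-D data set, error goal, and spread value are illustrative assumptions.

    % Fit a nonlinear 1-D function with a radial basis function network.
    x = -1:0.1:1;                           % training inputs (assumed)
    t = exp(-x.^2) .* sin(4*x);             % nonlinear target (assumed)
    net = newrb(x, t, 1e-4, 0.3);           % error goal = 1e-4, spread = 0.3
    xf = -1:0.01:1;                         % finer evaluation grid
    plot(x, t, 'o', xf, net(xf), '-')       % data vs. RBF network output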