Chapter eight

Predictive control

Section seven: THE USE OF TARGET INFORMATION WITHIN PREDICTIVE CONTROL

The earlier chapters developed predictive control algorithms which assumed the future target was constant. However, a well-publicised advantage of MPC is that it should be able to make systematic use of advance information about the target, or indeed disturbances, via feedforward action. This chapter investigates that claim and demonstrates some worrying features.

As a general rule, MPC algorithms do not use future target information well, and doing so often makes performance worse rather than better. One can improve the use of future target information with some rather simple pragmatic guidelines. It is not the job of this book to go further, and indeed systematic solutions are still rather under-discussed in the literature.

As usual, elementary MATLAB code is provided on the Google Site so that viewers can re-run and modify the examples given in the videos.

1. The feedforward term

Gives an overview of how the target information is absorbed into a GPC control law and thus impacts upon the choice of control move. Gives a few MATLAB simulations which demonstrate that the default solution often leads to rather poor performance, indeed much poorer than would result from assuming no feedforward information.

Watch a video talk through
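The site's examples are in MATLAB, but the mechanics are easy to reproduce. The Python/NumPy sketch below (with an illustrative first-order plant and tuning, not taken from the videos) builds the GPC prediction matrices and extracts the feedforward gains, that is, the weights the default law places on each future target sample.

```python
import numpy as np

# Hypothetical first-order plant y(k+1) = a*y(k) + b*u(k); the values
# and tuning are illustrative only, not the book's examples.
a, b = 0.9, 0.1
ny, nu, lam = 10, 3, 0.1        # output horizon, input horizon, weight

# Step-response coefficients g_i of the plant.
g = np.array([b * (1 - a**i) / (1 - a) for i in range(1, ny + 1)])

# Dynamic (Toeplitz) prediction matrix: y_future = H @ du + free response.
H = np.zeros((ny, nu))
for i in range(ny):
    for j in range(min(i + 1, nu)):
        H[i, j] = g[i - j]

# Unconstrained GPC law: du = K @ (r_future - f). The first row of K is
# the feedforward gain applied to the vector of future targets.
K = np.linalg.solve(H.T @ H + lam * np.eye(nu), H.T)
Pr = K[0]
print("feedforward gains on r(k+1..k+ny):", np.round(Pr, 3))
```

Inspecting Pr shows directly how much each future target sample influences the current control move in the default law.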

2. Understanding the feedforward term

Gives a simple insight into how a finite horizon MPC algorithm uses feedforward information and thus demonstrates that this leads to poorly posed optimisations when the input horizon is much shorter than the output horizon.

Watch a video talk through
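A minimal illustration of this, again in Python/NumPy with hypothetical numbers: with a single control move (nu = 1) against a long output horizon, the gain on target sample r(k+i) is proportional to the step-response coefficient g_i, so the most distant targets dominate the current move.

```python
import numpy as np

# Hypothetical first-order plant; a single control move (nu = 1)
# against a long output horizon exposes the poorly posed optimisation.
a, b = 0.9, 0.1
ny, nu, lam = 20, 1, 0.1
g = np.array([b * (1 - a**i) / (1 - a) for i in range(1, ny + 1)])
H = g.reshape(-1, 1)              # the one move affects every prediction
Pr = np.linalg.solve(H.T @ H + lam * np.eye(nu), H.T)[0]

# With nu = 1 the gain on r(k+i) is g_i / (sum(g^2) + lam), which grows
# with i: the law reacts now, and hardest, to the most distant targets,
# e.g. a setpoint change 20 samples away.
print(np.round(Pr, 3))
```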

3. Utilising feedforward in GPC

Builds on the insight of the previous section to show how varying the amount of advance information available to the feedforward has a significant impact on performance and on the definition of the underlying optimisation. Indicates that, typically, one can usefully exploit some advance information about the target, but not too much.

Watch a video talk through
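One way to see how the definition of the optimisation changes: if only n_adv future samples are known and the target is assumed constant thereafter, the tail of the feedforward gains lumps onto the last known sample. A sketch, using a hypothetical first-order plant and tuning:

```python
import numpy as np

# Hypothetical GPC setup (illustrative values only).
a, b = 0.9, 0.1
ny, nu, lam = 10, 3, 0.1
g = np.array([b * (1 - a**i) / (1 - a) for i in range(1, ny + 1)])
H = np.zeros((ny, nu))
for i in range(ny):
    for j in range(min(i + 1, nu)):
        H[i, j] = g[i - j]
Pr = np.linalg.solve(H.T @ H + lam * np.eye(nu), H.T)[0]

# If only n_adv future target samples are known and the target is
# assumed constant thereafter, the tail gains all lump onto the last
# known sample, changing the optimisation actually being solved.
n_adv = 3
Pr_eff = Pr[:n_adv].copy()
Pr_eff[-1] += Pr[n_adv:].sum()
print("full-horizon gains:", np.round(Pr, 3))
print("effective gains   :", np.round(Pr_eff, 3))
```

The total gain is preserved, but its distribution over the known samples, and hence the control move, changes with n_adv.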

4. Feedforward selection by trial and error

Building on the previous section, this video demonstrates that offline trial and error is a simple tool for establishing how much advance information can usefully be used by the feedforward. Critically, it is noted that no generic guidance exists, as the best number varies with the system dynamics, horizons and weights.

Watch a video talk through
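The trial-and-error procedure is straightforward to script. The sketch below (Python/NumPy, with a hypothetical first-order plant, tuning and target trajectory, not the book's MATLAB examples) closes the loop for each candidate amount of advance knowledge and records the closed-loop cost.

```python
import numpy as np

# Hypothetical first-order plant and GPC tuning (illustrative only).
a, b = 0.9, 0.1
ny, nu, lam = 10, 3, 0.1
g = np.array([b * (1 - a**i) / (1 - a) for i in range(1, ny + 1)])
H = np.zeros((ny, nu))
for i in range(ny):
    for j in range(min(i + 1, nu)):
        H[i, j] = g[i - j]
Pr = np.linalg.solve(H.T @ H + lam * np.eye(nu), H.T)[0]   # first-move law

def run(n_adv, T=40, step_at=15):
    """Closed-loop cost when only n_adv future target samples are known."""
    r = np.zeros(T + ny + 1)
    r[step_at:] = 1.0                       # target steps from 0 to 1
    y, u, J = 0.0, 0.0, 0.0
    for k in range(T):
        # known targets r(k+1..k+n_adv); held constant beyond that
        rv = np.array([r[k + min(i, n_adv)] for i in range(1, ny + 1)])
        # free response with the input held at its previous value
        f = np.array([a**i * y + g[i - 1] * u for i in range(1, ny + 1)])
        du = Pr @ (rv - f)
        u += du
        y = a * y + b * u
        J += (r[k + 1] - y) ** 2 + lam * du**2
    return J

costs = {n: run(n) for n in range(1, ny + 1)}
best = min(costs, key=costs.get)
print({n: round(c, 4) for n, c in costs.items()}, "best n_adv:", best)
```

Re-running with different dynamics, horizons or weights moves the best choice around, which is exactly why no generic guidance exists.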

5. Modifying the feedforward

This video is something of an aside and, for completeness, notes that one can change the parameterisation of the degrees of freedom within an MPC algorithm, which can lead to significant changes in performance and in the nature of the optimisation. However, such techniques have yet to reach maturity in the literature, so no firm conclusions are given here.

Watch a video talk through
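As one concrete, hedged instance of such a re-parameterisation, the sketch below uses input blocking, in which increments are permitted only at the start of each block. The video may well discuss other choices; the plant, tuning and block pattern here are purely illustrative.

```python
import numpy as np

# Hypothetical first-order plant (illustrative values only).
a, b = 0.9, 0.1
ny, lam = 10, 0.1
g = np.array([b * (1 - a**i) / (1 - a) for i in range(1, ny + 1)])
Hfull = np.zeros((ny, ny))
for i in range(ny):
    for j in range(i + 1):
        Hfull[i, j] = g[i - j]

# Input blocking: increments over the horizon are restricted to block
# starts only, so du = M @ phi with far fewer degrees of freedom.
blocks = [1, 2, 3, 4]                 # block lengths (hypothetical), sum = ny
M = np.zeros((ny, len(blocks)))
row = 0
for j, blen in enumerate(blocks):
    M[row, j] = 1.0                   # one increment at each block start
    row += blen

Hb = Hfull @ M                        # predictions in terms of phi
K = np.linalg.solve(Hb.T @ Hb + lam * M.T @ M, Hb.T)
print("dof reduced from", ny, "to", M.shape[1], "; K shape:", K.shape)
```

The optimisation now has a different structure and fewer variables, which is the sense in which the parameterisation changes the nature of the problem.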

6. Feedforward with dual-mode approaches

This video gives a brief introduction to feedforward within dual-mode control and indicates that this issue has largely been ignored in the mainstream literature, partly because the algebra is messy and thus often unhelpful.

This video derives the algebra required to make use of advance target information within a dual-mode algorithm. It uses an autonomous model approach, although alternatives are possible, and specifically incorporates offset-free tracking in the steady state. Shows how the future target information enters the control law through a feedforward term exactly analogous to that in a finite horizon algorithm.

This video gives two investigations using MATLAB. First, it illustrates the numerical values of the feedforward term and shows that for OMPC these have some notable properties that can be used for error catching (and are of interest). Secondly, it demonstrates that insights similar to those given in the earlier videos also apply to OMPC, that is, some advance information can improve performance but too much can be counterproductive.
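The flavour of the autonomous model construction can be sketched briefly. Assuming a hypothetical one-state plant, a hypothetical fixed terminal law u = -Kx + k_r r(k+1), and a shift register holding the known future targets (with the last entry simply held), plant and reference combine into a single autonomous model z+ = Psi z; offset-free corrections are omitted here for brevity.

```python
import numpy as np

# Hypothetical 1-state plant x+ = A x + B u (illustrative values only).
A, B = np.array([[0.9]]), np.array([[0.1]])
na = 4                                # number of known future targets

# Shift matrix for the target register: [r2, r3, r4, r4] <- [r1, r2, r3, r4]
S = np.eye(na, k=1)
S[-1, -1] = 1.0                       # last known target is simply held

# Hypothetical fixed (mode-2) law u = -K x + k_r r(k+1); closed loop plus
# reference register form one autonomous model z+ = Psi z, z = [x; r(k+1..k+na)].
K, k_r = np.array([[2.0]]), 1.0
Psi = np.block([
    [A - B @ K, np.hstack([B * k_r, np.zeros((1, na - 1))])],
    [np.zeros((na, 1)), S],
])

z = np.concatenate([[0.5], [1.0, 1.0, 0.2, 0.2]])   # state + future targets
print(Psi @ z)    # one-step prediction: register shifts, state responds
```

Predictions over any horizon are then just powers of Psi applied to z, which is how the future target information ends up inside the dual-mode cost and its feedforward term.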