Iterative Learning Control (eBook)

An Optimization Paradigm
eBook Download: PDF
2015 | 1st ed. 2016
XXVIII, 456 pages
Springer London (publisher)
978-1-4471-6772-3 (ISBN)

David H. Owens
€128.39 incl. VAT
  • Download available immediately

This book develops a coherent and quite general theoretical approach to algorithm design for iterative learning control based on operator representations and quadratic optimization concepts, including the related ideas of inverse model control and gradient-based design. Concentrating initially on linear, discrete-time systems, the author gives the reader access to theories based on either signal or parameter optimization. Although the two approaches are shown to be related in a formal mathematical sense, the text presents them separately because their algorithm design issues are distinct and give rise to different performance capabilities.
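To make the distinction concrete, here is a brief sketch in Python contrasting the two paradigms on a lifted (supervector) model of a discrete-time plant of the kind the book builds its discrete theory on: a signal-optimization update driven by the gradient of the squared tracking-error norm, and a single-parameter update whose gain is re-optimized on every trial. The plant data, step size and weight w are illustrative assumptions, not examples from the book.

```python
import numpy as np

# Lifted "supervector" model y = G u of a discrete-time SISO plant over an
# N-sample trial: G is the lower-triangular Toeplitz matrix of Markov
# parameters. The state space data below is an arbitrary stable example.
N = 50
A = np.array([[0.9, 1.0], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

h = [(C @ np.linalg.matrix_power(A, j) @ B).item() for j in range(N)]
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])
r = np.sin(np.linspace(0.0, 2.0 * np.pi, N))        # reference trajectory

# Signal optimization (steepest descent): u <- u + beta * G^T e is the
# gradient step for the objective 0.5 * ||r - G u||^2.
beta = 1.0 / np.linalg.norm(G, 2) ** 2              # conservative step size
u = np.zeros(N)
for k in range(200):
    e = r - G @ u
    u = u + beta * (G.T @ e)
print("steepest-descent ILC, final error norm:", np.linalg.norm(r - G @ u))

# Parameter optimization: structurally the same update u <- u + beta_k * K e,
# but the scalar beta_k is chosen afresh on each trial to minimize
# ||e_next||^2 + w * beta^2, which has the closed-form minimizer below.
K, w = G.T, 1e-3                                    # illustrative choices
u = np.zeros(N)
for k in range(200):
    e = r - G @ u
    Ge = G @ (K @ e)                                # plant response to the update direction K e
    beta_k = (e @ Ge) / (Ge @ Ge + w)
    u = u + beta_k * (K @ e)
print("parameter-optimal ILC, final error norm:", np.linalg.norm(r - G @ u))
```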

Together with algorithm design, the text demonstrates the underlying robustness of the paradigm and presents new algorithms that can incorporate input and output constraints, reconfigure systematically to meet the requirements of different reference and auxiliary signals, support new properties such as spectral annihilation, and underpin local convergence of nonlinear iterative control. Simulation and application studies illustrate algorithm properties and performance in systems such as gantry robots and other electromechanical and/or mechanical systems.
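As a hint of how constraint handling can enter such iterations, the following sketch applies a projected-gradient variant: an unconstrained update step followed by projection of the input onto a box constraint, in the spirit of the successive-projection viewpoint developed later in the book. The plant, bound u_max and step size are again illustrative assumptions.

```python
import numpy as np

# Input-constrained ILC as a projected update: take an unconstrained
# gradient step, then project the input back onto the constraint set
# Omega = {u : |u(t)| <= u_max} (a box, so the projection is clipping).
N, u_max = 40, 0.8
h = 0.5 * 0.8 ** np.arange(N)                       # toy Markov parameters
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])
r = np.ones(N)                                      # step reference

beta = 1.0 / np.linalg.norm(G, 2) ** 2
u = np.zeros(N)
for k in range(300):
    e = r - G @ u
    u = np.clip(u + beta * (G.T @ e), -u_max, u_max)  # step, then project

print("constrained ILC, final error norm:", round(np.linalg.norm(r - G @ u), 4))
```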

Iterative Learning Control will interest academics and graduate students working in control, who will find it a useful reference on the current status of a powerful and increasingly popular method of control. The depth of background theory and the links to practical systems will be of use to engineers responsible for precision repetitive processes.



Professor Owens has 40 years of experience of control engineering theory and applications in areas including nuclear power, robotics and mechanical test. He has extensive teaching experience at both undergraduate and postgraduate levels in three UK universities. His research has included multivariable frequency domain theory and design, the theory of multivariable root loci, contributions to robust control theory, theoretical methods for controller design based on plant step data, and involvement in aspects of adaptive control, model reduction and optimization-based design. The area of his research that specifically underpins the text is his experience of modelling and analysis of systems with repetitive dynamics. Originally arising in the control of underground coal cutters, his theory of 'multipass processes' (developed in 1976, with follow-on applications introduced by J.B. Edwards) laid the foundation for analysis and design in this area and in others, including metal rolling and automated agriculture. This work led to substantial contributions (with collaborator E. Rogers and others) in the area of repetitive control systems (as part of 2D systems theory) and, more specifically, since 1996, in the area of iterative learning control, where he introduced the use of optimization to the ILC community in the form of 'norm optimal iterative learning control'. Since that time he has continued to teach and research in areas related to this topic, adding considerable detail and depth to the approach and introducing the ideas of parameter optimal iterative learning to simplify implementations. This led to his development of a wide range of new algorithms, supporting analysis and applications to mechanical test. The work is also being applied to the development of data analysis tools for control in gantry robots and stroke rehabilitation equipment by collaborators at Southampton University. Work with S. Daley has also seen applications in automotive test at Jaguar and related industrial sites.
David Owens was elected a Fellow of the Royal Academy of Engineering for his contributions to knowledge in these and other areas.
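For readers meeting the idea for the first time: in the lifted setting, norm optimal ILC chooses each new input to minimize the tracking-error norm plus the norm of the input change from the previous trial, which has a closed-form solution and a monotonically decreasing error norm. Below is a minimal sketch assuming a toy plant and unit weights (the book treats general Q and R weightings); it is an illustration of the idea, not the book's algorithm statement.

```python
import numpy as np

# Norm optimal ILC (NOILC) in supervector form. Each trial solves
#     u_{k+1} = argmin_u ||r - G u||^2 + ||u - u_k||^2,
# whose solution is u_{k+1} = u_k + G^T (I + G G^T)^{-1} e_k and yields
# the monotone error recursion e_{k+1} = (I + G G^T)^{-1} e_k.
N = 40
h = 0.5 * 0.8 ** np.arange(N)                       # toy Markov parameters
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])
r = np.ones(N)

I = np.eye(N)
u, err_norms = np.zeros(N), []
for k in range(10):
    e = r - G @ u
    err_norms.append(np.linalg.norm(e))
    u = u + G.T @ np.linalg.solve(I + G @ G.T, e)   # one NOILC trial

print("monotonically decreasing error norms:")
print([round(v, 4) for v in err_norms])
```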

Series Editors’ Foreword 7
Preface 10
Acknowledgments 17
Contents 19
1 Introduction 27
1.1 Control Systems, Models and Algorithms 28
1.2 Repetition and Iteration 29
1.2.1 Periodic Demand Signals 29
1.2.2 Repetitive Control and Multipass Systems 30
1.2.3 Iterative Control Examples 32
1.3 Dynamical Properties of Iteration: A Review of Ideas 35
1.4 So What Do We Need? 38
1.4.1 An Overview of Mathematical Techniques 39
1.4.2 The Conceptual Basis for Algorithms 41
1.5 Discussion and Further Background Reading 42
2 Mathematical Methods 44
2.1 Elements of Matrix Theory 44
2.2 Quadratic Optimization and Quadratic Forms 52
2.2.1 Completing the Square 52
2.2.2 Singular Values, Lagrangians and Matrix Norms 53
2.3 Banach Spaces, Operators, Norms and Convergent Sequences 54
2.3.1 Vector Spaces 54
2.3.2 Normed Spaces 56
2.3.3 Convergence, Closure, Completeness and Banach Spaces 58
2.3.4 Linear Operators and Dense Subsets 59
2.4 Hilbert Spaces 62
2.4.1 Inner Products and Norms 62
2.4.2 Norm and Weak Convergence 64
2.4.3 Adjoint and Self-adjoint Operators in Hilbert Space 66
2.5 Real Hilbert Spaces, Convex Sets and Projections 71
2.6 Optimal Control Problems in Hilbert Space 73
2.6.1 Proof by Completing the Square 75
2.6.2 Proof Using the Projection Theorem 76
2.6.3 Discussion 77
2.7 Further Discussion and Bibliography 78
3 State Space Models 80
3.1 Models of Continuous State Space Systems 82
3.1.1 Solution of the State Equations 83
3.1.2 The Convolution Operator and the Impulse Response 84
3.1.3 The System as an Operator Between Function Spaces 84
3.2 Laplace Transforms 85
3.3 Transfer Function Matrices, Poles, Zeros and Relative Degree 86
3.4 The System Frequency Response 88
3.5 Discrete Time, Sampled Data State Space Models 89
3.5.1 State Space Models as Difference Equations 89
3.5.2 Solution of Linear, Discrete Time State Equations 90
3.5.3 The Discrete Convolution Operator and the Discrete Impulse Response Sequence 91
3.6 𝒵-Transforms and the Discrete Transfer Function Matrix 92
3.6.1 Discrete Transfer Function Matrices, Poles, Zeros and the Relative Degree 93
3.6.2 The Discrete System Frequency Response 94
3.7 Multi-rate Discrete Time Systems 95
3.8 Controllability, Observability, Minimal Realizations and Pole Allocation 95
3.9 Inverse Systems 97
3.9.1 The Case of m = ℓ, Zeros and ?* 97
3.9.2 Left and Right Inverses When m ≠ ℓ 99
3.10 Quadratic Optimal Control of Linear Continuous Systems 101
3.10.1 The Relevant Operators and Spaces 101
3.10.2 Computation of the Adjoint Operator 103
3.10.3 The Two Point Boundary Value Problem 106
3.10.4 The Riccati Equation and a State Feedback Plus Feedforward Representation 107
3.10.5 An Alternative Riccati Representation 109
3.11 Further Reading and Bibliography 110
4 Matrix Models, Supervectors and Discrete Systems 112
4.1 Supervectors and the Matrix Model 112
4.2 The Algebra of Series and Parallel Connections 113
4.3 The Transpose System and Time Reversal 114
4.4 Invertibility, Range and Relative Degrees 115
4.4.1 The Relative Degree and the Kernel and Range of G 117
4.4.2 The Range of G and Decoupling Theory 118
4.5 The Range and Kernel and the Use of the Inverse System 121
4.5.1 A Partition of the Inverse 121
4.5.2 Ensuring Stability of P⁻¹(z) 123
4.6 The Range, Kernel and the 𝒞* Canonical Form 124
4.6.1 Factorization Using State Feedback and Output Injection 124
4.6.2 The 𝒞* Canonical Form 125
4.6.3 The Special Case of Uniform Rank Systems 127
4.7 Quadratic Optimal Control of Linear Discrete Systems 129
4.7.1 The Adjoint and the Discrete Two Point Boundary Value Problem 130
4.7.2 A State Feedback/Feedforward Solution 131
4.8 Frequency Domain Relationships 132
4.8.1 Bounding Norms on Finite Intervals 133
4.8.2 Computing the Norm Using the Frequency Response 134
4.8.3 Quadratic Forms and Positive Real Transfer Function Matrices 135
4.8.4 Frequency Dependent Lower Bounds 137
4.9 Discussion and Further Reading 141
5 Iterative Learning Control: A Formulation 143
5.1 Abstract Formulation of a Design Problem 143
5.1.1 The Design Problem 144
5.1.2 Input and Error Update Equations: The Linear Case 147
5.1.3 Robustness and Uncertainty Models 148
5.2 General Conditions for Convergence of Linear Iterations 152
5.2.1 Spectral Radius and Norm Conditions 153
5.2.2 Infinite Dimensions with r(L) = ‖L‖ = 1 and L = L* 156
5.2.3 Relaxation, Convergence and Robustness 158
5.2.4 Eigenstructure Interpretation 162
5.2.5 Formal Computation of the Eigenvalues and Eigenfunctions 163
5.3 Robustness, Positivity and Inverse Systems 165
5.4 Discussion and Further Reading 167
6 Control Using Inverse Model Algorithms 169
6.1 Inverse Model Control: A Benchmark Algorithm 169
6.1.1 Use of a Right Inverse of the Plant 169
6.1.2 Use of a Left Inverse of the Plant 171
6.1.3 Why the Inverse Model Is Important 173
6.1.4 Inverse Model Algorithms for State Space Models 175
6.1.5 Robustness Tests and Multiplicative Error Models 176
6.2 Frequency Domain Robustness Criteria 180
6.2.1 Discrete System Robust Monotonicity Tests 180
6.2.2 Improving Robustness Using Relaxation 182
6.2.3 Discrete Systems: Robustness and Non-monotonic Convergence 183
6.3 Discussion and Further Reading 185
7 Monotonicity and Gradient Algorithms 188
7.1 Steepest Descent: Achieving Minimum Energy Solutions 189
7.2 Application to Discrete Time State Space Systems 191
7.2.1 Algorithm Construction 192
7.2.2 Eigenstructure Interpretation: Convergence in Finite Iterations 194
7.2.3 Frequency Domain Attenuation 197
7.3 Steepest Descent for Continuous Time State Space Systems 201
7.4 Monotonic Evolution Using General Gradients 203
7.5 Discrete State Space Models Revisited 206
7.5.1 Gradients Using the Adjoint of a State Space System 206
7.5.2 Why the Case of m = ℓ May Be Important in Design 215
7.5.3 Robustness Tests in the Frequency Domain 217
7.5.4 Robustness and Relaxation 220
7.5.5 Non-monotonic Gradient-Based Control and λ-Weighted Norms 221
7.5.6 A Steepest Descent Algorithm Using λ-Norms 226
7.6 Discussion, Comments and Further Generalizations 226
7.6.1 Bringing the Ideas Together? 227
7.6.2 Factors Influencing Achievable Performance 229
7.6.3 Notes on Continuous State Space Systems 230
8 Combined Inverse and Gradient Based Design 231
8.1 Inverse Algorithms: Robustness and Bi-directional Filtering 231
8.2 General Issues in Design 235
8.2.1 Pre-conditioning Control Loops 236
8.2.2 Compensator Structures 238
8.2.3 Stable Inversion Algorithms 240
8.2.4 All-Pass Networks and Non-minimum-phase Systems 241
8.3 Gradients, Compensation and Feedback Design Methods 248
8.3.1 Feedback Design: The Discrete Time Case 249
8.3.2 Feedback Design: The Continuous Time Case 251
8.4 Discussion and Further Reading 251
9 Norm Optimal Iterative Learning Control 254
9.1 Problem Formulation and Formal Algorithm 255
9.1.1 The Choice of Objective Function 255
9.1.2 Relaxed Versions of NOILC 257
9.1.3 NOILC for Discrete-Time State Space Systems 259
9.1.4 Relaxed NOILC for Discrete-Time State Space Systems 261
9.1.5 A Note on Frequency Attenuation: The Discrete Time Case 262
9.1.6 NOILC: The Case of Continuous-Time State Space Systems 263
9.1.7 Convergence, Eigenstructure, ε² and Spectral Bandwidth 265
9.1.8 Convergence: General Properties of NOILC Algorithms 269
9.2 Robustness of NOILC: Feedforward Implementation 273
9.2.1 Computational Aspects of Feedforward NOILC 274
9.2.2 The Case of Right Multiplicative Modelling Errors 275
9.2.3 Discrete State Space Systems with Right Multiplicative Errors 280
9.2.4 The Case of Left Multiplicative Modelling Errors 283
9.2.5 Discrete Systems with Left Multiplicative Modelling Errors 288
9.2.6 Monotonicity in 𝒴 with Respect to the Norm ‖·‖_𝒴 289
9.3 Non-minimum-phase Properties and Flat-Lining 290
9.4 Discussion and Further Reading 293
9.4.1 Background Comments 293
9.4.2 Practical Observations 294
9.4.3 Performance 295
9.4.4 Robustness and the Inverse Algorithm 295
9.4.5 Alternatives? 296
9.4.6 Q, R and Dyadic Expansions 297
10 NOILC: Natural Extensions 298
10.1 Filtering Using Input and Error Weighting 298
10.2 Multi-rate Sampled Discrete Time Systems 300
10.3 Initial Conditions as Control Signals 301
10.4 Problems with Several Objectives 305
10.5 Intermediate Point Problems 307
10.5.1 Continuous Time Systems: An Intermediate Point Problem 307
10.5.2 Discrete Time Systems: An Intermediate Point Problem 311
10.5.3 IPNOILC: Additional Issues and Robustness 311
10.6 Multi-task NOILC 314
10.6.1 Continuous State Space Systems 315
10.6.2 Adding Initial Conditions as Controls 320
10.6.3 Discrete State Space Systems 321
10.7 Multi-models and Predictive NOILC 322
10.7.1 Predictive NOILC: General Theory and a Link to Inversion 322
10.7.2 A Multi-model Representation 325
10.7.3 The Case of Linear, State Space Models 326
10.7.4 Convergence and Other Algorithm Properties 329
10.7.5 The Special Cases of M = 2 and M = ∞ 334
10.7.6 A Note on Robustness of Feedforward Predictive NOILC 336
10.8 Discussion and Further Reading 340
11 Iteration and Auxiliary Optimization 343
11.1 Models with Auxiliary Variables and Problem Formulation 343
11.2 A Right Inverse Model Solution 345
11.3 Solutions Using Switching Algorithms 347
11.3.1 Switching Algorithm Construction 347
11.3.2 Properties of the Switching Algorithm 348
11.3.3 Characterization of Convergence Rates 351
11.3.4 Decoupling Minimum Energy Representations from NOILC 353
11.3.5 Intermediate Point Tracking and the Choice G₁ = G 354
11.3.6 Restructuring the NOILC Spectrum by Choosing G₁ = Gₑ 355
11.4 A Note on Robustness of Switching Algorithms 358
11.5 The Switching Algorithm When GₑGₑ* Is Invertible 361
11.6 Discussion and Further Reading 364
12 Iteration as Successive Projection 367
12.1 Convergence Versus Proximity 367
12.2 Successive Projection and Proximity Algorithms 369
12.3 Iterative Control with Constraints 374
12.3.1 NOILC with Input Constraints 375
12.3.2 General Analysis 378
12.3.3 Intermediate Point Control with Input and Output Constraints 382
12.3.4 Iterative Control to Satisfy Auxiliary Variable Bounds 384
12.3.5 An Overview and Summary 386
12.4 "Iteration Management" by Operator Intervention 387
12.5 What Happens If S₁ and S₂ Do Not Intersect? 390
12.6 Discussion and Further Reading 393
13 Acceleration and Successive Projection 396
13.1 Replacing Plant Iterations by Off-Line Iterations 397
13.2 Accelerating Algorithms Using Extrapolation 397
13.2.1 Successive Projection and Extrapolation Algorithms 398
13.2.2 NOILC: Acceleration Using Extrapolation 400
13.3 A Notch Algorithm Using Parameterized Sets 402
13.3.1 Creating a Spectral Notch: Computation and Properties 402
13.3.2 The Notch Algorithm and Iterative Control Using Successive Projection 408
13.3.3 A Notch Algorithm for Discrete State Space Systems 412
13.3.4 Robustness of the Notch Algorithm in Feedforward Form 415
13.4 Discussion and Further Reading 420
14 Parameter Optimal Iterative Control 422
14.1 Parameterizations and Norm Optimal Iteration 422
14.2 Parameter Optimal Control: The Single Parameter Case 427
14.2.1 Alternative Objective Functions 427
14.2.2 Problem Definition and Convergence Characterization 429
14.2.3 Convergence Properties: Dependence on Parameters 432
14.2.4 Choosing the Compensator 434
14.2.5 Computing tr[Γ₀*Γ₀]: Discrete State Space Systems 435
14.2.6 Choosing Parameters in J(β) 437
14.2.7 Iteration Dynamics 439
14.2.8 Plateauing/Flatlining Phenomena 439
14.2.9 Switching Algorithms 444
14.3 Robustness of POILC: The Single Parameter Case 448
14.3.1 Robustness Using the Right Inverse 448
14.3.2 Robustness: A More General Case 450
14.4 Multi-Parameter Learning Control 452
14.4.1 The Form of the Parameterization 452
14.4.2 Alternative Forms for ?? and the Objective Function 453
14.4.3 The Multi-parameter POILC Algorithm 456
14.4.4 Choice of Multi-parameter Parameterization 458
14.5 Discussion and Further Reading 460
14.5.1 Chapter Overview 460
14.5.2 High Order POILC: A Brief Summary 462
References 463
Index 468

Publication date (per publisher) 31.10.2015
Series: Advances in Industrial Control
Additional information: XXVIII, 456 p.
Place of publication: London
Language: English
Subject areas: Computer Science > Theory / Studies > Algorithms
Computer Science > Theory / Studies > Artificial Intelligence / Robotics
Engineering > Electrical Engineering / Power Engineering
Engineering > Mechanical Engineering
Keywords: Control Applications • Control Engineering • Control Theory • Iterative Learning Control • Parameter Optimization • Signal Optimization
ISBN-10 1-4471-6772-4 / 1447167724
ISBN-13 978-1-4471-6772-3 / 9781447167723
PDF (watermarked)
Size: 4.7 MB

DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized to you. If the eBook is improperly passed on to third parties, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suitable for technical books with columns, tables and figures. A PDF can be displayed on almost all devices, but is only suitable to a limited extent for small screens (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. It is not, however, compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer, e.g. the free Adobe Digital Editions app.

Additional feature: online reading
In addition to the download, you can also read this eBook online in your web browser.

Buying eBooks from abroad
For tax law reasons we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
