Human-Robot Interaction Control Using Reinforcement Learning
IEEE Press Series on Systems Science and Engineering

Format: Hardback, 288 pages
Published: United States, 5 November 2021


Our Price: £94.52
Elsewhere: £117.95
Save £23.43 (20%)
Ships from Australia. Estimated delivery date: 17th Apr - 25th Apr.
Free Shipping Worldwide


Product Description

A comprehensive exploration of the control schemes of human-robot interactions.

In Human-Robot Interaction Control Using Reinforcement Learning, an expert team of authors delivers a concise overview of human-robot interaction control schemes and insightful presentations of novel model-free and reinforcement learning controllers. The book begins with a brief introduction to state-of-the-art human-robot interaction control and reinforcement learning before moving on to describe the typical environment model. The authors also describe some of the best-known identification techniques for parameter estimation.

Human-Robot Interaction Control Using Reinforcement Learning offers rigorous mathematical treatments and demonstrations that facilitate the understanding of control schemes and algorithms. It also presents stability and convergence analysis of human-robot interaction control and reinforcement learning-based control. The authors discuss advanced and cutting-edge topics, such as inverse and velocity kinematics solutions, H2 neural control, and likely upcoming developments in the field of robotics.

Readers will also enjoy:

* A thorough introduction to model-based human-robot interaction control
* Comprehensive explorations of model-free human-robot interaction control and human-in-the-loop control using Euler angles
* Practical discussions of reinforcement learning for robot position and force control, as well as continuous-time reinforcement learning for robot force control
* In-depth examinations of robot control in worst-case uncertainty using reinforcement learning and the control of redundant robots using multi-agent reinforcement learning

Perfect for senior undergraduate and graduate students, academic researchers, and industrial practitioners studying and working in the fields of robotics, learning control systems, neural networks, and computational intelligence, Human-Robot Interaction Control Using Reinforcement Learning is also an indispensable resource for students and professionals studying reinforcement learning.
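As a rough illustration of the kind of methods the description refers to (the impedance environment model of Chapter 2 and the temporal-difference learning reviewed in Appendix B), the following minimal Python sketch shows a one-degree-of-freedom impedance/admittance model together with a one-step tabular Q-learning update. The parameter values, state discretization, and function names are illustrative assumptions only and are not taken from the book.

import numpy as np

# Standard 1-DOF impedance/admittance environment model (illustrative values):
#   M_d * x_dd + B_d * x_d + K_d * x = f_ext
M_d, B_d, K_d = 1.0, 8.0, 50.0   # desired inertia, damping, and stiffness
dt = 0.01                        # integration step in seconds

def admittance_step(x, xd, f_ext):
    # One semi-implicit Euler step of the desired impedance dynamics.
    xdd = (f_ext - B_d * xd - K_d * x) / M_d
    xd_next = xd + dt * xdd
    x_next = x + dt * xd_next
    return x_next, xd_next

# One-step tabular Q-learning (temporal-difference) update:
alpha, gamma = 0.1, 0.95         # learning rate and discount factor
Q = np.zeros((10, 3))            # 10 discretized states, 3 discrete actions

def q_update(s, a, r, s_next):
    # Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

In the book itself, reinforcement learning is applied to robot position/force and interaction control rather than to a toy lookup table; the sketch only conveys the general shape of the environment model and of the temporal-difference update rule.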



Product Details
EAN: 9781119782742
ISBN: 1119782740
Dimensions: 22.9 x 15.2 x 1.8 centimeters (0.67 kg)

Table of Contents

Author Biographies xi

List of Figures xiii

List of Tables xvii

Preface xix

Part I Human-Robot Interaction Control 1

1 Introduction 3

1.1 Human-Robot Interaction Control 3

1.2 Reinforcement Learning for Control 6

1.3 Structure of the Book 7

References 10

2 Environment Model of Human-Robot Interaction 17

2.1 Impedance and Admittance 17

2.2 Impedance Model for Human-Robot Interaction 21

2.3 Identification of Human-Robot Interaction Model 24

2.4 Conclusions 30

References 30

3 Model Based Human-Robot Interaction Control 33

3.1 Task Space Impedance/Admittance Control 33

3.2 Joint Space Impedance Control 36

3.3 Accuracy and Robustness 37

3.4 Simulations 39

3.5 Conclusions 42

References 44

4 Model Free Human-Robot Interaction Control 45

4.1 Task-Space Control Using Joint-Space Dynamics 45

4.2 Task-Space Control Using Task-Space Dynamics 52

4.3 Joint Space Control 53

4.4 Simulations 54

4.5 Experiments 55

4.6 Conclusions 68

References 71

5 Human-in-the-loop Control Using Euler Angles 73

5.1 Introduction 73

5.2 Joint-Space Control 74

5.3 Task-Space Control 79

5.4 Experiments 83

5.5 Conclusions 92

References 94

Part II Reinforcement Learning for Robot Interaction Control 97

6 Reinforcement Learning for Robot Position/Force Control 99

6.1 Introduction 99

6.2 Position/Force Control Using an Impedance Model 100

6.3 Reinforcement Learning Based Position/Force Control 103

6.4 Simulations and Experiments 110

6.5 Conclusions 117

References 117

7 Continuous-Time Reinforcement Learning for Force Control 119

7.1 Introduction 119

7.2 K-means Clustering for Reinforcement Learning 120

7.3 Position/Force Control Using Reinforcement Learning 124

7.4 Experiments 130

7.5 Conclusions 136

References 136

8 Robot Control in Worst-Case Uncertainty Using Reinforcement Learning 139

8.1 Introduction 139

8.2 Robust Control Using Discrete-Time Reinforcement Learning 141

8.3 Double Q-Learning with k-Nearest Neighbors 144

8.4 Robust Control Using Continuous-Time Reinforcement Learning 150

8.5 Simulations and Experiments: Discrete-Time Case 154

8.6 Simulations and Experiments: Continuous-Time Case 161

8.7 Conclusions 170

References 170

9 Redundant Robots Control Using Multi-Agent Reinforcement Learning 173

9.1 Introduction 173

9.2 Redundant Robot Control 175

9.3 Multi-Agent Reinforcement Learning for Redundant Robot Control 179

9.4 Simulations and Experiments 183

9.5 Conclusions 187

References 189

10 Robot H2 Neural Control Using Reinforcement Learning 193

10.1 Introduction 193

10.2 H2 Neural Control Using Discrete-Time Reinforcement Learning 194

10.3 H2 Neural Control in Continuous Time 207

10.4 Examples 219

10.5 Conclusion 229

References 229

11 Conclusions 233

A Robot Kinematics and Dynamics 235

A.1 Kinematics 235

A.2 Dynamics 237

A.3 Examples 240

References 246

B Reinforcement Learning for Control 247

B.1 Markov decision processes 247

B.2 Value functions 248

B.3 Iterations 250

B.4 TD learning 251

Reference 258

Index 259

About the Authors

WEN YU, PhD, is Professor and Head of the Departamento de Control Automático at the Centro de Investigación y de Estudios Avanzados, Instituto Politécnico Nacional (CINVESTAV-IPN), Mexico City, Mexico. He is a co-author of Modeling and Control of Uncertain Nonlinear Systems with Fuzzy Equations and Z-Number.

ADOLFO PERRUSQUÍA, PhD, is a Research Fellow in the School of Aerospace, Transport, and Manufacturing at Cranfield University in Bedford, UK.

Item ships from and is sold by Fishpond Retail Limited.
