Advances in Robot Learning
8th European Workshop on Learning Robots, EWLR-8, Lausanne, Switzerland, September 18, 1999, Proceedings
2000
Springer Berlin (publisher)
978-3-540-41162-8 (ISBN)
Robot learning is an exciting and interdisciplinary field. This state is reflected in the range and form of the papers presented here. Techniques that have become well established in robot learning are present: evolutionary methods, neural network approaches, reinforcement learning; as are techniques from control theory, logic programming, and Bayesian statistics. It is notable that in many of the papers presented in this volume several of these techniques are employed in conjunction. In papers by Nehmzow, Grossmann and Quoy, neural networks are utilised to provide landmark-based representations of the environment, but different techniques are used in each paper to make inferences based on these representations. Biology continues to provide inspiration for the robot learning researcher. In their paper, Peter Eggenberger et al. borrow ideas about the role of neuromodulators in switching neural circuits. These are combined with standard techniques from artificial neural networks and evolutionary computing to provide a powerful new algorithm for evolving robot controllers. In the final paper in this volume, Bianco and Cassinis combine observations about the navigation behaviour of insects with techniques from control theory to produce their visual landmark learning system. Hopefully this convergence of engineering and biological approaches will continue. A rigorous understanding of the ways techniques from these very different disciplines can be fused is an important challenge if progress is to continue. All these papers are also testament to the utility of using robots to study intelligence and adaptive behaviour.
Contents:
Map Building through Self-Organisation for Robot Navigation
Learning a Navigation Task in Changing Environments by Multi-task Reinforcement Learning
Toward Seamless Transfer from Simulated to Real Worlds: A Dynamically-Rearranging Neural Network Approach
How Does a Robot Find Redundancy by Itself?
Learning Robot Control by Relational Concept Induction with Iteratively Collected Examples
Reinforcement Learning in Situated Agents: Theoretical Problems and Practical Solutions
A Planning Map for Mobile Robots: Speed Control and Paths Finding in a Changing Environment
Probabilistic and Count Methods in Map Building for Autonomous Mobile Robots
Biologically-Inspired Visual Landmark Learning for Mobile Robots
Publication date (per publisher) | 11.10.2000
Series | Lecture Notes in Artificial Intelligence / Lecture Notes in Computer Science
Additional info | VIII, 172 p.
Place of publication | Berlin
Language | English
Dimensions | 155 x 233 mm
Weight | 314 g
Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics; Engineering ► Electrical Engineering / Energy Technology
Keywords | Algorithmic Learning • Autonomous Robot • Autonomous Robots • Intelligent Agents • Intelligent Robots • Learning • Machine Learning • Mobile Robot • Mobile Robots • Navigation • Reinforcement Learning • Robot • Robot Control • Robotics • Robot Learning • Robot Navigation
ISBN-10 | 3-540-41162-3 / 3540411623
ISBN-13 | 978-3-540-41162-8 / 9783540411628