Visually stimulated motor control for a robot with a pair of LGMD visual neural networks




Publisher
Inderscience Publishers
Copyright
Copyright © Inderscience Enterprises Ltd. All rights reserved
ISSN
1756-8412
eISSN
1756-8420
DOI
10.1504/IJAMECHS.2012.052219

Abstract

In this paper, we propose a visually stimulated motor control (VSMC) system for autonomous navigation of mobile robots. Inspired by the locust's motion-sensitive interneuron, the lobula giant movement detector (LGMD), the presented VSMC system enables a robot to explore local paths or interact with dynamic objects effectively using visual input only. The VSMC consists of a pair of LGMD visual neural networks and a simple motor command generator. Each LGMD processes images covering part of the wide field of view and extracts relevant visual cues. The outputs of the two LGMDs are compared and interpreted directly into executable motor commands. These motor commands are then executed by the robot's wheel control system in real time to generate the corresponding motion adjustments. Our experiments showed that this bio-inspired VSMC system worked well in different scenarios.
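The page gives no implementation details beyond the abstract, so the following is only a minimal sketch of the idea it describes: two LGMD-style units, each watching one half of the field of view, whose outputs are compared to produce differential wheel commands. The frame-differencing model with lateral inhibition, the sigmoid normalisation, and the names lgmd_output and wheel_speeds are assumptions for illustration, not the authors' actual networks or command generator.

import numpy as np

def lgmd_output(prev_frame, curr_frame, inhibition_strength=0.3):
    """Simplified LGMD-style response for one half of the visual field.

    Frame differencing stands in for photoreceptor excitation; a local
    average of neighbouring excitation acts as lateral inhibition. This is
    a common simplification of LGMD models, assumed here for illustration.
    """
    excitation = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    # Lateral inhibition: 3x3 local mean of the excitation layer.
    kernel = np.ones((3, 3)) / 9.0
    padded = np.pad(excitation, 1, mode="edge")
    inhibition = np.zeros_like(excitation)
    h, w = excitation.shape
    for dy in range(3):
        for dx in range(3):
            inhibition += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    # Sum the net excitation and squash it into (0, 1) with a sigmoid.
    summed = np.maximum(excitation - inhibition_strength * inhibition, 0).sum()
    return 1.0 / (1.0 + np.exp(-summed / excitation.size))

def wheel_speeds(left_lgmd, right_lgmd, base_speed=0.2, gain=0.5):
    """Map the two LGMD outputs to (left, right) wheel speeds for a
    differential-drive robot: a stronger response on one side speeds up
    that side's wheel, turning the robot away from the looming stimulus."""
    turn = gain * (left_lgmd - right_lgmd)
    return base_speed + turn, base_speed - turn

In this sketch, each camera frame is split into left and right halves, each half is fed to lgmd_output together with the previous frame, and the two responses are passed to wheel_speeds every control cycle; the real VSMC's motor command generator may map the two LGMD outputs to wheel commands differently.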

Journal

International Journal of Advanced Mechatronic Systems, Inderscience Publishers

Published: Jan 1, 2012
