Global localisation is a fundamental and challenging problem in robotics. This paper presents a new method for mobile robots to recognise scenes using a single camera and natural landmarks. In a learning step, the robot is manually guided along a path while a video sequence is acquired with a front-looking camera. To reduce perceptual aliasing among easily confused features, we propose a modified visual feature descriptor that combines colour information with local structure. A location-feature vocabulary model is built for each individual location by an unsupervised learning algorithm. While travelling, the robot uses each detected interest point to vote for the most likely location. In cases of perceptual aliasing caused by dynamic change or visual similarity, a Bayesian filter increases the robustness of location recognition. Experiments show that the proposed feature substantially reduces incorrect matches and that the method performs reliably.
International Journal of Advanced Mechatronic Systems – Inderscience Publishers
Published: Jan 1, 2010
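The recognition loop described in the abstract — interest points voting for locations through a per-location vocabulary, with a discrete Bayesian filter smoothing the votes against perceptual aliasing — can be sketched as follows. This is a minimal illustration under assumed data structures; the word counts, transition matrix, and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def vote_likelihood(feature_words, vocab_counts, n_locations, eps=1e-3):
    """Turn per-feature vocabulary matches into a likelihood over locations.

    feature_words : list of visual-word ids detected in the current frame
    vocab_counts  : (n_locations, n_words) array of word frequencies
                    learned per location (assumed training output)
    """
    votes = np.full(n_locations, eps)  # small floor avoids zero probabilities
    for w in feature_words:
        col = vocab_counts[:, w].astype(float)
        total = col.sum()
        if total > 0:
            votes += col / total  # each feature votes for its likely locations
    return votes / votes.sum()

def bayes_update(belief, likelihood, transition):
    """One step of a discrete Bayes filter over locations."""
    predicted = transition @ belief      # prediction from location transitions
    posterior = predicted * likelihood   # measurement (voting) update
    return posterior / posterior.sum()

# Toy example: 3 locations, 4 visual words (illustrative numbers only).
vocab = np.array([[5, 0, 1, 0],
                  [0, 6, 0, 1],
                  [1, 0, 4, 4]])
# Staying at the current location is most likely between frames.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
belief = np.ones(3) / 3.0
for frame_words in [[0, 2], [0], [2, 3]]:
    lik = vote_likelihood(frame_words, vocab, n_locations=3)
    belief = bayes_update(belief, lik, T)
print(int(np.argmax(belief)))  # index of the most likely location
```

Filtering the per-frame votes this way lets a frame dominated by ambiguous features be corrected by the transition prior, which is the role the abstract assigns to the Bayesian filter.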