Fuzzy adaptive extended Kalman filter for robot 3D pose estimation



International Journal of Intelligent Unmanned Systems , Volume 6 (2): 19 – Apr 16, 2018

Abstract

Purpose

Estimating the pose, i.e. the position and orientation, of a moving object such as a robot is a necessary task for many applications, e.g. robot navigation control, environment mapping, and medical applications such as robotic surgery. The purpose of this paper is to introduce a novel method that fuses the information from several available sensors in order to improve on the pose estimated from any individual sensor and calculate a more accurate pose for the moving platform.

Design/methodology/approach

Pose estimation is usually done by collecting the data obtained from several sensors mounted on the object/platform and fusing the acquired information. Assuming the robot moves in a three-dimensional (3D) world, its pose is completely defined by six degrees of freedom (6DOF): three angles and three position coordinates. Some 3D sensors, such as IMUs and cameras, are widely used for 3D localization. Other sensors, like 2D Light Detection and Ranging (LiDAR), can give a very precise estimate in a 2D plane, but they are not employed for 3D estimation since the sensor cannot observe the full 6DOF. However, in some applications the robot moves almost on a plane for a considerable part of the interval between two sensor readings, e.g. a ground vehicle moving on a flat surface or a drone flying at a nearly constant altitude to collect visual data. This paper proposes a novel method using a fuzzy inference system that employs a 2D LiDAR within a 3D localization algorithm to improve pose estimation accuracy.

Findings

The method characterizes the trajectory of the robot and the sensor reliability between two readings and, based on this information, defines the weight of the 2D sensor in the final fused pose by adjusting the extended Kalman filter parameters. Simulation and real-world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Originality/value

To the best of the authors' knowledge, this is the first time a 2D LiDAR has been employed to improve 3D pose estimation in an unknown environment without any prior knowledge.
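The Findings describe weighting the 2D LiDAR in the fused estimate by adjusting extended Kalman filter parameters via a fuzzy inference system. The sketch below illustrates that general idea, not the paper's actual method: a fuzzy-style reliability weight is computed from how planar the motion was (the membership breakpoints, input variables `tilt_deg` and `dz`, and the choice to inflate the measurement noise covariance by `1/w` are all illustrative assumptions):

```python
import numpy as np

def fuzzy_weight(tilt_deg, dz):
    """Reliability weight in (0, 1] for the 2D LiDAR.

    Near-planar motion (small tilt, small altitude change) gives a weight
    close to 1; strongly non-planar motion drives it toward the floor value.
    Triangular memberships combined with a min-rule; breakpoints are
    illustrative, not taken from the paper.
    """
    mu_tilt = max(0.0, 1.0 - tilt_deg / 10.0)  # zero beyond ~10 deg of tilt
    mu_dz = max(0.0, 1.0 - abs(dz) / 0.2)      # zero beyond 0.2 m altitude change
    return max(1e-3, min(mu_tilt, mu_dz))      # floored so R stays finite

def ekf_update(x, P, z, H, R_base, w):
    """Standard EKF measurement update, with the LiDAR noise covariance
    inflated as the fuzzy weight w drops (w = 1 keeps the nominal R)."""
    R = R_base / w
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # state correction
    P = (np.eye(len(x)) - K @ H) @ P           # covariance update
    return x, P
```

When the weight is high, the LiDAR measurement pulls the state strongly toward the observation; when the motion is judged non-planar, the inflated covariance makes the same measurement nearly inconsequential, which is one simple way to realize the adaptive weighting the abstract describes.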


References (33)

Publisher
Emerald Publishing
Copyright
Copyright © Emerald Group Publishing Limited
ISSN
2049-6427
DOI
10.1108/IJIUS-12-2017-0014
Publisher site
See Article on Publisher Site


Journal

International Journal of Intelligent Unmanned Systems, Emerald Publishing

Published: Apr 16, 2018
