John Williamson, Steven Strachan, R. Murray-Smith (2006)
It's a long way to Monte Carlo: probabilistic display in GPS navigation
P. Voulgaris (1995)
On optimal ℓ∞ to ℓ∞ filtering. Automatica, 31
Klemen Trajkovski, Oskar Sterle, B. Stopar (2010)
Sturdy Positioning with High Sensitivity GPS Sensors Under Adverse Conditions. Sensors (Basel, Switzerland), 10
Alexander Kleiner, Dali Sun (2007)
Decentralized SLAM for pedestrians without direct communication. 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems
O. Cappé, S. Godsill, É. Moulines (2007)
An Overview of Existing Methods and Recent Advances in Sequential Monte Carlo. Proceedings of the IEEE, 95
(2012)
Information Website about the GPS System. Available online
R. Jirawimut, P. Ptasinski, V. Garaj, F. Cecelja, W. Balachandran (2001)
A method for dead reckoning parameter correction in pedestrian navigation system. IMTC 2001, Proceedings of the 18th IEEE Instrumentation and Measurement Technology Conference, 3
R. Szeliski, D. Scharstein (2002)
Symmetric Sub-Pixel Stereo Matching
Widyawan, M. Klepal, S. Beauregard (2008)
A Backtracking Particle Filter for fusing building plans with PDR displacement estimates. 2008 5th Workshop on Positioning, Navigation and Communication
O. Mezentsev, G. Lachapelle, J. Collin (2005)
Pedestrian Dead Reckoning - A Solution to Navigation in GPS Signal Degraded Areas? Geoinformatica, 59
Szymon Ceranka, M. Niedźwiecki (2003)
Application of particle filtering in navigation system for blind. Seventh International Symposium on Signal Processing and Its Applications, 2003, Proceedings, 2
(2009)
Generating Assisted Data
Michael Montemerlo, S. Thrun (2007)
FastSLAM: A Scalable Method for the Simultaneous Localization and Mapping Problem in Robotics
D. Howe, D. Allan, J. Barnes (1981)
Properties of Signal Sources and Measurement Methods
S. Kong (2011)
Statistical Analysis of Urban GPS Multipaths and Pseudo-Range Measurement Errors. IEEE Transactions on Aerospace and Electronic Systems, 47
J. Bancroft, G. Lachapelle (2011)
Data Fusion Algorithms for Multiple Inertial Measurement Units. Sensors (Basel, Switzerland), 11
(1997)
Transformation of Random Variables. In Introduction to Random Signals and Applied Kalman Filtering
Lei Fang, P. Antsaklis, L. Montestruque, M. McMickell, M. Lemmon, Yashan Sun, Hui Fang, Ioannis Koutroulis, M. Haenggi, Min Xie, Xiaojuan Xie (2005)
Design of a wireless assisted pedestrian dead reckoning system - the NavMote experience. IEEE Transactions on Instrumentation and Measurement, 54
Q. Ladetto, B. Merminod (2002)
An Alternative Approach to Vision Techniques - Pedestrian Navigation System based on Digital Magnetic Compass and Gyroscope Integration
S. Godha, G. Lachapelle, M. Cannon (2006)
Integrated GPS/INS System for Pedestrian Navigation in a Signal Degraded Environment
R. Jirawimut, M. Shah, P. Ptasinski, F. Cecelja, W. Balachandran (2000)
Integrated DGPS and Dead Reckoning for A Pedestrian Navigation System in Signal Blocked Environments
P. Baranski, M. Polanczyk, P. Strumiłło (2011)
Fusion of data from inertial sensors, raster maps and GPS for estimation of pedestrian geographic location in urban terrain. Metrology and Measurement Systems, 18
Richard Ong, M. Petovello, G. Lachapelle (2009)
Assessment of GPS/GLONASS RTK Under Various Operational Conditions
F. Cecelja, W. Balachandran, R. Jirawimut (2003)
A stereo vision system for pedestrian navigation. 17th International Conference on Applied Electromagnetics and Communications, 2003 (ICECom 2003)
A. Burguera, Yolanda Cid, G. Oliver (2009)
Sonar Sensor Models and Their Application to Mobile Robot Localization. Sensors (Basel, Switzerland), 9
S. Spek, J. Schaick, P. Bois, R. Haan (2009)
Sensing Human Activity: GPS Tracking. Sensors, 9
M. Brown, Darius Burschka, Gregory Hager (2003)
Advances in Computational Stereo. IEEE Trans. Pattern Anal. Mach. Intell., 25
Jeffrey Rodríguez, J. Aggarwal (1988)
Quantization error in stereo imaging. Proceedings CVPR '88: The Computer Society Conference on Computer Vision and Pattern Recognition
(2012)
User's Handbook on Datum Transformations Involving WGS 84. Available online: www.iho.int/iho pubs/standard
S. Taghipour, M. Meybodi, A. Taghipour (2008)
An Algorithm for Map Matching for Car Navigation System. 2008 3rd International Conference on Information and Communication Technologies: From Theory to Applications
(2001)
IEEE Std 528-2001 (Revision of IEEE Std 528-1994)
G. Girard, S. Côté, S. Zlatanova, Yannick Barette, J. St-Pierre, P. Oosterom (2011)
Indoor Pedestrian Navigation Using Foot-Mounted IMU and Portable Ultrasound Range Sensors. Sensors (Basel, Switzerland), 11
S. Park, Y. Suh (2010)
A Zero Velocity Detection Algorithm Using Inertial Sensors for Pedestrian Navigation Systems. Sensors (Basel, Switzerland), 10
O. Woodman (2007)
An introduction to inertial navigation
(2003)
A Method for Dead Reckoning Parameter Correction in Pedestrian Navigation System. IEEE Trans. Instrum. Measur., 52
E. Krakiwsky, C. Harris, R. Wong (1988)
A Kalman filter for integrating dead reckoning, map matching and GPS positioning. IEEE PLANS '88, Position Location and Navigation Symposium, Record: "Navigation into the 21st Century"
Joel Hesch, S. Roumeliotis (2007)
An Indoor Localization Aid for the Visually Impaired. Proceedings 2007 IEEE International Conference on Robotics and Automation
Marko Modsching, R. Kramer (2006)
Field trial on GPS Accuracy in a medium size city: The influence of built-up
Sensors 2012, 12, 6764-6801; doi:10.3390/s120606764. ISSN 1424-8220; www.mdpi.com/journal/sensors. Open Access Article.

Enhancing Positioning Accuracy in Urban Terrain by Fusing Data from a GPS Receiver, Inertial Sensors, Stereo-Camera and Digital Maps for Pedestrian Navigation

Przemyslaw Baranski and Pawel Strumillo

Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, 90-924 Lodz, Poland; E-Mail: pawel.strumillo@p.lodz.pl
Author to whom correspondence should be addressed; E-Mail: przemyslaw.baranski@p.lodz.pl; Tel.: +48-42-631-2685.

Received: 6 March 2012; in revised form: 19 April 2012 / Accepted: 29 April 2012 / Published: 25 May 2012

Abstract: The paper presents an algorithm for estimating a pedestrian location in an urban environment. The algorithm is based on the particle filter and uses different data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. Inertial sensors are used to estimate the relative displacement of a pedestrian. A gyroscope estimates the change in the heading direction. An accelerometer is used to count a pedestrian's steps and estimate their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics; e.g., it is assumed that a pedestrian cannot cross buildings, fences, etc. This limits the position inaccuracy to ca. 10 m. Incorporating depth estimates derived from a stereo camera, which are compared to a 3D model of the environment, has enabled a further reduction of positioning errors. As a result, for 90% of the time the algorithm is able to estimate a pedestrian location with an error smaller than 2 m, compared to an error of 6.5 m for navigation based solely on GPS.

Keywords: particle filtering; stereovision; digital maps; GPS; Monte Carlo; dead reckoning

1. Introduction

GPS-NAVSTAR (Global Positioning System-NAVigation Signal Timing And Ranging), popularly known as the GPS system, has gained considerable civilian interest since May 2000. Earlier, the system was practically reserved for military purposes; the positioning accuracy for the civilian sector was ca. 100 m due to an intentional error called Selective Availability. According to report [1], the horizontal positioning error is less than 17 m for 99% of the time in average conditions, or 17 m for 90% of the time in worse outdoor conditions. The error depends on many factors, like atmospheric conditions, sun activity, geographical location, terrain type, the satellites' constellation, etc. In an open space, positioning errors are ca. 2-3 m. However, in dense built-up areas the location error may reach 100 m [2,3] or even more [4]. The error is introduced by the multipath propagation of the signals transmitted by the satellites when there is no line of sight: a satellite signal bounces off the walls of a building before finding its way to a GPS receiver. The propagation time of the signal is delayed, and the GPS receiver miscalculates its location with reference to the satellites. There are many techniques to improve the location accuracy. Along coasts, for marine purposes, special ground DGPS (Differential GPS) reference stations broadcast differential corrections that allow a GPS receiver to eliminate tropospheric, ionospheric, ephemeris and clock errors. The overall error is reduced to 10 m, with accuracy decreasing by 1 m for each 150 km of distance from the reference station. The corrections are transmitted on a ca. 300 kHz carrier frequency. A receiver must, however, be equipped with an additional antenna [5]. A-GPS (Assisted GPS) is a technique that downloads from the Internet the data concerning the GPS satellite constellation [6].
Otherwise, a GPS receiver may require up to 12.5 minutes to receive the data about the satellite constellation. By employing A-GPS, the first fix is provided within a few seconds. Additionally, a GSM modem can support a GPS receiver with a rough location, obtained by measuring the strengths of signals from GSM base stations. Apart from L1 = 1,575 MHz, a second frequency, called L2C = 1,227 MHz, has been made available to the civilian sector with the aim of reducing the ionospheric and tropospheric errors, which are a well-defined function of frequency. Also, the so-called augmentation systems effectively reduce tropospheric and ionospheric errors by sending differential corrections from geostationary satellites directly to GPS receivers. The geostationary satellites also imitate GPS satellites and improve mainly the vertical accuracy. EGNOS (European Geostationary Navigation Overlay Service) works in Europe, WAAS (Wide Area Augmentation System) in the US, MSAS (Multi-functional Satellite Augmentation System) in Japan and GAGAN (GPS Aided Geo-Augmented Navigation) in India. The Real Time Kinematic (RTK) technique achieves millimetre accuracy in an open space and is designated for cartographic measurements. However, two GPS receivers are necessary. The base receiver is placed at a known position and calculates the tropospheric, ionospheric, ephemeris and clock errors of the satellites. The corrections are sent via a radio link to a mobile GPS receiver. In the trials reported in [4], an RMS error of 2 cm was noted on a mountain highway, whereas trials in urban canyons yielded a 50 m RMS error. All the above-mentioned techniques are helpless against multipath propagation errors. Algorithms harnessed in car navigation alleviate these errors by taking advantage of car kinematics and a comparatively sparse network of roads, as described, e.g., in [7] or [8]. An error of 20 m is of little importance to a driver.
In the case of navigating pedestrians, especially blind ones, the target accuracy should ideally not exceed the pavement width, i.e., ca. 2 m.

This work presents a navigation scheme using GPS readouts, digital maps, inertial sensors and stereovision images, with the aim of navigating a blind pedestrian. An accelerometer is used to detect the pedestrian's strides and estimate their lengths. A gyroscope serves to estimate the heading direction. These data sources help eliminate gross positioning errors and outliers. The digital maps are used in two ways. Firstly, a so-called probability map is built to eliminate improbable user transitions, like traversing water ponds, crossing walls and buildings, etc. Secondly, a 3D model of the environment is built. The model is compared to stereoscopic images recorded by a mobile stereo camera. Interestingly, this comparison of the 3D geometry of the environment provides good positioning accuracy in the surroundings of buildings, where GPS readouts are compromised. The proposed scheme employs the particle filter, also known as a sequential Monte Carlo method.

2. Related Work

The topic of pedestrian navigation, including navigation aids for blind pedestrians, has been described in many publications [9-11]. The first step to correct GPS readouts is to apply inertial sensors. Inertial sensors are mainly used in aviation to compute the orientation and position of an aircraft. Calculating the location requires double integration of the acceleration vector. Prior to these calculations the gravity acceleration must be removed, as accelerometers cannot distinguish gravity from the aircraft's accelerations. Therefore, the orientation of the aircraft with respect to the Earth's surface must be calculated from gyroscopes. The technique is known as INS (Inertial Navigation Systems) and requires very precise and expensive laser sensors that use the Sagnac effect.
However, a strict implementation of INS using MEMS (Micro-Electro-Mechanical Systems) sensors is useless after a few seconds due to errors growing quadratically with time [12]. The travelled distance of a pedestrian can be estimated with surprisingly good accuracy by measuring the lengths of steps. This is done by analysing the acceleration along the gravity axis [10,13]. As a person walks, the body undulates in unison with the strides. The technique is accurate to between 0.5% and 10%, depending on the gait style. The ZUPT (Zero Velocity Update) technique exploits the fact that a foot is at rest for a short period of each stride. The accelerometer must be mounted on a foot, which is an inconvenience, offset however by better accuracy compared to the previous method [14]. A heading direction can simply be read out from a magnetic compass, optionally supported by a gyroscope. A compass is sensitive to local distortions of the magnetic field due to cars, power lines, etc. [10]. An electric tram can compromise compass readouts within a radius of up to 100 m. A gyroscope, coupled via a Kalman filter, can reduce erroneous readouts [15]. The combination of GPS and inertial sensor readouts provides continuous estimates during GPS outages in harsh environments like tunnels, underground passages, dense urban areas, etc. Positioning data from these two sources are usually integrated by an Extended Kalman filter or a particle filter, which perform well when errors can be modelled by white noise, which has the property of being uncorrelated with itself. However, GPS errors are characterized by coloured noise [2,16]. This is because when a GPS receiver loses track of the satellites, its position is estimated using the history of previous locations. Secondly, signals from satellites occluded by the same building are equally delayed, which introduces a bias in a given direction. This feature of the errors corrupting GPS readouts was reported in earlier studies [14,17]. Jirawimut et al. [18] present an interesting concept where the heights of buildings are utilized to check whether a given satellite is occluded. The authors carried out simulations which yielded good results. To improve the accuracy of dead-reckoning techniques, an additional source of the absolute pedestrian location should be introduced. Experiments in [4] showed that a combination of the GPS and GLONASS navigation systems reduces the root-mean-square (RMS) error by half and outliers by several times. Another solution is to introduce the so-called probability map as a new source of data, which defines the regions of a terrain that the user is most likely to enter or cross. A pedestrian is more likely to walk along pavements, parking places, paths, etc., than to cross walls, water ponds, etc. This cuts down positioning errors as they start to aggregate. This map-based navigation concept has proved its usefulness in outdoor pedestrian applications [19,20] as well as in indoor positioning [21-23]. In built-up areas, a probability map limits the inaccuracy to the width of an urban canyon, usually delineated by buildings on both sides. A further improvement of positioning accuracy requires positioning with respect to known landmarks, i.e., the so-called exteroception. The accuracy of the landmarks' locations should be approximately an order of magnitude better than the target accuracy of ca. 2 m. A vision-based technique called SLAM (Simultaneous Localization and Mapping) makes it possible to build a map of the surrounding space and provide localization at the same time, without employing any global navigation method. A SLAM-based technique can be applied to build a 3D map of the environment. A review of SLAM techniques, modifications, results, etc., is given in [24]. An application of SLAM to pedestrian navigation, with results, is given in [25]. A navigation system for the blind that uses a stereo camera for tracking landmarks is presented in [26].
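To make the probability-map idea concrete, the sketch below weights particle-filter position hypotheses by a GPS likelihood multiplied by a map prior, so hypotheses that wander into buildings die out at resampling. The grid, noise levels and step model are illustrative assumptions, not the implementation described in this paper.

```python
import math
import random

random.seed(0)

# Hypothetical probability map with 1 m cells: 0 = impassable (building), 1 = walkable.
PROB_MAP = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 0, 0, 1, 1],
    [1, 1, 0, 0, 1, 1],
    [1, 1, 1, 1, 1, 1],
]

def map_prior(x, y):
    """Prior probability that a pedestrian occupies position (x, y) in metres."""
    col, row = math.floor(x), math.floor(y)
    if 0 <= row < len(PROB_MAP) and 0 <= col < len(PROB_MAP[0]):
        return float(PROB_MAP[row][col])
    return 0.0  # outside the mapped area

def pf_step(particles, step_len, heading, gps_xy, sigma_gps):
    """Predict with the dead-reckoned step, weight each hypothesis by the GPS
    likelihood times the map prior, then resample."""
    moved, weights = [], []
    for x, y in particles:
        d = step_len + random.gauss(0.0, 0.1)      # noisy step length
        h = heading + random.gauss(0.0, 0.05)      # noisy heading
        nx, ny = x + d * math.cos(h), y + d * math.sin(h)
        moved.append((nx, ny))
        r2 = (nx - gps_xy[0]) ** 2 + (ny - gps_xy[1]) ** 2
        weights.append(math.exp(-r2 / (2 * sigma_gps ** 2)) * map_prior(nx, ny))
    total = sum(weights) or 1.0
    return random.choices(moved, [w / total for w in weights], k=len(moved))

# Walk east along the corridor at y = 0.5 m while the GPS fix is biased
# 1.5 m into the building block: the map prior keeps the estimate out of it.
particles = [(0.5 + random.gauss(0, 0.3), 0.5 + random.gauss(0, 0.3))
             for _ in range(400)]
for k in range(4):
    gps = (1.5 + k, 2.0)  # biased fix inside the impassable cells
    particles = pf_step(particles, 1.0, 0.0, gps, sigma_gps=2.0)
est_y = sum(p[1] for p in particles) / len(particles)
```

Despite the biased GPS fixes, the averaged hypothesis stays in the walkable corridor, which is exactly the error-limiting effect the probability map provides.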
On the other hand, precise maps of outdoor or indoor environments may already be available, e.g., a plan of a building or plans of a city that are accurate to single centimetres, an accuracy hardly achievable by any SLAM technique. Different sensors are used to compare the environmental map with a robot's or pedestrian's location. Ultrasound sensors measure distance with 1% accuracy, while being very cheap and compact. The maximum achievable distance is ca. 10 m. The angular resolution, however, is poor, ca. 15°. Laser scanners provide 0.1% distance measurement accuracy with a maximum distance of ca. 50 m. The angular resolution is a fraction of a degree, and the scanning is omnidirectional. The power consumption is ca. 10 W for longer distances. The size of a laser sensor is around 5 cm by 5 cm by 5 cm. The cost is rather high (ca. $4,000 USD, depending on accuracy, maximum distance, etc.). These sensors are commonly used in robotics. Time-of-Flight (ToF) cameras offer parameters similar to laser sensors while providing a 3D reconstruction of the scanned environment. The angle of view is ca. 30°. The application of laser scanners and ToF cameras is limited to indoor environments due to sunlight interference. In [27], ultrasound and laser sensing techniques are compared for the indoor positioning of a robot. Work [23] presents an indoor localization system based on the particle filter. The user wears ultrasound sensors which measure the distance to adjacent walls, so the user location can be compared against the alignment of the building walls. The ultrasound sensor improved the positioning accuracy seven-fold in good trial conditions, i.e., with no additional objects in the building's corridor, closed doors, etc. A similar technique, using a laser sensor mounted on a white cane, is reported in [22]. The laser sensor detects the corners of corridors, which provides a comparison with the building's plan.
Finally, stereovision is a passive imaging technique that can be used indoors and outdoors. The cameras can be compact and inexpensive. However, stereovision imaging is limited to environments with good lighting conditions, requires calibration of the stereovision optics and offers worse depth estimation accuracy than the active methods outlined earlier [28]. Moreover, demanding computations are required to calculate depth maps. In stereovision, depth accuracy can range from a few centimetres for objects located within a few metres to a few metres for more distant objects. The so-called subpixel interpolation methods were proposed to improve depth estimation accuracy [29]. The work presented here combines global and local positioning techniques. GPS location estimates are augmented by dead reckoning techniques based on inertial sensors, i.e., gyroscopes and accelerometers. A stereo camera is used for distance measurements. Finally, a digital map of the outdoor terrain is incorporated into the algorithm. Each of the listed sources of positioning data features a different error characteristic; e.g., in open spaces GPS readouts are the most accurate, whereas stereovision may yield large depth estimation errors. Conversely, in city canyons, where GPS accuracy falters, stereovision and the digital map offer good positioning. A particle filtering algorithm is proposed to optimally fuse all data sources.

3. A Prototype System for Pedestrian Positioning

A block diagram of the proposed system is presented in Figure 1. From the hardware point of view, the system is made up of three parts: a PC platform, a stereo camera connected through a FireWire interface, and an electronic module housing a GPS receiver and a 6DOF sensor. The data was acquired by walking through the university campus with the stereo camera attached to the chest of an experimenter. The laptop stored the data from the electronic module and the stereo camera.
Then the data was processed off-line on a PC.

3.1. Electronic Module

The built electronic module, see Figure 2, is a dedicated PCB with a microcontroller, a 6DOF sensor and a GPS receiver. The microcontroller reads out data from the 6DOF sensor, an ADIS16355, at a rate of 820 Hz. Downsampling is carried out to limit the data stream to the PC. An 8th-order low-pass Chebyshev filter is used to avoid aliasing. Samples are sent to the computer at a rate of 205 Hz. The module sends data through the USB interface.

3.1.1. 6DOF Sensor

The 6DOF sensor, an ADIS16355 from Analog Devices, comprises a 3-axial accelerometer and a 3-axial gyroscope. The data from the sensor is read out through a digital SPI interface. The samples have 14-bit resolution, whereby in practice the three least significant bits are random. The ADIS16355 has since been superseded by the much less noisy ADIS16375. The static parameters of the former were investigated by means of the Allan variance [30].

Gyroscope. The gyroscope is used to estimate the pedestrian's orientation. The angular velocity \omega(t) returned by the gyroscope is corrupted mainly by white and flicker noise. The relative direction change is calculated by integrating the angular velocity according to Equation (1):

\psi(t) = \int_0^t \omega(\tau)\,d\tau = \int_0^t (\hat{\omega}(\tau) + \omega_w(\tau) + \omega_f(\tau))\,d\tau   (1)

\psi(t) = \hat{\psi}(t) + \psi_{ARW}(t) + \psi_F(t)   (2)

\psi_{ARW}(t) = \int_0^t \omega_w(\tau)\,d\tau   (3)

\psi_F(t) = \int_0^t \omega_f(\tau)\,d\tau   (4)

where \hat{\omega}(t) denotes the true angular velocity, \omega_w(t) white noise and \omega_f(t) flicker noise in the angular velocity readouts. \hat{\psi}(t) is the true change in the heading direction. Integrated white noise \omega_w(t) results in a first-order random walk, called in this case angular random walk (ARW) and denoted by \psi_{ARW}(t) in Equations (2) and (3). The angular random walk \psi_{ARW}(t) is described by the Gaussian distribution, but it is not a stationary process, as its standard deviation grows with the square root of the integration time [31].
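This square-root growth can be checked numerically. The sketch below integrates synthetic white gyro noise (unit noise density, arbitrary units, not the ADIS16355 figures) and compares the Monte-Carlo spread of the integrated angle at two integration times.

```python
import math
import random

random.seed(1)

def integrate_white_noise(sigma_w, dT, n):
    """Integrate white angular-rate noise -> angular random walk (Equation (3))."""
    psi = 0.0
    for _ in range(n):
        psi += random.gauss(0.0, sigma_w) * dT
    return psi

def std_of_walk(n, trials=2000, sigma_w=1.0, dT=1.0):
    """Monte-Carlo standard deviation of the integrated angle after n samples."""
    vals = [integrate_white_noise(sigma_w, dT, n) for _ in range(trials)]
    mean = sum(vals) / trials
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / trials)

# Quadrupling the integration time should roughly double the spread,
# since the ARW standard deviation grows with the square root of time.
ratio = std_of_walk(400) / std_of_walk(100)
```

The measured ratio clusters around 2, in line with the sqrt(t) behaviour quoted from [31].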
For the sensor used in the project, the standard deviation of \psi_{ARW} grows at a rate of 2.77°/√h.

Figure 1. The block diagram of the system. The electronic module is a dedicated electronic circuit comprising a 6DOF sensor, a GPS receiver and a microcontroller. The module is connected to the PC via a USB interface. The stereovision block represents a stereo camera.

Figure 2. A picture of the built electronic module. The black box is the 6DOF sensor: a 3-axial gyroscope and a 3-axial accelerometer. The orange cuboid is the antenna of the GPS receiver. The module is connected to the PC through a mini-USB socket.

Similarly, integrated flicker noise \omega_f(t) causes an angular error \psi_F(t), according to Equation (4). The values of \psi_F are described by the Gaussian distribution, whose standard deviation grows proportionally with time; therefore it is a non-stationary process. In the case of the gyroscope used in the project, the standard deviation of \psi_F grows at a rate of 48°/h. Therefore, the confidence interval for estimating \psi(t) widens with the time t. After an hour, a motionless gyro can drift by 144° for the 3σ confidence interval. Hence, dead reckoning is justified for short periods only, e.g., during GPS outages. Inertial sensors also suffer from other errors, like non-linearity or a temperature-dependent random bias. The former error is 0.3°/s. The experienced random bias due to temperature was much higher than the manufacturer claims and reads 0.5°/s for the 3σ confidence interval. This is, however, easy to compensate by leaving the sensor motionless and measuring the constant offset, which is later subtracted from the readouts.

Accelerometer. The accelerometer is used in the project to estimate the user's steps and their lengths. This method is more precise than integrating the acceleration readouts twice to obtain the displacement. The acceleration readouts are also used to estimate the gravity direction, which is necessary for the correct calculation of the orientation change.
By analogy, the velocity random walk is a product of the integration of white noise in the acceleration. On average, the velocity random walk was 0.5 m/s/√h. Flicker noise introduces an error of 9.3 m/s/h. The displacement errors are much larger, since they are products of integrating the velocity errors. Any offset in the acceleration readouts grows quadratically with time. This quickly leads to an aggregation of errors and renders the analytical approach ineffective after a dozen seconds.

3.1.2. GPS Receiver

The PCB board houses a standard GPS receiver with an integrated ceramic antenna measuring 15 mm by 15 mm. The receiver is built on the MTK MT3329 chipset and supports WAAS, EGNOS, MSAS and GAGAN corrections. GPS readouts are based on the WGS84 coordinate system, aligned at the centre of the Earth, which is represented by an ellipsoid. The conversion to a local coordinate system follows in two steps. Firstly, the polar coordinates, i.e., latitude, longitude and height above the ground, are transformed to the WGS84 Cartesian coordinates (x_{WGS84}, y_{WGS84}, z_{WGS84}). This step, along with the associated errors and corrections, is described in detail in [32]. Secondly, the local coordinates (x, y, z) are obtained by the simplified Helmert transformation (5):

\begin{pmatrix} x \\ y \\ z \end{pmatrix} = R \left( \begin{pmatrix} x_{WGS84} \\ y_{WGS84} \\ z_{WGS84} \end{pmatrix} - T \right)   (5)

The rotation matrix R and the translation vector T are structured in such a way that the centre of the local coordinate system lies on the Earth's surface. The unit vector 1_x is directed along the longitude, 1_y along the latitude (towards the pole), and 1_z is determined by 1_x and 1_y. The conversion accuracy is on the order of 10 cm within 20 km of the centre of the local coordinate system. A GPS receiver provides the HDOP (Horizontal Dilution of Precision) parameter, which informs about the accuracy of the estimated coordinates, i.e., latitude and longitude (6).
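The two-step conversion described above can be sketched as follows. The ellipsoid constants are the standard WGS84 values; the east-north-up rotation stands in for the paper's R and T, and the reference point near Lodz is purely illustrative.

```python
import math

# Standard WGS84 ellipsoid constants.
A = 6378137.0             # semi-major axis, m
E2 = 6.69437999014e-3     # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Step 1: latitude/longitude/height -> Earth-centred Cartesian coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    return ((n + h) * math.cos(lat) * math.cos(lon),
            (n + h) * math.cos(lat) * math.sin(lon),
            (n * (1.0 - E2) + h) * math.sin(lat))

def ecef_to_local(p, ref_lat_deg, ref_lon_deg, ref_h):
    """Step 2: rotate/translate ECEF into an east-north-up frame whose origin
    lies at the reference point on the Earth's surface."""
    lat, lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    ox, oy, oz = geodetic_to_ecef(ref_lat_deg, ref_lon_deg, ref_h)
    dx, dy, dz = p[0] - ox, p[1] - oy, p[2] - oz
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy + math.sin(lat) * dz)
    return east, north, up

# A point 0.001 deg of latitude (~111 m) north of an illustrative reference.
ref = (51.75, 19.45, 200.0)
pt = geodetic_to_ecef(51.751, 19.45, 200.0)
e, n, u = ecef_to_local(pt, *ref)
```

A 0.001° latitude offset maps to roughly 111 m of northing with negligible easting, which matches the ca. 10 cm conversion accuracy claim at short ranges from the local origin.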
Higher values of HDOP should inform a dead reckoning filter to trust other sources of positioning (e.g., inertial sensors) and disregard the GPS readouts:

HDOP = \sqrt{\sigma_x^2 + \sigma_y^2}   (6)

The standard deviations of estimating x and y on the Earth's surface are denoted by \sigma_x and \sigma_y, respectively. Provided that x and y have Gaussian distributions and are independent, the probability density function of the distance error r_e is given by the Rayleigh distribution [33], which is not a monotone function of its argument; the function has a maximum for r_e > 0. Tests showed that the function defined by Equations (7), (8) and (9) yields definitively superior results. This can be explained by the poor relationship between the HDOP parameter and the error r_e. Since r_e can assume only positive values, the Gaussian function p_{GPS}(r_e; \sigma_{GPS}) is accordingly scaled by a factor of 2:

p_{GPS}(r_e; \sigma_{GPS}) = \frac{2}{\sigma_{GPS}\sqrt{2\pi}} \exp\left(-\frac{r_e^2}{2\sigma_{GPS}^2}\right)   (7)

r_e = \sqrt{\tilde{x}^2 + \tilde{y}^2}   (8)

\sigma_{GPS} = \kappa \cdot HDOP   (9)

The coefficient \kappa in Equation (9) is chosen by trial and error. If a GPS receiver loses track of the satellites, the prediction of its position is based on the previous velocities and positions. The first-order autocorrelation coefficients for \tilde{x} and \tilde{y} were calculated for a 2.6 km path, see Equation (10). They had considerable values of a_x \approx a_y \approx 0.86. Thus \tilde{x} and \tilde{y} can be modelled by Equation (11), which is known as exponentially correlated noise:

a_x = \frac{\sum_{t=1}^{n-1} \tilde{x}(t)\tilde{x}(t+1)}{\sum_{t=1}^{n-1} \tilde{x}^2(t)}   (10)

\begin{pmatrix} \tilde{x}(t) \\ \tilde{y}(t) \end{pmatrix} = \begin{pmatrix} a_x & 0 \\ 0 & a_y \end{pmatrix} \begin{pmatrix} \tilde{x}(t-1) \\ \tilde{y}(t-1) \end{pmatrix} + \begin{pmatrix} v_x(t) \\ v_y(t) \end{pmatrix}   (11)

where v_x(t) and v_y(t) are random realizations of white noise, whereby E[v_x^2(t)] = (1 - a_x^2)\sigma_x^2 and E[v_y^2(t)] = (1 - a_y^2)\sigma_y^2. This is a very important point. There are periods of, e.g., 30 s when the GPS readouts stray away by 40 m in one direction. A dead reckoning system that weights the GPS coordinates with the inertial sensors' readouts to produce a better position estimate will succumb to this bias error sooner or later.
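The exponentially correlated noise model of Equations (10)-(11) is easy to simulate. The sketch below generates one axis with a = 0.86 and recovers the coefficient with the Equation-(10)-style estimator; the variance and the sample count are arbitrary.

```python
import random

random.seed(2)

def exp_corr_noise(a, sigma, n):
    """One axis of Equation (11): x(t) = a*x(t-1) + v(t), where
    E[v^2] = (1 - a^2)*sigma^2 keeps the stationary variance at sigma^2."""
    var_v = (1.0 - a * a) * sigma * sigma
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + random.gauss(0.0, var_v ** 0.5)
        out.append(x)
    return out

def lag1_coeff(xs):
    """First-order autocorrelation coefficient, as in Equation (10)."""
    num = sum(xs[t] * xs[t + 1] for t in range(len(xs) - 1))
    den = sum(xs[t] * xs[t] for t in range(len(xs) - 1))
    return num / den

xs = exp_corr_noise(a=0.86, sigma=5.0, n=50000)
a_hat = lag1_coeff(xs)
```

The recovered coefficient lands close to the 0.86 measured on the 2.6 km path, illustrating why consecutive GPS errors cannot be treated as independent white noise.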
When the GPS receiver regains good visibility of the satellites, the estimated position will suddenly be in a quite different location. The dead reckoning filter will then take some time to adapt to the new, accurate GPS position, and during this time the filter output will be encumbered with large errors. A way out is to recognize the situation and reinitialize the filter. To account for these random bias errors, another source of absolute positioning should be introduced; maps and video data are examples thereof.

Figure 3. The relation between the values of the HDOP parameter and the distance errors r_e(t).

Another problem associated with GPS readouts is the correspondence of the HDOP parameter with the actual accuracy of the latitude and longitude. Equation (6) shows that the value of HDOP is interpreted as the distance error. Figure 3 sheds light on the problem. It was noticed that the interpretation of the HDOP parameter depends on the GPS receiver at hand. The correlation coefficient (12) shows a poor linear dependence between the distance errors r_e(t) and the values of HDOP:

\rho(r_e(t), HDOP(t)) = \frac{Cov(r_e(t), HDOP(t))}{\sqrt{Var(r_e(t))\,Var(HDOP(t))}} = 0.19   (12)

The number of observed satellites and the position error are not correlated either. This confirms the statement about the necessity of additional positioning data.

3.2. Pedometer

When a person walks, the body undulates in unison with the steps. The step-detection algorithm examines the acceleration of the body only in the vertical direction (parallel to the gravity axis). The gravity axis is obtained from the algorithm presented in Section 3.3. The acceleration signal also reflects knee movements, body swings, sensor jerks and so forth. A 3 Hz, 4th-order low-pass Butterworth filter was applied to eliminate the undesired constituents. The step detection algorithm is presented in Figure 4. The minimum acceleration A_min and the maximum acceleration A_max in the filtered signal are detected.
The difference between the peaks is raised to the empirical power of 0.25. The result is multiplied by the scale factor K, which depends on the individual. Equation (13) expresses the estimated step length d. Related work can be found in [10,13,34]. Measured step length accuracies are between 1% and 10%, depending on the individual and the gait style.

d = (A_{max} - A_{min})^{0.25} \cdot K   (13)

Figure 4. The algorithm for estimating the number of steps and their lengths.

3.3. Tilt Estimation

Tilt estimation, i.e., estimation of the direction of the gravity axis, is important for two reasons:

1. Calculation of the relative change in the heading direction. When the sensor is not levelled, the gyroscope's axis does not coincide with the pivotal axis of the pedestrian. When the user turns, the relative change in the heading direction is underestimated. For example, when the gyroscope axis is tilted by 25° from the pivotal axis, the angular velocity is underestimated by 10%. Consequently, the relative direction change might read 81° instead of 90°, c.f. Figure 5.

2. Estimation of the camera orientation with respect to the ground. The stereo camera is used to measure the distance to the nearest objects, like buildings. When the camera is not levelled, the measured distance will be overestimated.

A human rotates about the gravity axis when changing the walking direction. Thus, the task of estimating the correct change in direction comes down to finding the gravity axis. The problem is known from aeronautics and Inertial Navigation Systems. The estimation of the six coordinates of an aeroplane (three for orientation and three for position) is not a trivial task. A standard approach based on off-the-shelf sensors and direction cosine matrices is very inaccurate after a few dozen seconds [12]. However, humans move with much smaller accelerations. The dominant acceleration comes from gravity. When a person moves, the body is usually aligned vertically.
The pitch and roll are limited to ±30°. These assumptions greatly help to design an algorithm estimating the gravity axis. The algorithm uses a Kalman filter to integrate the readouts from the gyroscopes and accelerometers. The angles between the sensor's axes and the gravity axis (see Figure 5) can be calculated from Equations (14) and (15), given the sensor is not moving:

α_x = arccos(a_x / g),  φ_x = α_x − π/2    (14)
α_y = arccos(a_y / g),  φ_y = α_y − π/2    (15)
α_z = arccos(a_z / g)    (16)

where a_x, a_y and a_z are the accelerations measured by the 3-axial accelerometer and g is the gravity constant, 9.81 m/s². When a user walks, the accelerations fluctuate by about 0.25 g, depending on the walking style. The acceleration vector due to walking is treated here as measurement noise; hence, calculating the direction of the gravity axis from Equation (16) alone would result in an error of up to 15°. The accelerations need to meet the constraint (17):

a_x² + a_y² + a_z² = g²    (17)

Figure 5. The local coordinate system of the 6DOF sensor. g denotes the gravity acceleration. ω_x, ω_y and ω_z are angular velocities, clockwise-oriented with respect to their rotation axes. α_x, α_y and α_z are the angles between the sensor's axes and the gravity.

This constraint reduces the number of degrees of freedom by one. Gyroscopes can be used to improve the estimation of the sensor orientation. The transition equation of the Extended Kalman Filter (EKF) is given by Equations (18) and (19). A non-zero angular velocity ω_y causes the X axis of the sensor to change its orientation with respect to the gravity; this change also depends on the orientation of the Y axis, which is expressed by the sin term:

x(k) = [φ_x(k); φ_y(k)] = f(x(k−1), u(k−1), w(k−1))    (18)

= [φ_x(k−1) + (ω_y(k−1) + w_y(k−1)) dT sin α_y(k−1);
   φ_y(k−1) − (ω_x(k−1) + w_x(k−1)) dT sin α_x(k−1)]    (19)

k denotes the time instant, where t = k·dT and 1/dT = 205 Hz. w_y(k) and w_x(k) are the gyroscope errors.
The measurement equation of the EKF is given by Equation (20):

y(k) = [a_x(k); a_y(k)] = g(x(k), v(k)) = [g cos α_x(k) + v_x(k); g cos α_y(k) + v_y(k)]    (20)

The accelerometer readouts are corrupted by the noise v(k). Further implementation of this Extended Kalman Filter is straightforward and follows through Taylor linearisation; details can be found in [35]. Having estimated φ_x and φ_y and the complementary angles α_x and α_y, the angle ψ between the Z-axis gyroscope and the inverted gravity axis (see Figure 6) can be calculated from Equation (21):

ψ = 2 arcsin( √( sin²(φ_x/2) + sin²(φ_y/2) ) )    (21)

Figure 6. The relationship between the angles φ_x and φ_y and the angle ψ, which describes the sensor tilt and is measured between the sensor's Z-axis and the −g axis.

The angular velocity can be corrected by Equation (22), and the relative heading direction should be calculated from Equation (23):

ω̂_z(t) = ω_z(t) / cos(ψ)    (22)

θ(t) = ∫ ω̂_z(t) dt    (23)

As mentioned before, this approach is valid for a limited range of pitch and roll, due to the singularity in Equation (22) when ψ approaches π/2. The measurement of ψ could also be included from Equation (16), however with small benefit, due to the considerable fluctuations of the corresponding acceleration a_z. The average error of estimating ψ was measured to be 0.5°, with a standard deviation of 0.45°. Figure 7(a,b) shows a trial with the sensor tilted by 15°.

Figure 7. (a) Path plotted from the pedometer and gyroscope readouts; the gyroscope readouts are corrected by the gravity-estimation algorithm. (b) Path with no gyroscope correction. The point (x, y) = (0, 0) denotes the starting point. There were 20 turns, each of 90°. The trial started and ended at the same point. The difference between the heading direction at the beginning and at the end was 21°, which corresponds to about 1° of error per rotation.

3.4. Heading Estimation

A gyroscope provides an angular velocity ω. A direction change can be found by integrating ω (c.f. Equation (23)).
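The tilt-compensated heading integration of Equations (21)–(23) can be sketched as below. The half-angle form of the tilt formula is one plausible reading of the original Equation (21) and is exact when one of the two tilt angles is zero:

```python
import math

def tilt_angle(phi_x, phi_y):
    """Tilt angle psi between the sensor Z axis and the inverted gravity
    axis, from the tilt angles phi_x, phi_y (radians).  Assumed half-angle
    reconstruction of Eq. (21)."""
    s = math.sin(phi_x / 2.0) ** 2 + math.sin(phi_y / 2.0) ** 2
    return 2.0 * math.asin(math.sqrt(s))

def corrected_heading(omega_z_samples, phi_x, phi_y, dT):
    """Eqs. (22)-(23): divide each gyro readout by cos(psi), then
    integrate over time to get the relative heading change."""
    psi = tilt_angle(phi_x, phi_y)
    return sum(w / math.cos(psi) * dT for w in omega_z_samples)
```

For a levelled sensor (psi = 0) the correction is a no-op; as psi grows toward π/2 the 1/cos(psi) factor diverges, which is the singularity noted in the text.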
Thus a gyroscope can provide only a relative orientation, as opposed to a magnetic compass. Experiments with a magnetic compass showed its susceptibility to magnetic field distortions due to tram lines, DC power lines with no return cable, cars changing the local magnetic field, etc. The magnitude of the Earth's magnetic field is ca. 50 μT; by comparison, a departing tram 120 m away produces a magnetic induction of 5 μT. Hence, a magnetic compass was not included. The absolute pedestrian orientation is sorted out at run-time by the particle filtering algorithm, based on other data. A magnetic compass could give a rough approximation (±90°) until the absolute orientation has been found, and then be switched off.

3.5. Probability Map

The digital map is divided into three types of area, each with an associated weight:

forbidden areas—buildings, ponds, walls, fences, etc., w_map(x, y) = 0.0,
probable areas—lawns, fields, etc., w_map(x, y) ∈ (0, 1),
preferred areas—pavements, streets, squares, alleys, etc., w_map(x, y) = 1.0.

w_map(x, y) denotes the type of area at the (x, y) coordinates. Each weight corresponds to the probability of the user being in the given area. Since the system is destined for outdoor navigation, it is rather improbable that the user will traverse buildings, walls and fences—c.f. Figure 8. The user is likely to travel along pavements and alleys; he or she can also walk across lawns, although this is less likely. The probability map mitigates the problem of gyroscope drift and stepmeter inaccuracy. The particle filtering approach to the discussed problem will be explained in Section 3.9. Here the term particles denotes a collection of hypothetical locations of the user, and the weight associated with each particle is understood as the probability of the user being at that location. When particles enter a forbidden area, they are gradually eliminated; particles that move along a preferred area are preserved.
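The three-level weighting can be sketched as a simple grid lookup. The grid layout, the cell size and the 0.3 weight for probable areas are assumed values for illustration:

```python
# Assumed weights: 0.0 forbidden, a value in (0, 1) probable, 1.0 preferred.
AREA_WEIGHTS = {"building": 0.0, "pond": 0.0, "wall": 0.0, "fence": 0.0,
                "lawn": 0.3, "field": 0.3,
                "pavement": 1.0, "street": 1.0, "square": 1.0, "alley": 1.0}

def w_map(area_grid, x, y, cell=1.0):
    """Return the probability-map weight w_map(x, y) at local coordinates.
    area_grid is a 2-D list of area-type strings; cell is the grid cell
    size in metres (both are hypothetical representations)."""
    i, j = int(y // cell), int(x // cell)
    return AREA_WEIGHTS[area_grid[i][j]]
```

In the particle filter, this weight multiplies a particle's weight at each update, so particles sitting on a building are killed outright while particles on a lawn merely decay.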
Therefore, the estimated direction aligns itself and the direction drift is reduced. The weight assigned to probable areas should not be too low, because otherwise, when a user crosses a wide street, the particles' weights would be successively reduced and the particles would consequently die out; the user's location would then not be correctly calculated. The project was developed with a view to pedestrians, especially blind ones.

Figure 8. An example of a probability map of the University Campus. Forbidden, probable and preferred areas are denoted by black, grey and white respectively. The streets at the campus are paved and have the same texture as pavements. A blind user would rather refrain from walking through lawns.

3.6. Stereovision Camera

3.6.1. Introduction

A stereovision camera [28] comprises two cameras which are shifted apart along the X axis, c.f. Figure 9. The distance between the cameras is called the baseline of the stereo system and is denoted by B. The baseline makes the same scene point P visible in the left and right cameras at coordinates x_L and x_R respectively. The difference between the x_L and x_R values is called the disparity and will be denoted by x_d. The z coordinate of the observed object can be calculated from Equation (24), where f is the focal length:

z = B·f / x_d    (24)

For legibility's sake, the stereovision system can be treated as one camera, whose simplified coordinate system is shown in Figure 9. The Z axis coincides with the optical axis of the camera model.

Figure 9. The local coordinate system of the stereovision camera. F is the focus of the camera, f is the focal length, and (x, y, z) are the coordinates of a point in the camera coordinate system. This point has the coordinates (x_p, y_p, 0) on the projection plane P of the camera, and θ denotes the angle from the Z axis at which the point (x, y, z) is visible by the camera.
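Equation (24) maps disparity to depth, and the same reciprocal maps depth back to disparity (Equation (31) later in the text). A minimal sketch, with the baseline–focal product Bf an assumed value chosen to be consistent with the f = 279 pix quoted later:

```python
def depth_from_disparity(x_d, Bf=33.48):
    """Eq. (24): z = B*f / x_d.  Bf is the baseline-focal product in
    metre*pixels (assumed value); x_d is the disparity in pixels.
    Because the mapping is a reciprocal, applying it twice returns the
    original value (depth -> disparity -> depth)."""
    return Bf / x_d
```

Note the reciprocal form: equal steps in disparity correspond to ever larger steps in depth, which is the root of the error behaviour analysed in Section 3.6.2.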
The coordinates of a point (x, y, z) are projected onto the camera projection plane P according to the transformation (25):

[x_p; y_p] = [ f/(z+f), 0, 0 ; 0, f/(z+f), 0 ] · [x; y; z]    (25)

The depth of a point (x, y, z), denoted by d_P(x, y, z), is understood as the distance from the point to the projection plane and equals:

d_P(x, y, z) = z    (26)

The distance from the Y axis of the camera to a point with the (x, y, z) coordinates is denoted by d_Y(x, y, z) and calculated from Equation (27):

d_Y(x, y, z) = √(x² + z²)    (27)

Let us define a function which returns the distance from the camera's Y axis to the nearest point visible at the angle θ from the optical axis Z, c.f. Figure 9. This function is defined by Equation (28):

d(θ) = min d_Y(x, y, z), where tan θ = x_p/f ≈ x/z    (28)

The data from the stereovision camera is digitized: the (x_p, y_p) coordinates are quantized and converted to units called pixels. The transformed coordinates are denoted by (x_pix, y_pix). The conversion follows Equation (29):

[x_pix; y_pix] = [ Round( (f_pix/f)·x_p ); Round( (f_pix/f)·y_p ) ]    (29)

where the Round() operator denotes rounding to the nearest integer and f_pix is the focal length of the camera expressed in pixels, provided by the manufacturer in the technical specification. The difference between the maximal and minimal values of x_pix is called the horizontal resolution of the camera and is denoted by x_res; the vertical resolution y_res is defined by analogy. A depth map (or depth picture) L is a two-dimensional map defined by Equation (30):

L(x_pix, y_pix) = min d_P(x, y, z), where (x, y) → (x_pix, y_pix) via (25) and (29)    (30)

The coordinates (x, y) are related to (x_pix, y_pix) through the transformations (25) and (29). Equation (30) formally says that a stereo camera provides the depth to the nearest points that are not occluded—a so-called 2.5-dimensional image. Usually, the depth for a given point (x_pix, y_pix) is represented by a floating-point number.
Using the identity (24), the disparity map K can be defined by Equation (31):

K(x_pix, y_pix) = B·f / L(x_pix, y_pix)    (31)

As a matter of fact, stereovision cameras first determine the disparity map K and then calculate the depth map L. The disparity x_d is calculated in digital cameras in pixels, thus K should be a map containing integer numbers. An example of a depth map is presented in Figure 12(b). Advanced stereo cameras perform subpixel interpolation; e.g., the camera used in the project calculates the disparity with a resolution of 0.25 pixel and then performs filtering and interpolation on the depth map.

3.6.2. Error Analysis

As an object recedes from the camera, the disparity x_d decreases and an error in calculating it becomes more significant. Calculating the derivative of z with respect to x_d in Equation (24) gives Equation (32); the accuracy of determining the distance z decreases with its squared value:

Δz ≈ −(B·f / x_d²) Δx_d = −(z² / (B·f)) Δx_d    (32)

As the stereo camera is digital, the values of x_L and x_R are quantized, and so is x_d. This means that z manifests discontinuities as x_d changes to an adjacent quantum. Let ε denote the size of a pixel in metric units. For the sake of simplicity, x_d will have units of metres or pixels, whereby the appropriate unit will be clear from the context or mentioned explicitly. Let us assume at this stage that the error of determining the pixel x_d can be ε or −ε. Figure 10 exemplifies the effect of such quantization.

Figure 10. An illustration of the quantization effect on estimating the z coordinate. (a) The plot of z and errors thereof as a function of the disparity x_d. Dots symbolize the depth z calculated from z = B·f/x_d; bars symbolize the errors of z when x_d is encumbered with a pixel error of ε or −ε. (b) The absolute error of z as a function of the depth z.
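The quadratic growth of the depth error in Equation (32) can be sketched directly. The disparity error of 0.25 pix matches the camera's sub-pixel resolution quoted above; Bf is the same assumed baseline-focal product as before:

```python
def depth_error(z, dx_d=0.25, Bf=33.48):
    """Eq. (32): |dz| ~= (z**2 / (B*f)) * |dx_d|.  The depth error grows
    with the square of the distance.  dx_d = 0.25 pix is the camera's
    sub-pixel disparity resolution; Bf is an assumed value."""
    return z * z / Bf * dx_d
```

Doubling the distance quadruples the error, which is why the 3D matching cost function of Section 3.8 must be more tolerant at larger distances.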
The error of determining the disparity does not come only from quantizing x_d; if it did, the disparity maps presented in Section 3.8 would be devoid of any artifacts or falsely calculated disparities—see [36] for a full clarification.

Figure 11. (a) A plot of f_z(ẑ|z) for different values of z. As z increases, the conditional probability density function widens. (b) A 3D plot of f_z. For illustration purposes, σ_d was assumed to be 6 pix, whereby, to account for the units, f = 279 pix was substituted.

The camera used in this work calculates the disparity by correlating small blocks (e.g., 7 × 7 pixels) and looking for the best match. The probability density function of the disparity error is disputable in this case; let it be the Gaussian distribution, expressed by Equation (33), where Δx_d denotes the disparity error:

f_d(Δx_d) = (1 / (√(2π) σ_d)) exp( −Δx_d² / (2σ_d²) )    (33)

The probability density function of ẑ can be calculated from Equation (34). The function f_z is conditioned on z, i.e., the error Δz depends on the value of z, as mentioned before:

f_z(ẑ|z) = f_d(Δx_d) · |∂(Δx_d)/∂(Δz)|    (34)

Equation (32) holds only for small Δx_d. The exact value of ẑ can be computed from Equation (35), and the derivative necessary for the transformation is given by Equation (36):

ẑ = B·f / (x_d + Δx_d)    (35)

∂(Δx_d)/∂(Δz) = −B·f / (z + Δz)²    (36)

Thus, f_z can be calculated from Equation (37):

f_z(ẑ|z) = ( B·f / (√(2π) σ_d (z + Δz)²) ) exp( −(B·f·Δz)² / (2σ_d² z² (z + Δz)²) )    (37)

The shape of the function f_z(ẑ|z) for different values of z is presented in Figure 11. The function is skewed, as one should have suspected, and decays more slowly for Δz greater than 0.

3.7. 3D Map of an Urban Terrain

A precise 3D map of the urban terrain was provided by the local cartographic office of Lodz (MODGiK). The map is divided into layers containing: roads, buildings, entrances to buildings, steps, fences, lawns, trees, bushes, ponds, lamp posts and many other data not used in the project.
The accuracy of the building deployment is on the order of single centimetres; the buildings' heights were measured with somewhat worse accuracy. The radius of tree crowns was assumed to be 3 m and the radius of lamp posts 0.5 m.

3.8. 3D Matching Algorithm

The location estimated from the stepmeter, gyroscope and digital maps is accurate to within 15 m (a street with pavements on both sides) [3]. GPS errors in urban canyons being several dozens of metres, practically both sides of a road are equally probable.

Figure 12. (a) TV camera image. (b) Disparity map from the stereo camera; darker colours correspond to smaller disparity, thus bigger distance. (c) 3D model of the corresponding urban environment.

Figures 12(a–c) show a picture from a monocamera, a disparity map K and a part of the 3D model of the University Campus. Let us use the local coordinate system defined in Section 3.1.2. Let the vector [x'_S, y'_S, z'_S, θ_S]^T denote the Cartesian coordinates and the orientation of the stereo camera in the local coordinate system. θ_S is called the azimuth of the stereo camera and has the same interpretation as with a magnetic compass. It is assumed that the stereovision camera is worn by the pedestrian, attached to the chest, say at z'_S = h = 1.5 m above the ground, with the optical axis of the camera directed roughly parallel to the ground plane. The positioning algorithm has a precise 3D model of the environment defined in the local coordinate system—see Figure 12(c). Let us put a virtual camera at the coordinates (x'_V, y'_V, z'_V) and set its orientation to θ_V. Then, let us render the virtual environment based on the 3D model; the virtual camera should see the picture presented in Figure 12(c). On the other hand, the stereovision camera provides a depth image of the environment. Both images can be compared.
Let L_S and L_V denote the depth images returned by the stereovision camera and retrieved from the virtual environment, correspondingly. f_L(L_S, L_V) is a cost function comparing the two depth images and returning a comparison error. The function is defined by Equation (38):

f_L : (L, L) → R    (38)

Formally, the problem comes down to finding the vector [x'_V, y'_V, z'_V, θ_V]^T for which the function f_L(L_S, L_V) returns a minimum value, that is, for which the difference ||L_S − L_V|| is minimal according to some criterion. The estimated coordinates of the stereo camera (that is, the pedestrian's location) are then given by Equation (39). This task is handled by the particle filtering algorithm.

[x'_S, y'_S, z'_S, θ_S]^T = [x'_V, y'_V, z'_V, θ_V]^T    (39)

Assuming z'_S = z'_V = h reduces the dimension of the problem. The function f_L is still not trivial. The depth picture L_S is encumbered with many errors, see Figure 12(b); for example, sun illumination causes some parts of the depth map to have falsely determined values (see Figure 13). The function f_L should be immune to these undesirable effects, which can cause random bias errors—the most troublesome errors to correct by any filter, including the particle filter used in the presented project. A random bias error is understood as an error whose expected value differs considerably from 0 for a significant period of time, e.g., 30 seconds. There can also be objects that compromise the comparison between L_S and L_V and cannot be filtered out by any method, i.e., parked cars, trucks, bushes, posters, garbage bins, etc. Their influence can be minimized to some extent by excluding from the comparison the part of L_S that is below the optical axis of the camera; hence the ground surface is also excluded. First, L_S is analysed to seek fragments for comparison. The method of analysing L_S depends on many empirical factors that were verified during many trials. The idea is as follows:

1.
Reject the part of the depth map L_S that is below the optical axis of the camera.

2. Divide the upper part of L_S vertically into n_S = 5 equal regions S_j, j = 1…n_S—see Figure 13(a). This forces the depth maps L_S and L_V to be compared in different regions, which, to some extent, decorrelates random bias errors.

3. For j = 1 to n_S:

(a) From the area S_j, pick at random a small square window W_j of W_W by W_W pixels, where W_W = 24 pix.

(b) Check whether W_j is devoid of artifacts like sun illumination, falsely determined disparity, etc.—c.f. Figure 13(b).

(c) Calculate the angle θ_j at which the camera sees the object pointed at by W_j (compare with the angle θ in Figure 9).

(d) Calculate the distance d_Sj to the object encompassed by W_j.

(e) Check what the virtual camera sees at the angle θ_j from the coordinates [x'_V, y'_V, z'_V, θ_V]^T; calculate in this direction the distance d_Vj to the nearest object.

(f) Compare the distances d_Sj and d_Vj by the distance error function f_Lj.

4. Return the comparison error between L_S and L_V as f_L(L_S, L_V) = f_L1 · f_L2 · f_L3 · f_L4 · f_L5.

Figure 13. (a) An example of a disparity map K. Darker colours correspond to smaller values of disparity, thus larger distances. Example windows for each region S_j (j = 1…5) are denoted by W_1 through W_5. One can see that for the windows W_2 and W_3 the depth map was determined correctly, and these windows should be compared with the corresponding windows in the L_V depth map. (b) The disparity values x_d for each window W_j as a function of the x_pix coordinate in the disparity picture. For the W_1, W_4 and W_5 windows, the disparity x_d changes from 0 pix to 2 pix, which corresponds to distances of 1 to 16 m; these windows should be rejected from further analysis.

Ad. 1, 2. The areas S_j are shown in Figure 13(a).

Ad. 3a. The window W_j is picked at random from the corresponding area S_j, according to Figure 13(a).

Ad. 3b.
By trial and error, the empirical measure (40) was proposed to check whether the disparity map for W_j is devoid of artifacts. x_d^max (41) and x_d^min (42) are the maximum and minimum values of the disparities for a given window W_j:

k_j = x_d^min(W_j) / x_d^max(W_j)    (40)

x_d^max(W_j) = max( K(x_pix, y_pix) ), ∀(x_pix, y_pix) ∈ W_j    (41)

x_d^min(W_j) = min( K(x_pix, y_pix) ), ∀(x_pix, y_pix) ∈ W_j    (42)

The window W_j is stated to be devoid of artifacts if k_j > k_thr, where an empirical value of k_thr ≈ 0.8.

Ad. 3c. The angle θ_j at which the stereovision camera sees the object encompassed by the window W_j can be calculated from Equation (43), where x_pix^(c) denotes the x_pix coordinate of the centre of W_j:

θ_j = arctan( x_pix^(c) / f_pix )    (43)

Ad. 3d. The distance d_Sj to the object pointed at by W_j can be calculated from Equation (44). This is an arithmetical average of the depths, divided by the cos θ_j factor to obtain the Euclidean distance—c.f. Equations (26) and (27), which show the difference between the depth of a point and the distance to the camera's Y axis:

d_Sj(θ_j) = (1 / cos θ_j) · (1 / W_W²) · ΣΣ L_S(x_pix, y_pix), (x_pix, y_pix) ∈ W_j    (44)

Ad. 3e. Given the coordinates and orientation of the virtual camera in the 3D environment, the distance d_Vj from the virtual camera to the nearest object at the angle θ_j can easily be obtained.

Ad. 3f. The corresponding distances d_Sj and d_Vj are compared by the error function f_Lj. For the integrity of Equation (47), the function f_Lj should return 1 if the window W_j was rejected from the comparison, i.e., if k_j ≤ k_thr. The function f_Lj reflects the correspondence between d_Sj and d_Vj, whereby the comparison should be more tolerant for greater distances, according to formula (32)—the accuracy of determining d_Sj drops with its squared value. Following the reasoning in the previous section, the function f_Lj, expressed by Equation (45), was proposed.
The parameter a introduces some tolerance in the case of unexpected objects registered by the stereo camera (e.g., cars, humans), c.f. Figure 14. Experimentally, a ∈ <0.95, 0.98> gives the desired results.

f_Lj(k_j, d_Sj, d_Vj) = 1, if k_j ≤ k_thr;
f_Lj(k_j, d_Sj, d_Vj) = a + (1 − a) exp( −(B·f·Δz)² / (2σ_d² d_Vj² (d_Vj + Δz)²) ), if k_j > k_thr    (45)

Δz = d_Sj − d_Vj    (46)

Ad. 4. Finally, the function f_L comparing the two environments, i.e., the real and the virtual one, can be defined by Equation (47):

f_L(L_S, L_V) = ∏_{j=1…n_S} f_Lj(k_j, d_Sj, d_Vj)    (47)

The 3D matching algorithm is evoked 10 times a second.

Figure 14. (a) An example of the f_Lj function from Equation (45), where k_j > k_thr. Example values of a = 0.5 and σ_d = 6 pix were used (to account for the units, f = 279 pix was substituted). (b) A 3D plot of the f_Lj function, where k_j > k_thr and Δz = d_Sj − d_Vj. The base of the function widens as the compared distances increase.

3.9. Particle Filtering

3.9.1. Introduction

The particle filter is a sequential version of the Monte Carlo method [37], which enables finding solutions of a multidimensional problem by generating a large number of possible solutions, so-called particles, and then verifying these solutions by a given criterion, i.e., an error function. The particle filter can be looked at from the statistical point of view, whereby the solution is represented by a probability density function. The particle filter improves upon the Kalman filter in the case of multi-modal distributions and non-linear dynamics, at the cost of a high computational burden. Let i denote the number of a particle and L the total number of particles, thus i ∈ <1, L>. A particle at a given time instant k is represented by the vector c_i(k) given by Equation (48):

c_i(k) = [x_i(k); w_i(k)]    (48)

where x_i(k) is a possible system state at time instant k and w_i(k) is the weight associated with the i-th particle. A possible system state is understood as a possible solution of the problem in question.
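The window screening and comparison machinery of Section 3.8 (Equations (40)–(47)) can be sketched as below. The exponent in f_Lj follows the error-analysis structure of Equation (37); a, σ_d and the baseline-focal product Bf are assumed values:

```python
import math

def window_ok(disparities, k_thr=0.8):
    """Eqs. (40)-(42): k_j = x_d_min / x_d_max over the window; the
    window is declared artifact-free if k_j > k_thr."""
    return min(disparities) / max(disparities) > k_thr

def window_distance(depths, theta_j):
    """Eq. (44): mean depth over the window, divided by cos(theta_j)
    to obtain the Euclidean distance from the camera's Y axis."""
    return sum(depths) / len(depths) / math.cos(theta_j)

def f_Lj(k_j, d_S, d_V, k_thr=0.8, a=0.96, sigma_d=6.0, Bf=33.48):
    """Eq. (45): per-window comparison error.  Returns 1 for rejected
    windows; otherwise a score in (a, 1] that is more tolerant of
    mismatch at larger distances (assumed parameter values)."""
    if k_j <= k_thr:
        return 1.0
    dz = d_S - d_V                                   # Eq. (46)
    return a + (1.0 - a) * math.exp(-(Bf * dz) ** 2 /
                                    (2.0 * sigma_d ** 2 * (d_V * (d_V + dz)) ** 2))

def f_L(windows):
    """Eq. (47): product of the per-window errors over the n_S regions.
    windows is a list of (k_j, d_Sj, d_Vj) tuples."""
    p = 1.0
    for k_j, d_S, d_V in windows:
        p *= f_Lj(k_j, d_S, d_V)
    return p
```

The floor value a keeps a single disagreeing window (a parked car, a pedestrian) from annihilating a particle's weight in one update.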
In our case x_i(k) represents a hypothetical user location and orientation, x_i(k) = [x_i, y_i, θ_i]^T, where x_i, y_i denote coordinates in the local coordinate system defined in Section 3.1.2 and θ_i is the azimuth orientation. The aim of the simulation is to find the most probable user location. Given the series of measurements y(1)…y(k), the pedestrian's location is given by the probability density function, which can be approximated by the probability mass function (49):

p(x(k)|y(1:k)) ≈ Σ_{i=1…L} δ(x − x_i(k)) · w_i(k)    (49)

where δ() is the Dirac delta function. The weights w_i(k) of all generated particles should sum to unity.

3.9.2. Implementation of the Particle Filtering Algorithm

The implementation of the particle filtering algorithm for processing the positioning data is as follows:

1. Initialization. At the algorithm outset, for k = 1, all the particle states x_i(1) are randomly initialized according to a given distribution. The weights w_i(1) are assigned equal values of 1/L.

2. Prediction. Based on the transition Equation (50), new particle states are predicted:

x_i(k) = f(x_i(k−1), u(k−1), v_i(k−1))    (50)

u(k−1) is a driving vector and v_i(k−1) is a noise vector introduced to the state due to the error of u(k−1), where each particle is perturbed with an individually generated vector v_i(k−1); f() is a transition function that calculates a new state based on the previous one, the driving signals and the errors thereof.

3. Measurement update. Each measurement y(k) updates the weights of the particles by Equation (51):

w_i(k) = w_i(k−1) · p(y(k)|x_i(k))    (51)

p(y(k)|x_i(k)) is the conditional probability density of measuring y(k) given the particle state x_i(k); in other words, it characterizes the error distribution of a given data source. Particles that diverge in the long run from the measurements will assume small weights w_i(k).

4. Weight normalization. The weights are normalized so that they sum to 1—Equation (52):

w_i(k) := w_i(k) / Σ_{i=1…L} w_i(k)    (52)
5. State estimation. The most probable system state is estimated, e.g., by the first moment (53):

x̂(k) = Σ_{i=1…L} x_i(k) · w_i(k)    (53)

6. Resampling. After a number of algorithm iterations, all but a few particles have negligible weights and therefore do not participate effectively in the simulation. This situation is detected by calculating the so-called degeneration indicator, expressed by Equation (54), which is the inverse of the second moment reflecting the dispersion of the weights:

d(k) = 1 / ( L · Σ_{i=1…L} w_i²(k) )    (54)

If all particles have the same weight of 1/L, then d(k) reaches its maximum value of 1. As the weights start to differ, the d(k) indicator decreases. If d(k) falls below a given threshold, a process called resampling is introduced and a new set of particles is created. The probability of copying a particle to the new set is proportional to its weight w_i(k); therefore, particles that poorly approximate the system state are superseded by more "accurate" particles. Then all particles are assigned the same weight 1/L.

7. Go to point 2.

3.9.3. Implementation

Ad. 1. The particles can be initialized, e.g., around a first GPS readout.

Ad. 2. Based on Figure 15, the transition equation for a particle i reads (55):

[x_i(k); y_i(k); θ_i(k)] = [x_i(k−1) + (d(k) + v_i(k)) sin( θ_i(k−1) − (ω_z(k) + w_i(k)) dT );
 y_i(k−1) + (d(k) + v_i(k)) cos( θ_i(k−1) − (ω_z(k) + w_i(k)) dT );
 θ_i(k−1) − (ω_z(k) + w_i(k)) dT]    (55)

Figure 15. An explanation of the unperturbed state transition: d(k) is the length of a step estimated by the pedometer, θ(k) is the azimuth orientation and ω_z(k) is the angular velocity around the gravity axis, estimated by the gyroscope and the gravity-estimation algorithm. (x(k−1), y(k−1)) and (x(k), y(k)) are the coordinates at the previous and current time instants. v_i(k) and w_i(k) are random realizations of the pedometer and gyroscope noise respectively. Each particle is perturbed individually.
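One predict / normalize / estimate / resample cycle (Equations (50)–(55)) can be sketched as below. The measurement updates of the following subsection are omitted; the noise sigmas and the resampling threshold are assumed values:

```python
import math
import random

def particle_filter_step(particles, step_len, omega_z, dT,
                         sigma_step=0.05, sigma_gyro=0.01, d_thr=0.5):
    """One cycle of the particle filter of Section 3.9.  Each particle is
    a list [x, y, theta, w].  step_len comes from the pedometer, omega_z
    from the tilt-corrected gyroscope; sigma_step, sigma_gyro and d_thr
    are assumed values."""
    L = len(particles)
    # Prediction, Eq. (55): each particle is perturbed individually.
    for p in particles:
        d = step_len + random.gauss(0.0, sigma_step)
        th = p[2] - (omega_z + random.gauss(0.0, sigma_gyro)) * dT
        p[0] += d * math.sin(th)
        p[1] += d * math.cos(th)
        p[2] = th
    # Measurement updates (GPS, probability map, 3D matching) would
    # multiply the weights p[3] here; omitted in this sketch.
    # Weight normalization, Eq. (52).
    s = sum(p[3] for p in particles)
    for p in particles:
        p[3] /= s
    # State estimate, Eq. (53): weighted first moment.
    est = [sum(p[i] * p[3] for p in particles) for i in range(3)]
    # Degeneration indicator, Eq. (54); resample when it drops too low.
    d_k = 1.0 / (L * sum(p[3] ** 2 for p in particles))
    if d_k < d_thr:
        new = []
        for _ in range(L):
            r, c = random.random(), 0.0
            for p in particles:
                c += p[3]
                if r <= c:
                    break
            new.append([p[0], p[1], p[2], 1.0 / L])
        particles[:] = new
    return est
```

With equal weights, d_k evaluates to its maximum of 1, so resampling fires only once the weights have genuinely dispersed.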
k is the time instant, where t = k·dT.

Ad. 3. The following sources are used to update the particle weights:

GPS. The measurement update for the GPS readouts is given by Equations (56) through (58), based on Equation (7):

w_i(k) = w_i(k−1) · p_GPS(r_i, σ_GPS)    (56)

r_i = √( (x_i(k) − x_GPS(k))² + (y_i(k) − y_GPS(k))² )    (57)

σ_GPS = β_GPS · HDOP    (58)

β_GPS is a coefficient which enables weighting the GPS readouts with the appropriate importance; larger values of β_GPS make the p_GPS() function more tolerant to GPS errors.

Probability map. The particle weights are multiplied by the map weight at the particle's coordinates:

w_i(k) = w_i(k−1) · w_map(x_i(k), y_i(k))    (59)

Comparison between the 3D virtual environment model and the depth picture retrieved from the stereo camera. The measurement update equation is given by Equation (60):

w_i(k) = w_i(k−1) · f_L,i(L_S, L_V,i)    (60)

For each particle, a virtual environment is generated individually: the coordinates of the virtual camera are set to [x_i, y_i, h, θ_i], where h is the height at which the stereo camera is attached, and then the depth map L_V,i is generated. The function f_L(L_S, L_V,i) is evoked for every particle, which is a computationally demanding procedure. Note that if an unexpected object appears in front of the stereo camera for 10 seconds, the affected particles' weights will be decreased by a^100, c.f. Equation (45) (the 3D matching algorithm is run 10 times a second). This might introduce a large error in the position estimation. Bigger values of a diminish the benefit of using the stereo camera and the 3D model of the urban terrain.

4. Trials

The trials were carried out at the campus of Lodz University of Technology, in an area of ca. 500 m by 500 m. The length of the trial path was ca. 2.6 km. The reference path was restored off-line by analysing the images recorded by the camera. The average error of determining the reference path was ca. 0.5 m; the maximum error might have been 2–3 m. The delineated path is composed of straight lines.
The location error is calculated as the distance between the appropriate path segment and the location estimated by the GPS receiver or the particle filter; the actual positioning errors are thus somewhat larger. On the other hand, GPS errors assume their dominant values in the direction perpendicular to a street. In the direction parallel to a street there are no buildings to occlude the sky; hence the signal from the GPS satellites is not occluded, which results in a small error along the street [2].

5. Results

The results are presented as plots of paths, a histogram and a cumulative histogram of errors, and a table that summarizes the errors. An interesting question is how much a given data source improves the accuracy of the positioning; Table 1 provides the answer. To this end, several off-line tests were run with different combinations of data sources. It was, however, assumed that the starting location is known; otherwise, a bigger number of particles would have been necessary to find a first approximate location. Once a rough location is found, the filter converges. The initial errors might have been large and would therefore have eclipsed the key point of the measurements. On the one hand, this assumption may be justified: a pedestrian, e.g., a blind user, goes out from his home, so he knows his location. On the other hand, a blind pedestrian might use the system when lost. The aforementioned distance error r_e is used as the comparison criterion. The results are evaluated by the following measures:

1. Root-mean-square value, denoted by RMS,
2. Mean error, denoted by μ,
3. CEP50—Circle of Error Probability 50%—the radius of a circle encompassing 50% of errors, i.e., the median value,
4. CEP90—the radius of a circle encompassing 90% of errors, i.e., the ninth decile,
5. CEP95—the radius of a circle encompassing 95% of errors, i.e., the 95th percentile,
6. CEP99—the radius of a circle encompassing 99% of errors, i.e., the 99th percentile,
7. Maximal error—the maximum value of the error recorded during a trial, denoted by max(r_e).

Table 1. Results of the trial at the University Campus. Seven setups of data sources were simulated. A '+' in a column denotes that the given data source was included in the simulation; otherwise the data source was not used. Setup number 4 was carried out with all data sources on; the results for this setup are the best.

Setup                   1      2      3      4      5      6      7
GPS                     +      +      +      +
Inertial sensors               +      +      +      +      +      +
Probability map                       +      +      +      +
3D matching algorithm                        +      +
RMS [m]                 5.23   4.96   2.10   1.26   1.65   4.39   107.5
μ [m]                   3.27   3.45   1.56   0.87   0.95   2.94   86.4
CEP50 [m]               2.19   2.47   1.24   0.57   0.55   1.89   76.4
CEP90 [m]               6.54   6.79   3.39   2.03   2.13   6.92   182
CEP95 [m]               10.7   12.0   4.28   2.90   3.19   9.09   210
CEP99 [m]               23.3   18.2   6.41   4.50   7.02   16.0   261
max(r_e) [m]            30.6   20.6   8.64   5.24   12.8   25.5   265
Figure                  16     17     18     19     20     21     22

Figure 16. (a) Red dashed line—reference path. Blue solid line—path estimated with the help of a GPS receiver. In the presence of buildings, the GPS readouts are inaccurate by as much as 30 m. The errors also depend on the configuration of the satellites relative to a building; this can be noticed at the coordinates (220 m, 80 m)—the same place was revisited after a couple of minutes and the positioning error was very different. (b) Histograms of distance errors.

Figure 17. (a) Red dashed line—reference path. Blue solid line—path estimated with the help of the GPS receiver and inertial sensors. The readouts from the inertial sensors enable the elimination of big errors. The accuracy improvement is not significant, due to errors in estimating the direction. (b) Histograms of distance errors.

Figure 18. (a) Red dashed line—reference path. Blue solid line—path estimated with the help of the GPS receiver, inertial sensors and probability map. The probability map eliminates direction errors by pruning particles that diverge from the reference direction.
Direction errors are therefore eliminated instantly and do not propagate; the positioning errors then follow a bounded random walk. (b) Histograms of distance errors.

Figure 19. (a) Red dashed line—reference path. Blue solid line—path estimated with the help of the GPS receiver, inertial sensors, probability map, stereo camera and the 3D model of the environment. The positioning is quite precise; the errors are mainly introduced by errors in estimating the heading direction. (b) Histograms of distance errors.

Figure 20. (a) Red dashed line—reference path. Blue solid line—path estimated with the help of the inertial sensors, probability map, stereo camera and the 3D model of the environment. When the initial position is known, the filter can estimate the pedestrian's location with smaller errors than the GPS receiver. (b) Histograms of distance errors.

Figure 21. (a) Red dashed line—reference path. Blue solid line—path estimated with the help of the GPS receiver, inertial sensors, probability map, stereo camera and the 3D model of the environment. An error in estimating the direction at (400 m, 120 m) introduces big errors later on; the filter managed to recover at (240 m, 80 m). (b) Histograms of distance errors.

Figure 22. (a) Red dashed line—reference path. Blue solid line—path estimated with the help of the inertial sensors alone, i.e., the gyroscope for estimating direction changes and the accelerometers for counting steps and estimating their lengths. As time goes on, the errors accumulate due to the angular random walk. (b) Histograms of distance errors.

6. Conclusions

The results were calculated from ca. 1,900 GPS readouts, 3,400 particle filter estimates and 19,000 pictures from the stereo camera. The particle filter using all available data sources provides by far the best accuracy: for 99% of the time it is better than 4.5 m.
This is still too large an error for the safe navigation of visually impaired pedestrians. The main errors were introduced by a misestimated direction, which is visible in Figure 22 at coordinates (410 m, 125 m). The turn is misestimated by 20°. This error is too large to be explained by the non-linearity error mentioned in Section 3.1.1 or any other gyroscope error; possibly the sensor module came adrift and did not follow the change of direction. Such an error, when not corrected immediately, has a ripple effect. It can, however, be taken care of, and more accurate sensors are available on the market. It is surmised that an error of 3 m for 99% of the time is achievable.

It is interesting to note that when the initial position is known, the location of a pedestrian can be estimated without using a GPS receiver at all. Setup 5 (see Table 1) gives the best results after setup 4 and is better than sole GPS readouts. Needless to say, this approach can only be used in limited areas, e.g., inside buildings where GPS readouts are not available.

The probability map concept limits the positioning error to the width of a street, ca. 10 m. Owing to the constraints it imposes on the pedestrian kinematics, the root-mean-square error dropped threefold, which is the largest improvement among all the additional data sources. As mentioned before, on the basis of the GPS readouts and the probability map, a user's location can be estimated to within ca. 10 m. A stereo camera can refine the user's position down to 2 m, provided there are no unexpected objects, or difficult objects like trees, whose crown radius can differ from the assumed 2 m.

The GPS readouts complement nicely the algorithm that compares the depth images derived from the terrain model with those derived from stereoscopy. In the presence of high buildings, the GPS readouts are compromised; under these circumstances the image matching algorithm performs well, as the walls of buildings are confidently detectable by the stereo camera.
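The summary error measures used above can be reproduced from a list of per-fix distance errors. The following sketch is illustrative only (it is not the authors' code): it computes the distance error r of an estimated fix as the distance to the nearest reference path segment, and then the statistics of Table 1 using a simple nearest-rank percentile; all function names are ours.

```python
import math

def point_to_segment_distance(p, a, b):
    """Distance from point p to the segment a-b (2D local coordinates, metres)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:  # degenerate segment: both endpoints coincide
        return math.hypot(px - ax, py - ay)
    # Projection parameter of p onto the segment, clamped to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def location_error(p, path):
    """Distance error r: distance from fix p to the nearest reference path segment."""
    return min(point_to_segment_distance(p, path[i], path[i + 1])
               for i in range(len(path) - 1))

def percentile(sorted_r, q):
    """Nearest-rank percentile of an ascending list, e.g. q=0.5 gives CEP50."""
    k = max(0, math.ceil(q * len(sorted_r)) - 1)
    return sorted_r[k]

def summarize(errors):
    """RMS, mean, CEP50/90/95/99 and maximal error of a list of distance errors."""
    r = sorted(errors)
    n = len(r)
    return {
        "RMS":   math.sqrt(sum(e * e for e in r) / n),
        "mean":  sum(r) / n,
        "CEP50": percentile(r, 0.50),
        "CEP90": percentile(r, 0.90),
        "CEP95": percentile(r, 0.95),
        "CEP99": percentile(r, 0.99),
        "max":   r[-1],
    }
```

The nearest-rank convention makes CEP50 an actual recorded error (a sample median), which matches the interpretation of CEP values as radii encompassing a given fraction of the recorded fixes.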
On the other hand, in an open space the GPS readouts are accurate down to ca. 2–3 m, while the image matching algorithm has no reference points to perform a viable comparison.

The presented algorithm works off-line and is quite computationally demanding: the particle filtering algorithm requires ca. 500 particles for good positioning accuracy. Nevertheless, the task can be handled on-line, without any optimization techniques, by a standard dual-core 2.5 GHz notebook.

Acknowledgements

The project is financed by the National Centre of Research and Development of Poland in the years 2010–2013 under the grant NR02-0083-10.

References and Notes

1. Official U.S. Information Website about the GPS System. Available online: http://www.gps.gov (accessed on 14 February 2012).
2. Modsching, M.; Kramer, R.; Hagen, K. Field Trial on GPS Accuracy in a Medium Size City: The Influence of Built-up. In Proceedings of the 3rd Workshop on Positioning, Navigation and Communication, Hannover, Germany, 16 March 2006; pp. 209-121.
3. Baranski, P.; Strumillo, P. Fusion of Data from Inertial Sensors, Raster Maps and GPS for Estimation of Pedestrian Geographic Location in Urban Terrain. Metrol. Meas. Syst. 2011, 1, 145-158.
4. Ong, R.B.; Petovello, M.G.; Lachapelle, G. Assessment of GPS/GLONASS RTK under Various Operational Conditions. In Proceedings of the 22nd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS 2009), Savannah, GA, USA, 22–25 September 2009; pp. 3297-3308.
5. Trajkovski, K.K.; Sterle, O.; Stopar, B. Sturdy Positioning with High Sensitivity GPS Sensors under Adverse Conditions. Sensors 2010, 10, 8332-8347.
6. van Diggelen, F. Generating Assisted Data. In A-GPS: Assisted GPS, GNSS, and SBAS; Artech House: Boston, MA, USA, 2009.
7. Krakiwsky, E.; Harris, C.; Wong, R. A Kalman Filter for Integrating Dead Reckoning, Map Matching and GPS Positioning.
In Proceedings of the Position Location and Navigation Symposium, Navigation into the 21st Century, Orlando, FL, USA, 29 November–2 December 1988; pp. 39-46.
8. Taghipour, S.; Meybodi, M.; Taghipour, A. An Algorithm for Map Matching for Car Navigation System. In Proceedings of the 3rd International Conference on Information and Communication Technologies: From Theory to Applications, Damascus, Syria, 7–11 April 2008; pp. 1-5.
9. Mezentsev, O.; Lachapelle, G.; Collin, J. Pedestrian Dead Reckoning—A Solution to Navigation in GPS Signal Degraded Areas? Geomatica 2005, 59, 145-182.
10. Fang, L.; Antsaklis, P.J.; et al. Design of a Wireless Assisted Pedestrian Dead Reckoning System—The NavMote Experience. IEEE Trans. Instrum. Meas. 2005, 54, 2342-2359.
11. van der Spek, S.; van Schaick, J.; de Bois, P.; de Haan, R. Sensing Human Activity: GPS Tracking. Sensors 2009, 9, 3033-3055.
12. Woodman, O. An Introduction to Inertial Navigation; Technical Report 696; University of Cambridge: Cambridge, UK, 2007. Available online: http://www.cl.cam.ac.uk/techreports/ (accessed on 1 February 2012).
13. Jirawimut, R.; Ptasinski, P.; Garaj, V. A Method for Dead Reckoning Parameter Correction in Pedestrian Navigation System. IEEE Trans. Instrum. Meas. 2003, 52, 209-215.
14. Godha, S.; Lachapelle, G.; Cannon, M. Integrated GPS/INS System for Pedestrian Navigation in a Signal Degraded Environment. In Proceedings of the 19th International Technical Meeting (ION GNSS 2006), Fort Worth, TX, USA, 26–29 September 2006; pp. 2151-2164.
15. Ladetto, Q.; Merminod, B. An Alternative Approach to Vision Techniques: Pedestrian Navigation System Based on Digital Magnetic Compass and Gyroscope Integration. In Proceedings of the 6th World Multi-Conference on Systemics, Cybernetics and Informatics, Orlando, FL, USA, 14–18 July 2002; pp. 145-150.
16. Kong, S.-H. Statistical Analysis of Urban GPS Multipaths and Pseudo-Range Measurement Errors. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 1101-1113.
17. Bancroft, J.B.; Lachapelle, G. Data Fusion Algorithms for Multiple Inertial Measurement Units. Sensors 2011, 11, 6771-6798.
18. Jirawimut, R.; Shah, M.A.; Ptasinski, P.; Cecelja, F.; Balachandran, W. Integrated DGPS and Dead Reckoning for a Pedestrian Navigation System in Signal Blocked Environments. In Proceedings of the 13th International Technical Meeting of the Satellite Division of The Institute of Navigation, Salt Lake City, UT, USA, 19–22 September 2000; pp. 1741-1747.
19. Williamson, J.; Strachan, S.; Murray-Smith, R. It's a Long Way to Monte Carlo: Probabilistic Display in GPS Navigation. In Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services, Espoo, Finland, 12–15 September 2006; pp. 89-96.
20. Ceranka, S.; Niedzwiecki, M. Application of Particle Filtering in Navigation System for Blind. In Proceedings of the Seventh International Symposium on Signal Processing and Its Applications, Paris, France, 1–4 July 2003; pp. 495-498.
21. Widyawan; Klepal, M.; Beauregard, S. A Backtracking Particle Filter for Fusing Building Plans with PDR Displacement Estimates. In Proceedings of the 5th Workshop on Positioning, Navigation and Communication, Hannover, Germany, 27 March 2008; pp. 207-212.
22. Hesch, J.A.; Roumeliotis, S.I. An Indoor Localization Aid for the Visually Impaired. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation (ICRA'07), Roma, Italy, 10–14 April 2007; pp. 3545-3551.
23. Girard, G.; Cote, S.; Zlatanova, S.; Barette, Y.; St-Pierre, J.; van Oosterom, P. Indoor Pedestrian Navigation Using Foot-Mounted IMU and Portable Ultrasound Range Sensors. Sensors 2011, 11, 7606-7624.
24. Montemerlo, M.; Thrun, S. FastSLAM: A Scalable Method for the Simultaneous Localization and Mapping Problem in Robotics; Siciliano, B., Khatib, O., Groen, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2007.
25. Kleiner, A.; Sun, D. Decentralized SLAM for Pedestrians without Direct Communication.
In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 1461-1466.
26. Cecelja, F.; Balachandran, W.; Jirawimut, R. A Stereo Vision System for Pedestrian Navigation. In Proceedings of the 17th International Conference on Applied Electromagnetics and Communications, Dubrovnik, Croatia, 1–3 October 2003; pp. 30-33.
27. Burguera, A.; González, Y.; Oliver, G. Sonar Sensor Models and Their Application to Mobile Robot Localization. Sensors 2009, 9, 10217-10243.
28. Brown, M.Z.; Burschka, D.; Hager, G.D. Advances in Computational Stereo. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 993-1008.
29. Szeliski, R.; Scharstein, D. Symmetric Sub-Pixel Stereo Matching. In Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, 27 May–2 June 2002; pp. 525-540.
30. Howe, D.A.; Allan, D.W.; Barnes, J.A. Properties of Signal Sources and Measurement Methods. In Proceedings of the 35th Annual Symposium on Frequency Control, Philadelphia, PA, USA, 27–29 May 1981.
31. IEEE Standard for Inertial Sensor Terminology; IEEE Std 528-2001 (Revision of IEEE Std 528-1994); The Institute of Electrical and Electronics Engineers, Inc.: New York, NY, USA, 2001.
32. International Hydrographic Organization. User's Handbook on Datum Transformations Involving WGS 84. Available online: www.iho.int/iho_pubs/standard/S60_Ed3Eng.pdf (accessed on 10 February 2012).
33. Brown, R.; Hwang, P. Transformation of Random Variables. In Introduction to Random Signals and Applied Kalman Filtering; John Wiley & Sons: New York, NY, USA, 1997; pp. 42-49.
34. Park, S.A.; Suh, Y.S. A Zero Velocity Detection Algorithm Using Inertial Sensors for Pedestrian Navigation Systems. Sensors 2010, 10, 9163-9178.
35. Anderson, B.D.O.; Moore, J.B. Optimal Filtering; Prentice Hall: Englewood Cliffs, NJ, USA, 1979.
36. Rodriguez, J.; Aggarwal, J. Quantization Error in Stereo Imaging.
In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Ann Arbor, MI, USA, 5–9 June 1988; pp. 153-158.
37. Cappé, O.; Godsill, S.J.; Moulines, E. An Overview of Existing Methods and Recent Advances in Sequential Monte Carlo. Proc. IEEE 2007, 95, 899-924.

© 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
Sensors – Multidisciplinary Digital Publishing Institute
Published: May 25, 2012