SLAM Techniques and Algorithms
Defence Research and Development Canada
Goals
What will we learn?
- Gain an appreciation for what SLAM is and can accomplish
- Understand the underlying theory behind SLAM
- Understand the terminology and fundamental building blocks of SLAM algorithms
- Appreciate the deficiencies of SLAM and SLAM algorithms
We won't focus on the math, but on the concepts: online, non-linear, feature-based SLAM with unknown data association.
Outline
- What is SLAM?
- Probabilistic basis of SLAM
- EKF SLAM
- Data Association
- Closing the Loop
- FastSLAM
- Sub-Mapping SLAM
- Resources
- Questions?
What is SLAM?
Simultaneous Localization and Mapping. Given an unknown environment and vehicle pose:
- Move through the environment
- Estimate the robot pose
- Generate a map of environmental features
Use the robot pose estimate to improve the map landmark position estimates, and use the landmark estimates to improve the robot pose estimate.
Robot motion models aren't accurate: wheel odometry error is cumulative, and IMU sensors drift. Maps are generated from sensors on the vehicle, so if we can't accurately predict the robot pose, how can we produce an accurate map? GPS can help, but it is not always available or reliable.
Probability theory gives us a framework for dealing with these sources of error. The online SLAM posterior is
$p(V_{k+1}, M_{k+1} \mid z_k, u_k)$
i.e., estimate the joint probability of $V_{k+1}$ and $M_{k+1}$ conditioned on $z_k$ and $u_k$ (a joint conditional PDF).
Conditional probability: $p(x \mid y) = \dfrac{p(x, y)}{p(y)}$
Product rule: $p(x, y) = p(x \mid y)\, p(y)$
Bayes rule: $p(x \mid y) = \dfrac{p(y \mid x)\, p(x)}{p(y)}$, or $p(x \mid y) \propto p(y \mid x)\, p(x)$
Gaussian PDF: $p(x) = \det(2\pi\Sigma)^{-\frac{1}{2}} \exp\!\left(-\tfrac{1}{2}(x - \bar{x})^{T}\Sigma^{-1}(x - \bar{x})\right)$
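As a quick numeric check of the Gaussian PDF above, here is a minimal sketch (NumPy assumed; the example numbers are purely illustrative):

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Evaluate det(2*pi*Sigma)^(-1/2) * exp(-0.5*(x-mean)^T Sigma^-1 (x-mean))."""
    diff = x - mean
    norm = np.linalg.det(2.0 * np.pi * cov) ** -0.5
    return norm * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff))

# Example: a 2-D position error with 0.1 m standard deviation per axis
x = np.array([0.05, -0.02])
mean = np.zeros(2)
cov = np.diag([0.1**2, 0.1**2])
print(gaussian_pdf(x, mean, cov))
```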
We want to estimate the posterior $p(V_{k+1}, M_{k+1} \mid z_{k+1}, u_{k+1})$:
$p(V_{k+1}, M_{k+1} \mid z_{k+1}, u_{k+1}) = \eta\, p(z_{k+1} \mid V_{k+1}, M_{k+1})\, p(V_{k+1}, M_{k+1} \mid z_k, u_{k+1})$ (Bayes filter; $\eta$ is a normalizing constant)
$= \eta\, p(z_{k+1} \mid V_{k+1}, M_{k+1}) \int p(V_{k+1}, M_{k+1} \mid V_k, z_k, u_{k+1})\, p(V_k \mid z_k, u_{k+1})\, dV_k$
$p(V_{k+1}, M_{k+1} \mid z_{k+1}, u_{k+1}) = \eta\, p(z_{k+1} \mid V_{k+1}, M_{k+1})\, p(V_{k+1}, M_{k+1} \mid z_k, u_{k+1})$ (Bayes filter)
$= \eta\, p(z_{k+1} \mid V_{k+1}, M_{k+1}) \int p(V_{k+1}, M_{k+1} \mid V_k, z_k, u_{k+1})\, p(V_k \mid z_k, u_{k+1})\, dV_k$
$= \eta\, p(z_{k+1} \mid V_{k+1}, M_{k+1}) \int p(V_{k+1} \mid V_k, u_k)\, p(V_k, M_k \mid z_k, u_k)\, dV_k$
There is no closed-form solution, so we approximate it using an Extended Kalman Filter.
Prior probability: $p(V_k, M_k \mid z_k, u_k)$ in the expansion above is the prior state and covariance estimate from the last filter iteration.
Motion model: $p(V_{k+1} \mid V_k, u_k)$ is the probabilistic motion model, which estimates the new vehicle pose and covariance from the prior estimate and the control.
Measurement model: $p(z_{k+1} \mid V_{k+1}, M_{k+1})$ is the measurement model, which gives the expected value of the feature observations.
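To make the recursion concrete, here is a minimal, hypothetical sketch of a discrete Bayes filter over a finite set of poses; the motion_model and measurement_model callables are illustrative placeholders, not something defined in these slides:

```python
import numpy as np

def bayes_filter_step(prior, u, z, motion_model, measurement_model):
    """One predict/update cycle of the discrete Bayes filter.

    prior[j]                -- belief that the vehicle is in state j at time k
    motion_model(i, j, u)   -- p(x_{k+1}=i | x_k=j, u)
    measurement_model(z, i) -- p(z | x_{k+1}=i)
    """
    n = len(prior)
    # Prediction: integrate (sum) the motion model over the prior belief
    predicted = np.array([
        sum(motion_model(i, j, u) * prior[j] for j in range(n))
        for i in range(n)
    ])
    # Update: weight by the measurement likelihood and normalize
    posterior = np.array([measurement_model(z, i) for i in range(n)]) * predicted
    return posterior / posterior.sum()
```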
EKF SLAM
Simultaneous Localization and Mapping:
- Jointly estimate both the robot pose and the positions of unique landmarks
- Landmark estimates improve robot pose estimates and vice versa
- Use standard Extended Kalman Filtering techniques
- Assumes Gaussian noise and error
$M_k = \begin{bmatrix} x_1^m \\ y_1^m \\ \vdots \\ x_n^m \\ y_n^m \end{bmatrix}, \quad V_k = \begin{bmatrix} x_k^r \\ y_k^r \\ \theta_k^r \end{bmatrix}$
$X_k = \begin{bmatrix} V_k \\ M_k \end{bmatrix}, \quad P_k = \begin{bmatrix} P_v & P_{vm} \\ P_{vm}^T & P_m \end{bmatrix} \qquad (1)$
Prediction - A prediction of the new state vector and covariance matrix is calculated from the previous state and covariance, and the new control $u_k$.
Data Association - Find matches between the current landmarks $M_k$ and the new set of observed features $z_k$.
Measurement Update - Calculate the Kalman gain for each observed landmark and update the state and covariance values based on that Kalman gain and the measurement innovation.
Augmentation - Integrate newly observed landmarks into the state vector and covariance matrix.
Prediction equations for the vehicle part of the state:
$\hat{V}_{k+1} = f(V_k, u_k)$
$\hat{P}_{k+1} = G\, P_k\, G^T + R\, Q\, R^T$
$G$ linearizes the state transition function $f$; $R$ maps the additional motion noise (with covariance $Q$) into the state space.
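As an illustration of this prediction step, here is a minimal sketch for a planar vehicle with a simple velocity motion model; the state layout, the motion model, and the noise covariance Q are assumptions for the example, not taken from the slides:

```python
import numpy as np

def ekf_predict(X, P, u, dt, Q):
    """EKF SLAM prediction: only the vehicle block of the state changes.

    X -- full state [x, y, theta, landmarks...]
    P -- full covariance
    u -- control (v, w): linear and angular velocity
    Q -- 2x2 control noise covariance
    """
    x, y, th = X[0], X[1], X[2]
    v, w = u
    n = len(X)

    # Motion model f(V_k, u_k): simple unicycle integration
    X_pred = X.copy()
    X_pred[0] = x + v * dt * np.cos(th)
    X_pred[1] = y + v * dt * np.sin(th)
    X_pred[2] = th + w * dt

    # G: Jacobian of f w.r.t. the full state (identity except the vehicle block)
    G = np.eye(n)
    G[0, 2] = -v * dt * np.sin(th)
    G[1, 2] = v * dt * np.cos(th)

    # R: maps control noise into the state space
    R = np.zeros((n, 2))
    R[0, 0] = dt * np.cos(th)
    R[1, 0] = dt * np.sin(th)
    R[2, 1] = dt

    P_pred = G @ P @ G.T + R @ Q @ R.T
    return X_pred, P_pred
```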
What do we want in a feature extraction algorithm?
- Stable features
- Outlier rejection
- Accuracy
- Speed
Extract features from raw laser data with an incremental algorithm (line tracking):
- Fit a line to a set of points
- Compute the residual
- If below an error threshold, add another point and repeat
- If above the threshold, the last point added is a feature; start a new line there
A minimal sketch of this loop is shown below.
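A minimal, hypothetical sketch of the incremental line-tracking idea (least-squares line fit plus a residual threshold; the threshold value and point format are assumptions):

```python
import numpy as np

def line_residual(points):
    """RMS distance of points to their best-fit line (total least squares)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Smallest singular value measures scatter perpendicular to the fitted line
    return np.linalg.svd(centered, compute_uv=False)[-1] / np.sqrt(len(pts))

def track_lines(scan_points, threshold=0.05):
    """Split an ordered laser scan into line segments; segment endpoints act as features."""
    breakpoints = []
    segment = list(scan_points[:2])
    for p in scan_points[2:]:
        segment.append(p)
        if line_residual(segment) > threshold:
            # The previous point ends the current line; start a new segment there
            breakpoints.append(segment[-2])
            segment = [segment[-2], p]
    return breakpoints
```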
Measurement Model:
$z_j^m = \begin{bmatrix} r_j^m \\ \theta_j^m \end{bmatrix} = \begin{bmatrix} \sqrt{x^2 + y^2} \\ \operatorname{atan2}(y, x) \end{bmatrix}$
The Mahalanobis distance normalizes the feature/landmark distance based on their covariances:
$d_{ij} = (z_i - z_j^m)^T \left(H_j P_{k+1} H_j^T + R\right)^{-1} (z_i - z_j^m)$
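A minimal sketch of computing this gate distance for one feature/landmark pair (H, P, and R are assumed to already be available from the EKF; the names are illustrative):

```python
import numpy as np

def mahalanobis_distance(z_obs, z_pred, H, P, R):
    """Squared Mahalanobis distance between an observed feature and a predicted landmark measurement."""
    innovation = z_obs - z_pred
    # Normalize the bearing component to (-pi, pi] so angle wrap-around is not penalized
    innovation[1] = (innovation[1] + np.pi) % (2.0 * np.pi) - np.pi
    S = H @ P @ H.T + R          # innovation covariance
    return float(innovation @ np.linalg.solve(S, innovation))
```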
Points that lie on the same covariance ellipse are equidistant in the Mahalanobis sense.
Innovation vector: the larger the innovation, the bigger the correction.
Kalman gain: weights the innovation vector.
- Smaller measurement error means higher gain (trust the measurement)
- Smaller prediction covariance means smaller gain (trust the prediction)
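For one associated feature/landmark pair, the corresponding EKF update can be sketched as follows (H is the measurement Jacobian for that landmark and R the measurement noise covariance; this is an illustrative sketch, not the exact implementation behind these slides):

```python
import numpy as np

def ekf_update(X, P, z_obs, z_pred, H, R):
    """Standard EKF measurement update for one landmark observation."""
    innovation = z_obs - z_pred
    innovation[1] = (innovation[1] + np.pi) % (2.0 * np.pi) - np.pi  # wrap bearing

    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    X_new = X + K @ innovation
    P_new = (np.eye(len(X)) - K @ H) @ P
    return X_new, P_new
```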
$X_a = \begin{bmatrix} X \\ x_i^m \\ y_i^m \end{bmatrix}$
$P = \begin{bmatrix} P_v & P_{vm} \\ P_{vm}^T & P_m \end{bmatrix}$
$P_a = \begin{bmatrix} P_v & P_{vm} & P_v F_i^T \\ P_{vm}^T & P_m & P_{vm}^T F_i^T \\ F_i P_v & F_i P_{vm} & F_i P_v F_i^T + M_i R M_i^T \end{bmatrix}$
$F_i$ and $M_i$ are the Jacobians which linearize the new landmark equations with respect to the vehicle pose and measurement variables respectively.
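A minimal sketch of this augmentation for a (range, bearing) measurement; the landmark initialization function and its Jacobians follow the standard form, and the assumption that the vehicle pose occupies the first three state entries is mine:

```python
import numpy as np

def augment(X, P, z, R):
    """Add a newly observed (range, bearing) feature to the EKF SLAM state."""
    x, y, th = X[0], X[1], X[2]
    r, b = z
    n = len(X)

    # New landmark position in the world frame
    new_lm = np.array([x + r * np.cos(th + b), y + r * np.sin(th + b)])
    X_aug = np.concatenate([X, new_lm])

    # F: Jacobian of the landmark initialization w.r.t. the full state (only the
    # vehicle block is non-zero); M: Jacobian w.r.t. the measurement
    F = np.zeros((2, n))
    F[:, :3] = [[1, 0, -r * np.sin(th + b)],
                [0, 1,  r * np.cos(th + b)]]
    M = np.array([[np.cos(th + b), -r * np.sin(th + b)],
                  [np.sin(th + b),  r * np.cos(th + b)]])

    P_aug = np.zeros((n + 2, n + 2))
    P_aug[:n, :n] = P
    P_aug[n:, :n] = F @ P
    P_aug[:n, n:] = (F @ P).T
    P_aug[n:, n:] = F @ P @ F.T + M @ R @ M.T
    return X_aug, P_aug
```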
Can use a provisional landmark list to reject spurious features
Pruning (removing old/irrelevant landmarks)
Landmark signatures (reflectivity/color/shape/etc.)
Landmark minimum distance
Intelligent update (only update relevant landmarks)
Sub-mapping (more on this later)
Straightforward application of the EKF
Large body of research to pull from
Works reasonably well for a small number of features and distinct landmarks
Complexity is quadratic in the number of features
No guarantee of convergence in the non-linear case
Makes hard decisions about data associations
Can't correct for erroneous data associations
Needs sufficiently distinct landmarks (low clutter)
Gaussian assumption
Unscented KF SLAM
Linearization using sigma points
Better preserves the true mean and covariance of the posterior
Data Association
Erroneous data association WILL cause SLAM to fail!
Failure modes:
- One feature matched to two landmarks
- Two features matched to one landmark
- Clutter
- Symmetry
Reduce the maximum Mahalanobis distance?
Data Association
How do we determine the correspondence between a feature and a landmark?
- Individual Compatibility
- Joint Compatibility Branch and Bound
- Combined Constrained Data Association
- Randomized Joint Compatibility
- Multi-hypothesis Data Association
Individual Compatibility:
- Calculate the Mahalanobis distance from each feature to each landmark
- Choose the feature with the smallest distance to the landmark within a threshold (validation gate)
- Disregard ambiguous associations
Advantage: simple. Disadvantage: can be error prone, especially in clutter. A sketch of this nearest-neighbour gating appears below.
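A minimal, hypothetical sketch of that nearest-neighbour gating, reusing the mahalanobis_distance helper sketched earlier (the chi-square gate value is an assumption):

```python
def associate(features, landmarks, P, R, gate=9.21):
    """Greedy individual-compatibility association with a validation gate.

    features  -- list of observed measurements z_i
    landmarks -- list of (z_pred_j, H_j) pairs predicted from the current map
    gate      -- chi-square validation gate (9.21 is roughly the 99% bound for 2 DOF)
    Returns a dict mapping feature index -> landmark index.
    """
    matches = {}
    for i, z_obs in enumerate(features):
        scored = sorted(
            (mahalanobis_distance(z_obs, z_pred, H, P, R), j)
            for j, (z_pred, H) in enumerate(landmarks)
        )
        if not scored:
            continue
        best_d, best_j = scored[0]
        ambiguous = len(scored) > 1 and scored[1][0] < gate
        # Accept only unambiguous matches inside the validation gate
        if best_d < gate and not ambiguous:
            matches[i] = best_j
    return matches
```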
JCBB with RANSAC
- Uses relative constraints
- Randomly select m of the n feature measurements
- Advantage: reduced complexity, no pose estimate necessary
- Disadvantage: don't always know Pfail and Pgood
FastSLAM
Represent probability distributions by a set of particles
Rao-Blackwellized particle filter (low-dimensional EKFs)
Each particle maintains its own pose and map estimates (multi-hypothesis)
Each landmark has its own EKF
With N landmarks and M particles we have M x N + 1 filters
FastSLAM Algorithm
Do M times:
- Retrieve the pose from particle k
- Predict a new pose (motion model)
- Data association
- Measurement update
- Importance weight
End
Re-sample with replacement using the importance weights. A sketch of this loop is given below.
(Figure: raw odometry vs. FastSLAM trajectory comparison)
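A minimal structural sketch of one FastSLAM iteration; the Particle layout and the sample_motion, associate_and_update, and importance_weight helpers are illustrative placeholders:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Particle:
    pose: np.ndarray                                # (x, y, theta) hypothesis
    landmarks: list = field(default_factory=list)   # one small EKF (mean, cov) per landmark
    weight: float = 1.0

def fastslam_step(particles, u, z, sample_motion, associate_and_update, importance_weight):
    """One predict / update / resample cycle over all M particles."""
    for p in particles:
        p.pose = sample_motion(p.pose, u)       # sample from the motion model
        associate_and_update(p, z)              # per-landmark 2x2 EKF updates
        p.weight = importance_weight(p, z)      # how well this particle explains z

    # Re-sample with replacement in proportion to the importance weights
    # (a full implementation would deep-copy the selected particles)
    weights = np.array([p.weight for p in particles])
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return [particles[i] for i in idx]
```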
FastSLAM Advantages
Time logarithmic in the number of landmarks
Multi-hypothesis data association (robustness)
No linearization of non-linear motion models
Solves both the full SLAM and online SLAM problems
FastSLAM Disadvantages
Throws away correlations
Overly optimistic pose estimate
Harder to close the loop
Sub-Mapping SLAM
Partition SLAM into sub-maps
Optimally fuse at a global level periodically
Reduces EKF linearization errors
Reduces computational complexity
Closing-the-loop techniques for sub-maps
ATLAS
Hybrid-SLAM
Decoupled Stochastic Mapping
Hierarchical SLAM
Conditionally Independent SLAM
Divide and Conquer (D&C) SLAM
(Figure: EKF SLAM vs. D&C SLAM)
Other Topics
Visual SLAM
SIFT, Harris corners, SURF, etc. for features
No range to features
Inverse depth
Multi-robot SLAM
Team map building
Information sharing/cooperation
Resources
Probabilistic Robotics (Thrun, Burgard, and Fox)
Questions?