Earlier this week we performed a set of experiments to test localization functions. Localization simply answers the rover's question "Where am I?". It is essential for goals as simple as driving around a rock or locating a water-ice source.
Every robot needs to know how much it moved and where it is, in some sense. Here on Earth the task is usually easier, as we have great infrastructure providing a precise global position to anyone. Yes, hardly anyone has never used GPS before.
No GPS on the Moon
Unfortunately we don’t have a Global Positioning System on the Moon. At least not yet. Therefore the rover needs to use the information available locally. First, it can estimate how much it moved from how much the wheels actually turned — this is called wheel odometry.
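The idea behind wheel odometry can be sketched in a few lines. This is a minimal illustration assuming a differential-drive model; the wheel diameter and track width below are made-up values, not the real rover's geometry:

```python
import math

def wheel_odometry_step(x, y, heading, left_turns, right_turns,
                        wheel_diameter=0.25, track_width=0.8):
    """Update a 2-D pose estimate from one step of wheel rotations.

    Hypothetical differential-drive sketch: wheel_diameter and
    track_width are illustrative, not real rover parameters.
    """
    circumference = math.pi * wheel_diameter
    d_left = left_turns * circumference     # distance the left wheels rolled
    d_right = right_turns * circumference   # distance the right wheels rolled
    d_center = (d_left + d_right) / 2.0     # forward motion of the body
    d_theta = (d_right - d_left) / track_width  # change in heading
    # Integrate the motion at the average heading over the step.
    x += d_center * math.cos(heading + d_theta / 2.0)
    y += d_center * math.sin(heading + d_theta / 2.0)
    return x, y, heading + d_theta

# Driving straight: each side turns once, the rover advances one circumference.
x, y, th = wheel_odometry_step(0.0, 0.0, 0.0, 1.0, 1.0)
```

If the left and right wheel counts differ, the same formula also recovers how much the rover turned, which is exactly where slip hurts the most.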
But there is a problem: a wheel can slip on steep slopes or in deep soil, which can introduce significant error. That is why we also use vision. We can track visual cues in the camera images and measure how much they moved with respect to the robot. This is actually what you are doing every day just to walk around; in robotics we call that visual odometry.
But it is not so simple. To know distances from images, you either have to guess them from a single camera (monocular visual odometry) or use a pair of cameras looking at the Moon from different positions on the robot (stereo visual odometry). At each step you match the visual cues against the previous step and measure how far they moved.
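The core of the stereo idea is that a feature appears slightly shifted between the two cameras, and that shift (the disparity) encodes depth. A minimal sketch, assuming rectified cameras with made-up focal length and baseline values:

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Depth of a feature from a rectified stereo pair: z = f * B / d.

    focal_length_px: camera focal length, in pixels (assumed value below)
    baseline_m:      distance between the two cameras
    disparity_px:    horizontal shift of the same feature between the images
    """
    if disparity_px <= 0:
        raise ValueError("the feature must appear shifted between the cameras")
    return focal_length_px * baseline_m / disparity_px

# A feature shifted by 20 px, with a 700 px focal length and a 12 cm baseline:
z = stereo_depth(700.0, 0.12, 20.0)  # => 4.2 m
```

Note how nearby features produce large disparities and faraway ones tiny disparities, which is why depth accuracy degrades quickly with distance.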
It is not enough to know how much the rover moved; we also need to know in which direction. Therefore it is necessary to measure the attitude of the rover. This is done using two types of sensors: accelerometers and gyros. Gyros measure angular rates and are based on various physical principles (let's not go into that here). Accelerometers measure the acceleration of the robot; they can sense the gravity vector (pointing toward the center of the Moon), i.e. precise information about the roll and pitch of the rover. But the measurements are disturbed by the movement of the robot, because it accelerates (not so much) and vibrates (a lot). Thus it is necessary to cleverly fuse both sources of information.
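Recovering roll and pitch from a gravity reading is a short trigonometry exercise. This sketch assumes the rover is momentarily static (so the accelerometer sees gravity alone) and uses one common axis convention (x forward, y left, z up), which is an assumption, not the rover's actual frame:

```python
import math

G = 9.81  # using Earth gravity here; on the Moon the magnitude differs,
          # but the angles below depend only on the direction of the vector

def roll_pitch_from_accel(ax, ay, az):
    """Estimate roll and pitch (radians) from a static accelerometer reading.

    Only valid when the robot is not accelerating or vibrating;
    axis convention (x forward, y left, z up) is assumed.
    """
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Level ground: gravity reads entirely on the z axis.
r0, p0 = roll_pitch_from_accel(0.0, 0.0, G)
# Rolled 30 degrees: part of gravity spills onto the y axis.
r1, p1 = roll_pitch_from_accel(0.0, G * math.sin(math.radians(30)),
                               G * math.cos(math.radians(30)))
```

In motion these angles get corrupted, which is exactly why the gyro rates and accelerometer angles have to be fused rather than used alone.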
In fact, this is exactly the set of sensors in your smartphone bundled in an Inertial Measurement Unit (IMU).
Roll and pitch are very important information for the driver, because it is impossible to see from the images how much the rover is inclined and whether it is in a dangerous position.
Since there are many sensors and sources of position information, we need to fuse the data. The field of control systems provides a very useful algorithm for this: the Kalman filter. It is essential for many tasks besides localization. Basically, it runs a model of the dynamics of the robot, and based on that model it predicts the next state. Then, using the measurements, the algorithm corrects the prediction. This way it also maintains an estimate of the uncertainty of the position.
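The predict/correct cycle is easiest to see in one dimension. This is a toy illustration of the idea, not the rover's actual filter; all the numbers are made up:

```python
def kalman_1d(estimate, variance, prediction_delta, process_var,
              measurement, measurement_var):
    """One predict/correct cycle of a 1-D Kalman filter.

    Toy sketch: the state is a single position, the model is
    "we moved by prediction_delta since the last step".
    """
    # Predict: move the estimate per the model, and grow the uncertainty.
    estimate += prediction_delta
    variance += process_var
    # Correct: blend in the measurement, weighted by relative uncertainty.
    gain = variance / (variance + measurement_var)
    estimate += gain * (measurement - estimate)
    variance *= (1.0 - gain)
    return estimate, variance

# Model says we advanced 1 m; a noisy sensor says we are at 1.2 m.
est, var = kalman_1d(0.0, 1.0, 1.0, 0.1, 1.2, 0.5)  # => (1.1375, 0.34375)
```

The gain is the key: when the prediction is uncertain the filter trusts the measurement more, and vice versa, so after the correction the variance always shrinks.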
The ground truth
In fact, if you look at the images of the rover, we do have GPS. It is not that we are cheating: we need to evaluate the measured data against the real position of the rover (the ground truth), which is provided by GPS RTK. It is a slightly fancier version of the GPS you are using every day. It uses two antennas, one static with a known position and one mounted on the rover, to compensate for the errors introduced into the satellite signal as it travels through the atmosphere.
So, what did we actually want to check with these tests?
There were many questions to answer in the field test:
- Does the rover slip a lot on the terrain?
- Are there enough visual cues to provide good visual odometry?
- Are the shadows cast by the rover's lights a problem for the visual odometry?
- How fast do we have to process the images to get good localization?
- Are the inertial sensors precise enough?
- Is the low angle of the sunlight a problem for the cameras?
The tests we performed should give us good answers to these questions.