Navigation

Mapping of the rover’s environment and simultaneous localization are provided by the ROS navigation stack, which is well integrated with the commercial Kinect sensor. The Kinect is widely used in robotics projects and generates point cloud data of the surroundings. The navigation nodes use this data to calculate the position of the sensor within the environment by correlating geometric features in the map with those in the point cloud. While the map generated from the point cloud is three-dimensional, the path planning algorithm provided by ROS requires a 2D pixelmap (figure 11). The 3D data is therefore converted into the required format by marking regions whose slope is too steep for SEAR to handle as obstacles. The path planning algorithm first searches for a route around the obstacles without taking details into account; the result is called the global path. In a second step, the local planner handles details such as steering and turning radius, drawing the necessary parameters from a detailed model of the rover.
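The global planning step described above can be illustrated with a minimal A* search over a toy occupancy grid. This is a generic sketch, not the ROS planner itself; the `astar` function, the 0/100 cell values, and the 4-connected neighbourhood are assumptions chosen for illustration.

```python
import heapq

def astar(grid, start, goal):
    """Find a shortest path on a 2D occupancy grid (0 = free, 100 = obstacle)
    using A* with 4-connectivity and a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]      # priority queue ordered by f = g + h
    g_cost = {start: 0}                 # best known cost-to-reach per cell
    came_from = {}                      # parent pointers for path reconstruction
    closed = set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            # Walk the parent pointers back to the start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cur in closed:
            continue
        closed.add(cur)
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != 100:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = cur
                    heapq.heappush(open_set, (ng + h(nb), nb))
    return None  # no path exists
```

On a grid with a wall between start and goal, the returned global path detours around the obstacle; a local planner would then smooth this cell sequence into a drivable trajectory respecting the rover's turning radius.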

2D pixelmap of the global cost map calculated from the point cloud data of the Kinect sensor (Source: TU Berlin)

Test set-up for detection of acclivities and point cloud results (Source: TU Berlin)
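The acclivity test above checks the same property that the map conversion relies on: cells whose slope is too steep become obstacles. A minimal sketch of that conversion, assuming the 3D data has already been rasterized into a heightmap (the `heightmap_to_costmap` helper and its parameters are illustrative, not part of SEAR's actual pipeline):

```python
import math

def heightmap_to_costmap(heights, cell_size, max_slope_deg):
    """Convert a rasterized heightmap (list of rows, heights in metres,
    cells cell_size metres apart) into a 2D occupancy grid:
    100 where the local slope exceeds max_slope_deg, else 0."""
    rows, cols = len(heights), len(heights[0])
    grid = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Central differences where possible, one-sided at the borders
            c0, c1 = max(c - 1, 0), min(c + 1, cols - 1)
            r0, r1 = max(r - 1, 0), min(r + 1, rows - 1)
            dzdx = (heights[r][c1] - heights[r][c0]) / (cell_size * ((c1 - c0) or 1))
            dzdy = (heights[r1][c] - heights[r0][c]) / (cell_size * ((r1 - r0) or 1))
            slope = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
            if slope > max_slope_deg:
                grid[r][c] = 100
    return grid
```

A step in the heightmap, like the ramp in the test set-up, then shows up in the 2D map as a band of obstacle cells that the global planner routes around.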
