Planning in 2D Open Configuration Spaces with Limited Sensor Data
Language: Python

In search and rescue robotics, onboard sensors can consume significant power from a limited energy source. While ideally a sensor would run at all times to provide a constant, complete stream of information, doing so is often impractical. In this project, we explored, in simulation, a planner for a robot whose sensor is not always active.
​
The objective is to rescue all the targets in an unknown map; we know neither their locations nor how many there are. We must therefore build a map using a limited number of sensor calls, until we are sufficiently certain that no targets remain anywhere else in the map.
We assume a floorplan of the building from before the disaster is known. From this prior building knowledge, we extract regions of interest (ROIs), which serve as educated guesses about where survivors are more likely to have been located when the disaster hit. We also assume that the size of the map is known.
​
The environment map is stored as two matrices. The first records whether each grid cell is presumed to be an obstacle, free space, or a target; the second records the uncertainty of the corresponding assignment in the first.
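As an illustration, the two-matrix representation might look like the following minimal sketch. The cell labels and the 0-to-1 uncertainty scale are our assumptions here, not the project's actual encoding:

```python
# Hypothetical cell labels; the project's actual encoding is not specified.
FREE, OBSTACLE, TARGET = 0, 1, 2
ROWS, COLS = 20, 20

# Matrix 1: presumed contents of each grid cell (all free space initially).
cell_type = [[FREE] * COLS for _ in range(ROWS)]

# Matrix 2: uncertainty of each assignment; 1.0 = unknown, 0.0 = observed.
uncertainty = [[1.0] * COLS for _ in range(ROWS)]

def record_observation(r, c, observed_type):
    """Store a sensor reading for cell (r, c) and mark it as certain."""
    cell_type[r][c] = observed_type
    uncertainty[r][c] = 0.0

record_observation(3, 4, TARGET)
```

A sensor call would invoke `record_observation` for every cell in the sensor's footprint, driving those cells' uncertainty to zero.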
​
We start the planner by solving a Traveling Salesman Problem (TSP) over the ROIs to find the most efficient order in which to visit them. We then use a local planner to route the robot to each ROI in turn. Within each ROI, the robot uses frontier-based mapping to drive the uncertainty of all cells below a chosen threshold. The uncertainty of the grid cells surrounding the robot is also what triggers a sensor call. If a target is found within an ROI, the robot uses the local planner to route to the target.
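For a handful of ROIs, the TSP ordering can be solved exactly by brute force. The sketch below shows one way this step could look, assuming Manhattan distance between ROI centers as the cost; the actual project would route between ROIs with its local planner:

```python
from itertools import permutations

def manhattan(a, b):
    """Grid (Manhattan) distance between two cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def tsp_order(start, rois):
    """Return the ROI visiting order with minimum total path length,
    found by brute force (feasible only for a small number of ROIs)."""
    best_order, best_cost = None, float("inf")
    for order in permutations(rois):
        cost, prev = 0, start
        for roi in order:
            cost += manhattan(prev, roi)
            prev = roi
        if cost < best_cost:
            best_order, best_cost = order, cost
    return list(best_order)

print(tsp_order((0, 0), [(5, 5), (1, 0), (3, 4)]))
# → [(1, 0), (3, 4), (5, 5)]
```

For larger ROI counts, a nearest-neighbor heuristic or an off-the-shelf TSP solver would replace the exhaustive search.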
​
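Frontier-based mapping needs a way to pick the next cells worth sensing. Below is a minimal sketch, assuming a frontier is any already-confident cell bordering a still-uncertain one; the threshold value is illustrative, not the project's:

```python
def frontier_cells(uncertainty, threshold=0.5):
    """Cells whose uncertainty is below the threshold but which border at
    least one still-uncertain cell -- candidates for the next sensor call."""
    rows, cols = len(uncertainty), len(uncertainty[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if uncertainty[r][c] >= threshold:
                continue  # this cell is itself still uncertain
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and uncertainty[nr][nc] >= threshold:
                    frontiers.append((r, c))
                    break
    return frontiers
```

The robot would repeatedly move to a frontier cell and fire the sensor there until no frontiers remain inside the ROI.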
Throughout the planning process, because our constructed map still carries uncertainty, calling A* can produce paths that pass through obstacles that are not yet known. To address this, the local planner we implemented combines the A* algorithm with the Bug 0 algorithm. The robot follows the plan found by A* until it encounters an obstacle: when a step into the obstacle fails, the robot marks that cell as a known obstacle, updating the map that A* searches over. It then uses Bug 0 to take one step around the obstacle before calling A* again. The Bug 0 steps let the robot gain information about the environment without spending a sensor call.
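The interplay between A* on the believed map and on-the-fly obstacle discovery can be sketched as follows. This is a simplified illustration, not the project's implementation: the Bug 0 wall-following step is collapsed into an immediate replan after the discovered obstacle is recorded, rather than an explicit step around it.

```python
import heapq

def astar(grid, start, goal):
    """A* on the robot's *believed* map; cells marked 1 are known obstacles."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came, seen = {}, set()
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in seen:
            continue
        seen.add(cur)
        came[cur] = parent
        if cur == goal:  # reconstruct path by walking parents back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols \
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen:
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # goal unreachable on the believed map

def navigate(true_grid, believed_grid, start, goal):
    """Follow A* plans on the believed map; when a step hits a real obstacle,
    record it in the believed map (no sensor call needed) and replan."""
    pos, trace = start, [start]
    while pos != goal:
        path = astar(believed_grid, pos, goal)
        if path is None:
            return None
        nxt = path[1]
        if true_grid[nxt[0]][nxt[1]] == 1:   # bumped into an unknown obstacle
            believed_grid[nxt[0]][nxt[1]] = 1
            continue                          # replan around the newly mapped cell
        pos = nxt
        trace.append(pos)
    return trace
```

Because each bump adds a real obstacle to the believed map, the believed free space always contains the true free space, so replanning eventually succeeds whenever the goal is actually reachable.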
​
A flow chart of our algorithm can be found below.

Below is an example of the results of our planner. In the ground truth map, the black areas are traversable, the red areas are obstacles, and the green cells are targets.
​
The first image is the ground truth map, the middle image is the final environment map the robot built, and the last image is the path the robot traversed. Despite not traversing every grid cell, the robot's final stored map closely matches the ground truth.



A video of the robot traversing two different maps can be seen at the top of this section. The video is shown at 2x speed. Note that planning for the entire path is completed before the visualization begins.
​
This project was completed in a team of 3.