Visibility-Aware Motion Planning

Project Description

We are interested in investigating practical formulations of, and solutions to, what we are calling “visibility-aware motion planning”. In a traditional motion planning setting, a robot must plan a sequence of motions that avoid collision with a known environment. In visibility-aware motion planning, the environment can contain un-modeled obstacles, and the robot must plan both a sequence of moves and a sequence of “look” actions. The look actions ensure that if there are any unexpected obstacles, they will be seen before the robot collides with them. At that moment, the execution framework on the robot knows that the plan is invalid (since an unexpected obstacle was encountered). It adds the obstacle to the map, and the robot re-plans.
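The execute/observe/replan loop described above might look roughly like the following in Python. This is only a sketch: plan, look, and execute_step are hypothetical placeholders for the planner, sensor model, and controller, not parts of any existing codebase.

```python
def run(world_map, start, goal):
    """Execute a plan, watching for unexpected obstacles and re-planning."""
    config = start
    path = plan(world_map, config, goal)   # interleaved moves and "look"s
    while path:
        action = path.pop(0)
        if action.kind == "look":
            seen = look(config)            # sense from the current pose
            new = [obs for obs in seen if obs not in world_map]
            if new:                        # the plan is now invalid:
                world_map.extend(new)      # add the obstacles to the map...
                path = plan(world_map, config, goal)   # ...and re-plan
        else:
            config = execute_step(config, action)
    return config
```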

This problem arises from running navigation algorithms on actual robots. We typically have some information about the environment the robot is in (we might know, for example, the floorplan), but we still need to move safely with respect to unknown and unexpected obstacles.

In this illustration, a robot with a wide field of view must enter the hallway camera-first, back out, re-orient, and back in. This is an example of the problem in a 2D world. The robot starts at the bottom left, and the region it can see is shown in yellow. The goal robot position is shown as a dotted outline.

This project involves a number of pieces, exercised in the steps below.

The UROP could involve any combination of those things, and others, depending on your interest and the needs of the project.

UROP Application

To apply for the position, we would like you to complete a project that exercises some of the pieces of this research endeavor. Feel free to use any libraries or existing code to complete the project; if you use existing code, please attribute it. The intention is that this project should not take more than 15 hours of your time; however, some aspects may be very involved, so you should not feel obligated to finish everything.

We would definitely like to see you try Steps 1 and 2, and at least attempt or think about Steps 3 and 4.

Project

Step 1 – Modeling and Visualization

Model the domain in the figure, with the gray walls and the robot shape. The world is 2D and the robot configuration space is 3D (x, y, theta). For example, implement a collision test function that, given an (x, y, theta) robot configuration, the robot geometry, and the environment geometry, answers whether the configuration is in collision. (One way to do this is to first implement a line-segment intersection test. Two polygons’ boundaries intersect if any edge of one intersects an edge of the other. You shouldn’t need to worry about one polygon entirely containing the other, but feel free to handle that case.)
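For concreteness, here is a minimal sketch of what such a test could look like, assuming the walls are given as polygons (lists of (x, y) vertices) and the robot is the 1m-by-1m square from the figures. The function names are illustrative, not from any existing library.

```python
import math

def segments_intersect(p1, p2, q1, q2):
    """True if the closed segments p1-p2 and q1-q2 intersect."""
    def orient(a, b, c):
        # Sign of the cross product (b - a) x (c - a).
        v = (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])
        return (v > 0) - (v < 0)

    def on_segment(a, b, c):
        # For collinear a, b, c: does c lie within the box spanned by a-b?
        return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))

    o1, o2 = orient(p1, p2, q1), orient(p1, p2, q2)
    o3, o4 = orient(q1, q2, p1), orient(q1, q2, p2)
    if o1 != o2 and o3 != o4:
        return True                        # proper crossing
    return ((o1 == 0 and on_segment(p1, p2, q1)) or
            (o2 == 0 and on_segment(p1, p2, q2)) or
            (o3 == 0 and on_segment(q1, q2, p1)) or
            (o4 == 0 and on_segment(q1, q2, p2)))

def robot_polygon(x, y, theta, half=0.5):
    """Corners of a 1m-by-1m square robot at pose (x, y, theta)."""
    c, s = math.cos(theta), math.sin(theta)
    corners = [(-half, -half), (half, -half), (half, half), (-half, half)]
    return [(x + c*u - s*v, y + s*u + c*v) for u, v in corners]

def in_collision(x, y, theta, walls):
    """Does the robot at (x, y, theta) touch any wall polygon?"""
    robot = robot_polygon(x, y, theta)
    robot_edges = list(zip(robot, robot[1:] + robot[:1]))
    for poly in walls:
        for q1, q2 in zip(poly, poly[1:] + poly[:1]):
            if any(segments_intersect(p1, p2, q1, q2)
                   for p1, p2 in robot_edges):
                return True
    return False
```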

It will also be useful to visualize the environment and the robot position.
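As a starting point, something like this matplotlib snippet (again just a sketch) can draw the walls and the robot outline:

```python
import matplotlib.pyplot as plt

def draw_world(walls, robot_poly):
    """Draw wall polygons in gray and the robot outline in blue."""
    ax = plt.gca()
    for poly in walls:
        ax.fill(*zip(*poly), color="0.6")
    xs, ys = zip(*(robot_poly + robot_poly[:1]))   # close the loop
    ax.plot(xs, ys, "b-")
    ax.set_aspect("equal")
    plt.show()
```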

Step 2 – Traditional Motion Planning

Implement a motion planning algorithm that computes a path from start to goal in a 2D world with a 3D configuration space (x, y, theta), while avoiding obstacles. Use a grid in (x, y, theta) space and perform an A* graph search, with a heuristic, on this grid. It’s okay for the grid to be fairly coarse. It’s also okay for the grid to only consider motions along one axis at a time, in x, y, or theta (so each configuration has 6 neighbor configurations). If you would like some concrete numbers: in the figures above, the robot is 1m-by-1m, and the discretization in x and y is 0.125m. Discretize theta into 8-20 headings. Demonstrate that the robot finds a path that doesn’t collide with the environment.
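A generic A* skeleton along these lines might look as follows; the state here is a (ix, iy, itheta) grid index, and the neighbor function is where the 6-connected moves and the Step 1 collision test would plug in (names are again illustrative):

```python
import heapq
import math

def a_star(start, goal_test, neighbors, heuristic):
    """Generic A* over a discrete graph.

    start     -- initial state, e.g. a (ix, iy, itheta) grid index
    goal_test -- state -> bool
    neighbors -- state -> iterable of (next_state, step_cost)
    heuristic -- state -> non-negative estimate of cost-to-go
    """
    frontier = [(heuristic(start), 0.0, start)]
    parent = {start: None}
    best_g = {start: 0.0}
    while frontier:
        _, g, state = heapq.heappop(frontier)
        if g > best_g.get(state, math.inf):
            continue                        # stale queue entry
        if goal_test(state):
            path = []
            while state is not None:        # walk parent links to the start
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, math.inf):
                best_g[nxt] = g2
                parent[nxt] = state
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt))
    return None                             # no collision-free path found
```

The 6-neighbor function would yield (ix±1, iy, itheta), (ix, iy±1, itheta), and (ix, iy, (itheta±1) mod n_theta), skipping any configuration that fails the collision test.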

Step 3 – Visibility Goal

In Step 2, the goal test is based on the robot configuration. In this step, there will be some 2D point P that the robot must see. (One way to do this is to shoot a line segment from the camera on the robot to the point: if the angle of the segment is within the field of view of the camera, the length is below some threshold, and the segment doesn’t intersect the environment, then the camera sees the point.) The point of this step is to model this goal test.
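Under some assumptions (camera at the robot’s center, made-up field-of-view and range values), the goal test could be sketched like this, re-using segments_intersect() from Step 1:

```python
import math

def sees_point(x, y, theta, px, py, walls,
               fov=math.radians(90), max_range=4.0):
    """Could a camera at pose (x, y, theta) see the point (px, py)?

    fov and max_range are placeholder camera parameters; tune them
    to whatever sensor model you adopt.
    """
    dx, dy = px - x, py - y
    if math.hypot(dx, dy) > max_range:
        return False                       # too far away
    bearing = math.atan2(dy, dx) - theta   # angle relative to the heading
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    if abs(bearing) > fov / 2:
        return False                       # outside the field of view
    for poly in walls:
        for q1, q2 in zip(poly, poly[1:] + poly[:1]):
            if segments_intersect((x, y), (px, py), q1, q2):
                return False               # line of sight is blocked
    return True
```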

In this illustration, the planner has found a path that achieves visibility of the red points. To be clear, this is just an example of what you need to do; you don’t need to achieve visibility of all the points at once, or use this particular arrangement of walls.

Step 4 – Visibility Heuristic

First, perform a non-heuristic search with this goal test (the edge test is still the same: the robot can’t collide with walls). Now let’s think of some heuristic that will hopefully improve the search time by guiding it toward the goal. Feel free to get creative with the heuristic.

To get you started with a concrete idea: compute a wall-aware distance field F(x, y) giving the distance from P (the point to be seen) to every other point in the space, using a flood-fill algorithm (hopefully you can re-use your A* code here). This means that for every grid point in the workspace, you know the length of the shortest grid path between that point and P. Now, for a given robot configuration, there is a set V of visible grid points. Our first heuristic takes the minimum of F(x, y) over (x, y) in V. Configurations that can see the point will have a minimum of 0, and configurations whose field of view is farther and farther from P will have higher and higher values.

Now that we have this heuristic, perform the search using it. For the path returned by the search, plot the heuristic values along the path. An ideal heuristic will continuously decrease along the path, but this may not always be possible to achieve. What does this heuristic look like? We can discuss some ideas for improving it.
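One possible sketch of the distance field and the heuristic, using the same illustrative naming as the earlier steps (a plain BFS flood fill over a boolean free-space grid):

```python
from collections import deque

def distance_field(free, sources):
    """BFS flood fill over a grid.

    free    -- 2D array of bools; free[ix][iy] is True off the walls
    sources -- iterable of (ix, iy) cells at distance 0 (cells containing P)
    Returns a dict mapping each reachable cell to its grid distance from P.
    """
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    nx, ny = len(free), len(free[0])
    while queue:
        ix, iy = queue.popleft()
        for jx, jy in ((ix+1, iy), (ix-1, iy), (ix, iy+1), (ix, iy-1)):
            if (0 <= jx < nx and 0 <= jy < ny
                    and free[jx][jy] and (jx, jy) not in dist):
                dist[(jx, jy)] = dist[(ix, iy)] + 1
                queue.append((jx, jy))
    return dist

def visibility_heuristic(config, visible_cells, dist):
    """Minimum of F over the grid cells visible from `config`.

    visible_cells(config) could be built by testing each grid cell with
    sees_point() from Step 3. Returns 0 when P itself is visible.
    """
    values = [dist[c] for c in visible_cells(config) if c in dist]
    return min(values) if values else float("inf")
```

One caveat: returning infinity when no grid cell is visible makes such configurations unreachable to the search; a large finite constant may be a better choice if the robot can ever see nothing at all.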

The following figure illustrates the distance field idea, using the same illustration as in Step 3. The white points (the red points in the previous figure) are set to distance 0. The heatmap and contour lines show the value of the distance field, which increases outward from the white points.