We are interested in investigating practical formulations of, and solutions to, what we call "visibility-aware motion planning". In a traditional motion planning setting, a robot must plan a sequence of motions that avoid collisions with a known environment. In visibility-aware motion planning, the environment can contain un-modeled obstacles, so the robot must plan both a sequence of moves and a sequence of "look" actions. The look actions ensure that if any unexpected obstacle is present, it will be seen before the robot collides with it. At that moment, the execution framework on the robot knows that the plan is invalid (since an unexpected obstacle was encountered); it adds the obstacle to the map, and the robot re-plans.
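The execute-observe-replan loop described above can be sketched in a few lines. The following is a minimal toy illustration, not the actual system: the grid world, the BFS planner, and the names `plan` and `navigate` are all assumptions introduced for this sketch, and the "look" action is idealized as perfect detection of an obstacle in the next cell.

```python
from collections import deque

def plan(start, goal, known_obstacles, size=5):
    """Breadth-first search on a size x size grid, avoiding obstacles
    already in the map. Stands in for a real motion planner."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur != start:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in known_obstacles
                    and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    return None  # goal unreachable given the current map

def navigate(start, goal, true_obstacles, max_replans=25):
    """Execute a plan, 'looking' before each step. Sighting an
    unexpected obstacle invalidates the plan, updates the map,
    and triggers replanning from the current pose."""
    known = set()   # obstacles mapped so far (initially none)
    pos = start
    for _ in range(max_replans):
        path = plan(pos, goal, known)
        if path is None:
            return None, known
        replanned = False
        for cell in path:
            if cell in true_obstacles:   # the look action sees it
                known.add(cell)          # add obstacle to the map
                replanned = True         # plan is now invalid
                break
            pos = cell                   # cell verified free: move
        if not replanned and pos == goal:
            return pos, known
    return None, known
```

For example, `navigate((0, 0), (4, 0), {(2, 0)})` starts with an empty map, discovers the obstacle at `(2, 0)` mid-execution, adds it to the map, and replans around it to reach the goal. The key property the look actions provide is that the robot never moves into a cell it has not verified to be free.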
This problem arises from running navigation algorithms on actual robots. We have some information about the environment the robot operates in (for example, its floorplan), but we still need to move safely with respect to unknown and unexpected obstacles.