This website contains accompanying material to the IROS paper:
M. Levihn, L. P. Kaelbling, T. Lozano-Perez, M. Stilman. Foresight and Reconsideration in Hierarchical Planning and Execution. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013.
In the following videos, the robot's goal is to reach a configuration outside the room. The robot is initially unaware of the chair(s) blocking its way. After executing sensing actions, the robot detects the chair(s) and decides that they have to be moved.
This video shows a scenario in which the robot needs to move two chairs to exit the room. Additionally, the robot explicitly models its uncertainty in motion and perception. To reduce positional uncertainty, the robot frequently decides to look at landmark objects (permanent objects with known positions). The actual execution, the robot's reasoning, and its sensory input are shown interleaved in this video.
This video demonstrates a setup similar to the previous one, but with a single chair and without uncertainty modeling.
Same setup as above, but a different run.
Similar to the above, but with a simpler setup.
The following videos show example runs for the domains reported in the paper. For each domain, one video per setting is shown. The videos are generated from the mean run for each setting and domain. Additionally, a simulation run of a very large domain can be found at the end.
This work was supported in part by the NSF under Grants IIS-1117325 and IIS-1017076. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We also gratefully acknowledge support from ONR grant N00014-12-1-0143 and ONR MURI grant N00014-09-1-1051, from AFOSR grant FA2386-10-1-4135 and from the Singapore Ministry of Education under a grant to the Singapore-MIT International Design Center.