Learning and Intelligent Systems Group
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
32 Vassar Street, Cambridge MA 02139



Opportunities for MIT UROPs, SuperUROPs, and MEng students (unfortunately, it is likely we will not be able to offer RAships for MEng). Please note that you must be an MIT student to apply.

If you are interested, please send an email to the corresponding student.

Integrating deep learning and algorithmic planning

We aim to integrate model-free deep learning and model-based algorithmic reasoning for robust robot decision-making under uncertainty. The key to our approach is a unified neural network representation of the robot policy, which encodes both a learned system model and an algorithm that solves that model in a single, differentiable neural network. For details, please refer to our previous work on QMDP-nets and Particle Filter nets.
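As a rough illustration of the idea (a toy sketch, not the actual QMDP-net architecture), the QMDP planning step can be written entirely in differentiable tensor operations; the tabular POMDP and function names below are hypothetical:

```python
import numpy as np

def qmdp_policy(belief, T, R, gamma=0.95, iters=50):
    """Toy QMDP planner: every step is a tensor operation, so if T and R
    were outputs of a learned network, gradients could flow through the
    planning computation itself.

    belief: (S,) distribution over states
    T:      (A, S, S) transition probabilities T[a, s, s']
    R:      (A, S) immediate rewards
    """
    S = T.shape[1]
    V = np.zeros(S)
    # Value iteration on the underlying MDP (a fully smooth version
    # would replace max with a soft maximum).
    for _ in range(iters):
        Q = R + gamma * (T @ V)        # (A, S)
        V = Q.max(axis=0)
    # QMDP approximation: score each action by its expected Q-value
    # under the current belief.
    action_values = Q @ belief         # (A,)
    return int(action_values.argmax())

# Two-state, two-action example: action 1 is rewarding in state 1.
T = np.array([[[1.0, 0.0], [0.0, 1.0]],   # action 0: stay put
              [[1.0, 0.0], [0.0, 1.0]]])  # action 1: stay put
R = np.array([[0.0, 0.0],
              [0.0, 1.0]])
belief = np.array([0.2, 0.8])             # probably in state 1
action = qmdp_policy(belief, T, R)
print(action)  # -> 1
```

Because the belief-weighted action scoring and the value-iteration updates are all differentiable, the same structure can be trained end-to-end, which is the core trick the project builds on.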

In this project you would investigate important research questions related to the combined learning-planning approach: 1) how do we transfer learned models from a simulator to a real robot? 2) which algorithms can be encoded in a neural network? 3) which robotic tasks can we solve by integrating learning and planning?

Over the course of the project you would explore one or two of these questions, doing experimental work with state-of-the-art deep learning techniques and, potentially, a real robot system.

Student: Peter Karkus, karkus (at) mit (dot) edu

If you are interested in integrating learning and planning, please send me an email with your CV and a statement of interest.

Learning to reason for task-and-motion planning problems

The last few decades have seen great progress on low-level motion planning and control problems, in which the robot must reason about the physics and geometry of its environment to perform a particular sensorimotor skill, such as walking, jumping, or grasping an object. Building on this progress, we are interested in task and motion planning (TAMP) problems, where the robot has to reason about how to put these low-level skills together to achieve a high-level objective, such as constructing a building or packing boxes. This is an extremely complex search problem: it involves choosing a long sequence of both discrete and continuous decision variables. Unfortunately, existing planners for these problems cannot improve their efficiency based on experience. To this end, our objective is to use learning to make planning more efficient.

Student: Beomjoon Kim, beomjoon (at) mit (dot) edu

Please look here for specific project descriptions.

Modular meta-learning

Meta-learning, or learning to learn, aims at generalizing from small amounts of data. Instead of having a single dataset, we have many datasets, and instead of learning a classifier, we learn a learning algorithm. Then, when we receive a new dataset, we apply the learned algorithm and generalize from very few samples.

Most prior work approaches this problem by fine-tuning large, monolithic neural networks. Instead, we maintain a dictionary of small neural network modules and generalize by composing them in different ways. This allows us to find hidden structure in our datasets and generalize more broadly. Feel free to take a look at our paper. We hope to develop this framework further through connections to:

Model-based curiosity

Neural Relational Inference

Multitask Reinforcement Learning

Machine Theory of Mind

Program Induction

Bayesian Dropout

among others.
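To make the modular idea concrete, here is a toy sketch of the structure-search step. The module dictionary and dataset below are hypothetical stand-ins: real modules would be small trained neural networks, and the composition search would be interleaved with gradient-based training of module weights.

```python
import itertools
import numpy as np

# Hypothetical module dictionary: tiny parameter-free functions standing
# in for small trained neural network modules.
MODULES = {
    "sin":    np.sin,
    "square": lambda x: x ** 2,
    "double": lambda x: 2 * x,
    "neg":    lambda x: -x,
}

def best_composition(xs, ys, depth=2):
    """Search over compositions of modules and return the one that best
    fits a small dataset, mimicking the structure-search step of
    modular meta-learning."""
    best, best_err = None, float("inf")
    for names in itertools.product(MODULES, repeat=depth):
        pred = xs
        for name in names:          # apply modules left to right
            pred = MODULES[name](pred)
        err = np.mean((pred - ys) ** 2)
        if err < best_err:
            best, best_err = names, err
    return best, best_err

# New "task": data generated by double(sin(x)). The right composition is
# recovered from only five samples.
xs = np.linspace(0, 1, 5)
ys = 2 * np.sin(xs)
names, err = best_composition(xs, ys)
print(names, err)  # -> ('sin', 'double') with near-zero error
```

The point of the sketch is that the same module dictionary is reused across tasks; only the (cheap, discrete) composition is searched anew for each dataset, which is what enables generalization from very few samples.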

Student: Ferran Alet, alet (at) mit (dot) edu

If you are interested in meta-learning and/or modular neural networks, send me an email with your CV and interests. I've already accepted two students, so I can only take on one more if it's an especially good match.

Learning for human-in-the-loop planning in robotics

It is well understood that for robots to function in the real world, they must be able to execute tasks over long time periods in the presence of, or under the supervision of, a human. Even in a static environment, a robot should be able to plan out how it will act in the world, but also be willing to change its plans based on events such as intervention or new information from the human. We would like to build a system that learns from data and experience how to plan efficiently and resourcefully, given that a human is in some way involved in the environment or task.

Student: Rohan Chitnis, ronuchit (at) mit (dot) edu

For more information, concrete project suggestions, and exercises to determine if you'd be a good match, please refer to this PDF. Send me an email if you're interested, or if you have your own project idea you want to pursue together!

Visibility-Aware Motion Planning

We are interested in investigating practical formulations of, and solutions to, what we call "visibility-aware motion planning". In a traditional motion planning setting, a robot must plan a sequence of motions that avoids collisions with a known environment. In visibility-aware motion planning, the environment can contain un-modeled obstacles, so the robot must plan both a sequence of moves and a sequence of "look" actions. The look actions ensure that any unexpected obstacles will be seen before the robot collides with them. At that moment, the execution framework on the robot knows that the plan is invalid (since an unexpected obstacle was encountered); it adds the obstacle to the map, and the robot re-plans.

This problem arises from running navigation algorithms on actual robots. We have some information about the environment the robot is in (we know, for example, the floorplan). But we still need to move safely with respect to unknown and unexpected obstacles.
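A minimal sketch of the resulting execute-and-replan loop, on a hypothetical grid world where a "look" before each step reveals un-modeled obstacles (the maps and helper names below are illustrative, not our actual system):

```python
from collections import deque

def bfs(grid, start, goal):
    """Shortest path on a 4-connected grid; grid[r][c] == 1 is a known obstacle."""
    rows, cols = len(grid), len(grid[0])
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:                      # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None

def navigate(known, hidden, start, goal):
    """Plan on the known map; a 'look' before each step reveals un-modeled
    obstacles (hidden), which are added to the map before replanning."""
    pos, trace = start, [start]
    while pos != goal:
        path = bfs(known, pos, goal)
        if path is None:
            return None                      # no safe route remains
        nxt = path[1]
        if nxt in hidden:                    # the look action sees the obstacle
            known[nxt[0]][nxt[1]] = 1        # update the map, then replan
            continue
        pos = nxt
        trace.append(pos)
    return trace

known = [[0, 0, 0],
         [0, 0, 0],
         [0, 0, 0]]
hidden = {(1, 0)}                            # obstacle absent from the map
route = navigate(known, hidden, (0, 0), (2, 2))
print(route)  # -> [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
```

The research question is how to place the look actions ahead of time so that replanning like this is rarely needed, rather than looking before every single step as the toy loop does.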

Student: Gustavo Goretkin, goretkin (at) csail (dot) mit (dot) edu

Take a look here to see if this project interests you and to learn how to apply for this position.