Naman Shah

Postdoctoral Research Fellow

Brown University


Naman completed his PhD at Arizona State University, Tempe, working at the Autonomous Agents and Intelligent Robots (AAIR) Lab directed by Dr. Siddharth Srivastava.

His research interests include learning and using abstractions for sequential decision-making problems in robotics. He aims to learn hierarchical abstractions for robot planning tasks and use them to solve problems such as hierarchical planning, reinforcement learning, and mobile manipulation in stochastic settings.



  • Artificial Intelligence
  • Robotics
  • Learning Abstractions
  • Task and Motion Planning
  • Reinforcement Learning
  • Hierarchical Planning


  • Ph.D. in Computer Science, 2019 - 2024

    Arizona State University

  • M.S. in Computer Science, 2017 - 2019

    Arizona State University

  • B.Eng. in Computer Engineering, 2013 - 2017

    Gujarat Technological University



Research Scientist Intern

Toyota Research Center of North America

May 2023 – Aug 2023 Ann Arbor, Michigan
Designed and developed an approach for hierarchical planning for a fleet of hospital robots.

Applied Scientist Intern

Amazon Robotics

May 2022 – Aug 2022 North Reading, Massachusetts
Designed and developed an approach for explicit multi-agent coordination under uncertainty for a fleet of autonomous robots.

Research Intern

Palo Alto Research Center

May 2019 – Aug 2019 Palo Alto, California
Focused on using Qualitative Spatial Relations (QSRs) to autonomously identify structures from visual inputs and compute task plans to build those structures using physical robots.

Research Assistant

Arizona State University

May 2018 – Present Arizona
Performing research on core AI concepts such as sequential decision-making under uncertainty using abstractions, under the guidance of Dr. Siddharth Srivastava.

Teaching Assistant

Arizona State University

Jan 2016 – Dec 2016 Arizona

Assisted Dr. Siddharth Srivastava with a graduate-level Artificial Intelligence course (CSE 571).

Responsibilities included:

  • Developing projects.
  • Creating and evaluating homework assignments.
  • Holding office hours to help students with the course material.


From Reals to Logic and Back: Inventing Symbolic Vocabularies, Actions and Models for Planning from Raw Data

Traditional robot planning relies on human-crafted logic representations, but this paper introduces a method to autonomously learn abstract representations from raw robot data. Results show these learned models enable scalable planning for complex tasks without human intervention.

Hierarchical Planning and Learning for Robots in Stochastic Settings Using Zero-Shot Option Invention

This paper proposes a new method for robots to plan actions in complex environments, even when the environment is unknown. The robot learns to create its own high-level actions without needing pre-programmed ones. This allows the robot to quickly solve new problems in unseen environments. The method is shown to be faster and to achieve significantly better solutions than existing approaches.

Using Deep Learning to Bootstrap Abstractions for Robot Planning

In this paper, we use deep learning to identify critical regions and automatically construct hierarchical state and action abstractions. We use these hierarchical abstractions with a multi-source, multi-directional hierarchical planner to compute solutions for robot planning problems.

Learning and Using Abstractions for Robot Planning

In this paper, we propose a unified framework based on deep learning that learns sound abstractions for complex robot planning problems and uses them to efficiently perform hierarchical planning.

Anytime Task and Motion Policies for Stochastic Environments

In this paper, we provide an efficient abstraction-based method to compute task and motion policies for complex robotics tasks in stochastic environments.

Recent & Upcoming Talks

Learning and Using Abstractions for Robot Planning

The talk was given at PlanRob 2021. It describes the framework we developed to learn and use abstraction hierarchies for efficient robot planning.

Anytime Task and Motion Policies for Stochastic Environments

In this talk, I presented my paper on a combined task and motion planning approach based on abstraction and hierarchical refinement at ICRA 2020.