Naman Shah

PhD Student

Arizona State University


I am a 4th-year PhD student in the Autonomous Agents and Intelligent Robots (AAIR) Lab, directed by Dr. Siddharth Srivastava at Arizona State University, Tempe, USA.

My research interests include learning and using abstractions for sequential decision-making problems in robotics. I aim to learn hierarchical abstractions for robot planning tasks and use them to solve problems such as hierarchical planning, reinforcement learning, and mobile manipulation in stochastic settings.



  • Artificial Intelligence
  • Robotics
  • Learning Abstractions
  • Task and Motion Planning
  • Reinforcement Learning
  • Hierarchical Planning


  • Ph.D. in Computer Science, 2019 - Present

    Arizona State University

  • M.S. in Computer Science, 2017 - 2019

    Arizona State University

  • B.Eng. in Computer Engineering, 2013 - 2017

    Gujarat Technological University



Applied Scientist Intern

Amazon Robotics

May 2022 – Aug 2022 North Reading, Massachusetts
Designed and developed an approach for explicit multi-agent coordination under uncertainty for a fleet of autonomous robots.

Research Intern

Palo Alto Research Center

May 2019 – Aug 2019 Palo Alto, California
Focused on using Qualitative Spatial Relations (QSRs) to autonomously identify structures from visual inputs and compute task plans for building those structures with physical robots.

Research Assistant

Arizona State University

May 2018 – Present Arizona
Performing research on core AI concepts, such as sequential decision-making under uncertainty using abstractions, under the guidance of Dr. Siddharth Srivastava.

Teaching Assistant

Arizona State University

Jan 2016 – Dec 2016 Arizona

Assisted Dr. Siddharth Srivastava with a graduate-level Artificial Intelligence course (CSE 571).

Responsibilities included:

  • Developing course projects.
  • Creating and grading homework assignments.
  • Holding office hours to help students with the course material.


Using Deep Learning to Bootstrap Abstractions for Robot Planning

In this paper, we use deep learning to identify critical regions and automatically construct hierarchical state and action abstractions. We use these hierarchical abstractions with a multi-source, multi-directional hierarchical planner to compute solutions for robot planning problems.

Learning and Using Abstractions for Robot Planning

In this paper, we propose a unified deep-learning-based framework that learns sound abstractions for complex robot planning problems and uses them to perform hierarchical planning efficiently.

Anytime Task and Motion Policies for Stochastic Environments

In this paper, we provide efficient abstraction-based methods for computing task and motion policies for complex robotics tasks in stochastic environments.

Recent & Upcoming Talks

Learning and Using Abstractions for Robot Planning

This talk, given at PlanRob 2021, covers the framework we developed to learn and use abstraction hierarchies for efficient robot planning.

Anytime Task and Motion Policies for Stochastic Environments

In this talk, I presented our ICRA 2020 paper on a combined task and motion planning approach based on abstraction and hierarchical refinement.