ICAPS 2026 Summer School

University College Dublin • June 22–25

Dates & Location

University College Dublin (UCD)
June 22–25, 2026

Registration, Board & Accommodation

Schedule

Time   Monday, June 22   Tuesday, June 23   Wednesday, June 24   Thursday, June 25

Speakers & Topics

Gabriele Roeger
(University of Basel)

Tutorial on Classical Planning

TBD

Michael Katz
(IBM Research)

Tutorial on GenAI in Planning

TBD

Nir Lipovetsky
(University of Melbourne)

Lab on PDDL Classical Planning with planning.domains

TBD

Blai Bonet
(Universidad Simón Bolívar, Universitat Pompeu Fabra)

Tutorial on Learning for Planning

TBD

Florent Teichteil-Königsbuch
(Airbus)

Tutorial on Probabilistic Planning

TBD

Ayal Taitler
(Ben-Gurion University of the Negev)

Lab on RDDL Probabilistic Planning with PyRDDLGym

TBD

Chris Beck
(University of Toronto)

Tutorial on Scheduling (or Putting the S in ICAPS)

Scheduling is about deciding when, and with what resources, a set of tasks should be performed. Unlike in planning, we typically know which tasks are to be performed, or must select them from a set of options, or receive them over time in an online setting; what we typically do not have to decide is which actions to perform. Scheduling is widely studied in Operations Research (OR), theoretical computer science, computer systems, and queueing theory, as well as in AI, leading to a very large variety of problems and solution approaches. In this tutorial, I will give a brief overview of this scope and then focus on exact techniques for problems studied in OR and AI: mixed-integer linear programming, constraint programming, and recent work on using heuristic search to solve scheduling problems within the domain-independent dynamic programming framework.
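As a toy illustration of the kind of problem the tutorial addresses (not drawn from the tutorial materials, and with made-up job data): choosing the order in which a single machine runs a set of jobs so as to minimize total completion time. A minimal sketch, solved by exhaustive search:

```python
from itertools import permutations

# Hypothetical job durations (in hours); the only scheduling decision
# here is the order in which the single machine runs the jobs.
durations = {"A": 3, "B": 1, "C": 2}

def total_completion_time(order):
    # Sum of each job's completion time under the given sequence.
    t, total = 0, 0
    for job in order:
        t += durations[job]
        total += t
    return total

# Exhaustive search over all sequences -- fine for three jobs, but the
# exact techniques covered in the tutorial (MILP, CP) are what make
# realistic problem sizes tractable.
best = min(permutations(durations), key=total_completion_time)
```

For this objective, running the shortest job first is optimal, so the search returns the order B, C, A with total completion time 1 + 3 + 6 = 10.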

Matthew Taylor
(University of Alberta)

Tutorial on Reinforcement Learning

You’re a new robot, just coming online. How do you move your body? What’s surrounding you? What are you supposed to do? Come to this session to find out!

In reinforcement learning (RL), a physical or virtual agent must learn about its surroundings while also figuring out how to maximize reward. These sessions will help you understand
1) how to identify when RL could be the right framing for a problem,
2) what makes an RL problem more or less difficult to solve,
3) how a handful of simple RL algorithms work, and
4) where to go if you’d like to learn more.
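As a small taste of point 3 (illustrative only, not from the session materials): an epsilon-greedy agent on a two-armed bandit with made-up payout probabilities, learning action values from reward alone.

```python
import random

random.seed(0)

# Hypothetical two-armed bandit: arm 1 pays off more often than arm 0.
payout_prob = [0.3, 0.7]

def pull(arm):
    # Reward is 1 with the arm's payout probability, else 0.
    return 1 if random.random() < payout_prob[arm] else 0

values, counts = [0.0, 0.0], [0, 0]  # per-arm value estimates
epsilon = 0.1  # exploration rate

for _ in range(5000):
    # Explore with probability epsilon, otherwise exploit the current best.
    if random.random() < epsilon:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: values[a])
    r = pull(arm)
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]  # incremental average

# After enough pulls the agent's estimates favor arm 1.
```

The epsilon parameter captures the exploration-exploitation trade-off at the heart of point 2: without occasional random pulls, the agent can lock onto a worse arm that happened to pay early.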

Calarina Musliman
(University of Alberta)

Lab on Reinforcement Learning

TBD

Tom Silver
(Princeton University)

Tutorial and Lab on Task and Motion Planning

Automated planning is especially challenging when state and action spaces are continuous, time horizons are long, environments are constrained, and feedback is sparse---all of which are common in robotics. Task and motion planning (TAMP) addresses these challenges by enabling the agent to reason jointly about “what to do” (task planning) and “how to do it” (motion planning). These two levels of reasoning are often entangled due to the lossy nature of the task-planning abstractions that translate the continuous robot environments into discrete representations. In this tutorial and lab, we will approach these challenges with a first-principles introduction to TAMP. Participants should come away with new intuitions and practical tools---an understanding of both “what to do” and “how to do it” when decision-making calls for TAMP.

Naman Shah
(Brown University, Ai2)

Tutorial and Lab on Task and Motion Planning

(Joint with Tom Silver; see the abstract above.)

Amy Zhang
(UT Austin)

Tutorial on Planning and Learning in Robotics

TBD

Organizers

Siddharth Srivastava
(Arizona State University)

Scott Sanner
(University of Toronto)