Robert Gieselmann

I am a final-year PhD student at KTH Royal Institute of Technology in Stockholm under the supervision of Florian T. Pokorny. My research is supported by WASP, the Wallenberg AI, Autonomous Systems and Software Program. Previously, I was a research scientist intern at Meta AI (FAIR) under the supervision of Mustafa Mukadam. Before my PhD studies, I worked as a research assistant in medical robotics at the Technical University of Hamburg (TUHH). I received my M.Sc. in Robotics, Cognition, Intelligence from the Technical University of Munich (TUM).

Email  /  LinkedIn  /  Google Scholar  /  Github


Research

My goal is to teach robots how to solve complex reasoning tasks from data. I am especially interested in the interplay between classical methods in robotics, in particular sampling-based planning algorithms, and machine learning techniques from representation learning and reinforcement learning.

Expansive Latent Planning for Sparse Reward Offline Reinforcement Learning
Robert Gieselmann, Florian T. Pokorny
Conference on Robot Learning (CoRL), 2023 (oral presentation, 6.6%)
Previously RSS 2023 - Workshop on Learning for Task and Motion Planning (spotlight)
[Paper]

A model-based reinforcement learning agent that merges concepts from unsupervised contrastive learning and sampling-based robot motion planning to solve challenging long-horizon tasks from visual input data. In particular, we formulate decision-making under sparse reward feedback as heuristic tree search in a continuous latent state space.

Latent Planning via Expansive Space Trees
Robert Gieselmann, Florian T. Pokorny
Neural Information Processing Systems (NeurIPS), 2022 (acceptance rate 25.6%)
[Paper]

We tackle the problem of long-horizon goal-reaching from visual input data by sampling-based planning in a learned embedding space. Our method employs tools from contrastive representation learning and density estimation to implement a sampling-based planner that explores solution paths within the estimated area of the latent data support.

DLO@Scale: A Large-Scale Meta Dataset for Learning Non-Rigid Object Pushing Dynamics
Robert Gieselmann, Alberta Longhini, Alfredo Reichlin, Danica Kragic, Florian T. Pokorny
Workshop on Physical Reasoning and Inductive Biases for the Real World, NeurIPS, 2021
[Paper][Website]

We present a large-scale dataset for meta and multi-task learning in the setting of prediction for physical object interaction. More specifically, we focus on deformable object pushing tasks and feature different material properties, including complex mechanical phenomena such as elastoplastic deformations.

Planning-Augmented Hierarchical Reinforcement Learning
Robert Gieselmann, Florian T. Pokorny
IEEE Robotics and Automation Letters (RA-L), 2021
[Paper]

We robustify the learned policies of a hierarchical reinforcement learning agent using a graph-based planning framework. We find that the lower-level value functions used for graph construction provide more accurate distance estimates compared to the ones obtained through conventional flat RL.

ReForm: A Robot Learning Sandbox for Deformable Linear Object Manipulation
Rita Laezza*, Robert Gieselmann*, Florian T. Pokorny, Yiannis Karayiannidis
IEEE International Conference on Robotics and Automation (ICRA), 2021
[Paper]

A collection of benchmark simulation environments for testing and developing learning algorithms for deformable object manipulation. Our benchmark features different types of tasks and physical object properties such as elasticity and plasticity.

Standard Deep Generative Models for Density Estimation in Configuration Spaces: A Study of Benefits, Limits and Challenges
Robert Gieselmann, Florian T. Pokorny
IEEE International Conference on Intelligent Robots and Systems (IROS), 2020
[Paper]

A comparison between Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) for the problem of learning generative models of the state distributions arising from shortest-path trajectories in robot configuration spaces.

Experience-Based Heuristic Search: Robust Motion Planning with Deep Q-Learning
Julian Bernhard, Robert Gieselmann, Klemens Esterle, Alois Knoll
IEEE International Conference on Intelligent Transportation Systems (ITSC), 2018
[Paper]

A method that improves the sample efficiency of non-holonomic mobile robot path planning by introducing learned heuristics obtained from value-based reinforcement learning. We leveraged estimated state-action values to guide the node expansion within the Hybrid A* algorithm towards relevant goal states.


Website based on template from John Barron.