Tentative schedule
Welcome
Talk
AI-enabled Robotics: Towards Real-World Applications
Ajinkya Jain (Google Intrinsic)
While foundation models and VLMs show promise for dexterous robot manipulation, their real-world applications remain limited, with a significant gap between research prototypes and practical demands. This talk explores how to bridge this divide and achieve Technology Readiness Level (TRL) 7 and above for AI-powered robots. We argue this is best achieved by combining these methods with the right hardware and infrastructure tools and by first building Artificial Specialized Intelligence (ASI) for specific manipulation domains. ASI offers key advantages: reduced data dependency, rapid training, and real-time control capabilities suitable for robotics. With the right tools and a suite of ASIs, we can construct robust and versatile behavior generation models that are not only data-efficient and high-performing, but also interpretable and reliable in real-world conditions.
Talk
Efficient Robot Learning and Exploration
Rika Antonova (University of Cambridge)
In this talk, I will outline ingredients for enabling efficient robot learning. First, I will demonstrate how large vision-language models can enhance scene understanding and generalization, allowing robots to learn general rules from specific examples for handling everyday objects. Then, I will describe a policy learning method that leverages equivariance to significantly reduce the amount of training data needed for learning from human demonstrations. Moving beyond learning from demonstrations, we will explore how simulation can enable robots to learn autonomously. I will describe the challenges and opportunities of bringing differentiable simulators closer to reality, and contrast direct controller optimization in such adaptive simulators with reinforcement learning in 'black-box' simulators. To further expand robot capabilities, we will consider adapting hardware. In particular, I will demonstrate how differentiable simulation can be used for learning tool morphology to automatically adapt tools for robots. Finally, I will outline a vision of how new affordable and robust sensors can aid in learning and control, how rapid prototyping can enable effective design iterations, and how scaling up exploration would let us tackle the vast design space of optimizing sensing, morphology, actuation, and policy learning jointly. I will conclude with examples of interdisciplinary collaborations where hardware, control, learning, and vision researchers jointly build solutions greater than the sum of their parts.
Lunch Break
Talk
Controllability: The Universal Language of Sequential Decision-Making
Caleb Chuck (University of Texas at Austin)
Controllability is a fundamental concept in sequential decision-making that transcends specific applications and unifies diverse domains, from robotics to artificial intelligence. This concept is essential in reinforcement learning and control theory, as it underpins the agent’s ability to learn, adapt, and optimize decisions within complex, dynamic environments.
Final Remarks
Speakers

Ajinkya Jain
Ajinkya Jain is a senior robotics researcher at Intrinsic, an Alphabet company. His research focuses on developing and applying robot learning methods for dexterous robot manipulation. Before joining Intrinsic, he received his Ph.D. in Robotics from the University of Texas at Austin, where he focused on algorithms for learning object interaction models from visual data and on robust motion planning strategies for manipulating objects under uncertainty.

Rika Antonova
Rika Antonova is an Associate Professor at the University of Cambridge. Her research interests include data-efficient reinforcement learning algorithms, active learning & exploration, and robotics. Earlier, Rika was a postdoctoral scholar at Stanford University, supported by the NSF/CRA Computing Innovation Fellowship from the US National Science Foundation. Rika completed her PhD at KTH, Stockholm, in the division of Robotics, Perception, and Learning. Before that, she obtained a research Master's degree from the Robotics Institute at Carnegie Mellon University. Earlier still, Rika was a senior software engineer at Google, first on the Search Personalization team and then on the Character Recognition team (developing the open-source OCR engine Tesseract).

Caleb Chuck
Caleb Chuck recently defended his PhD thesis in the Computer Science Department at the University of Texas at Austin. He is part of the Personal Autonomous Robotics Lab (PeARL), led by Professor Scott Niekum. His research focuses on better understanding how robots can complement humans, and he develops hierarchical object-centric methods to improve robotic manipulation.