Learning Pneumatic Non-Prehensile Manipulation
with a Mobile Blower

We investigate pneumatic non-prehensile manipulation (i.e., blowing) as a means of efficiently moving scattered objects into a target receptacle. Due to the chaotic nature of aerodynamic forces, a blowing controller must (i) continually adapt to unexpected changes from its actions, (ii) maintain fine-grained control, since the slightest misstep can result in large unintended consequences (e.g., scattering objects already in a pile), and (iii) infer long-range plans (e.g., moving the robot to strategic blowing locations). We tackle these challenges in the context of deep reinforcement learning, introducing a multi-frequency version of the spatial action maps framework. This allows for efficient learning of vision-based policies that effectively combine high-level planning and low-level closed-loop control for dynamic mobile manipulation. Experiments show that our system learns efficient behaviors for the task, demonstrating in particular that blowing achieves better downstream performance than pushing, and that our policies improve performance over baselines. Moreover, we show that our system naturally encourages emergent specialization between the different subpolicies spanning low-level fine-grained control and high-level planning. On a real mobile robot equipped with a miniature air blower, we show that our simulation-trained policies transfer well to a real environment and can generalize to novel objects.
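To give a rough sense of what a multi-frequency spatial action maps policy looks like, the snippet below is a minimal PyTorch sketch: two fully convolutional Q-value heads over the same overhead observation, one queried at a high control frequency for fine-grained blowing control and one at a low frequency for long-range robot movement, with actions selected as the argmax over a dense spatial map. The network structure, layer sizes, observation channels, and function names here are illustrative assumptions, not the implementation from our paper or code release.

# Minimal sketch (illustrative, not the actual implementation) of a
# multi-frequency spatial action maps policy. Each head outputs a dense map
# of Q-values aligned with the overhead observation; an action is a pixel,
# i.e., a spatial location in the robot's local map.
import torch
import torch.nn as nn

class SpatialActionMapQNet(nn.Module):
    def __init__(self, in_channels=4):  # observation channels are an assumption
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Two subpolicies operating at different frequencies (shared encoder
        # here is an assumption made to keep the sketch short).
        self.high_freq_head = nn.Conv2d(64, 1, 1)  # low-level fine-grained control
        self.low_freq_head = nn.Conv2d(64, 1, 1)   # high-level movement planning

    def forward(self, obs):
        feat = self.encoder(obs)
        return self.high_freq_head(feat), self.low_freq_head(feat)

def select_action(q_map):
    # Greedy action = argmax over the spatial Q-value map.
    b, _, h, w = q_map.shape
    flat_idx = q_map.view(b, -1).argmax(dim=1)
    return flat_idx // w, flat_idx % w  # (row, col) of the selected location

if __name__ == "__main__":
    net = SpatialActionMapQNet()
    obs = torch.zeros(1, 4, 96, 96)  # placeholder overhead observation
    q_hi, q_lo = net(obs)
    print(select_action(q_hi), select_action(q_lo))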



Paper

IEEE Robotics and Automation Letters (RA-L), 2022
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022
Latest version (June 30, 2022): arXiv:2204.02390 [cs.RO].


Team

1 Princeton University             2 Google             3 Columbia University

Code

Code is available on GitHub, including:
  • Simulation environments
  • Training code
  • Pretrained policies

BibTeX

@article{wu2022learning,
  title = {Learning Pneumatic Non-Prehensile Manipulation with a Mobile Blower},
  author = {Wu, Jimmy and Sun, Xingyuan and Zeng, Andy and Song, Shuran and Rusinkiewicz, Szymon and Funkhouser, Thomas},
  journal = {IEEE Robotics and Automation Letters},
  year = {2022}
}

Technical Summary Video (with audio)


Qualitative Results

Blowing vs. pushing

When manipulating small scattered objects, we find that blowing is significantly more efficient than pushing. Here we show a comparison in the SmallEmpty simulation environment, where we find that the blowing robot is able to clean up objects roughly twice as quickly as the pushing robot.

Pushing
Blowing

We perform the same comparison in a real-world replica of the SmallEmpty environment and again find that blowing is significantly more efficient than pushing.

Pushing
Blowing

Generalization

We investigate how well our trained blowing policy generalizes to novel objects. Here we test our policy, which was trained in simulation with 10 mm spherical objects, on larger spheres of 14 mm and 19 mm. We find that it generalizes to these novel objects without any fine-tuning.

10 mm spheres
14 mm spheres
19 mm spheres

We also test our blowing policy on loose maple leaves with irregular, non-uniform shapes. These leaves weigh less than the spherical objects and exhibit more unpredictable dynamics due to air resistance. In spite of these differences, we find that our policy is still able to generalize well without any fine-tuning.

22 mm leaves
22 mm leaves (dense)

In the simulation environments, we find that our policies can also generalize to heterogeneous objects of mixed sizes and shapes, even though they were only trained with homogeneous spherical objects.

Objects of mixed sizes
Objects of mixed shapes

Real Robot Videos

In this section, we show full episodes (8x speed) of trained policies running on the real robot.

Blowing vs. pushing

Pushing
Blowing

Generalization

14 mm spheres
19 mm spheres
22 mm leaves

Simulation Videos

In this section, we show full episodes (4x speed) of our trained policies running in simulation.

Blowing Robots

SmallEmpty
LargeEmpty
LargeColumns
LargeDivider
LargeCenter

Pushing Robots

SmallEmpty
LargeEmpty
LargeColumns
LargeDivider
LargeCenter

Acknowledgments

We would like to thank Naomi Leonard, Anirudha Majumdar, Naveen Verma, Yen-Chen Lin, Kevin Zakka, and Rohan Agrawal for fruitful technical discussions. This work was supported in part by the Princeton School of Engineering, as well as the National Science Foundation under IIS-1815070 and DGE-1656466.


Contact

If you have any questions, please feel free to contact Jimmy Wu.


Last update: June 30, 2022