ICRA 2021 Reading List

This is my personal post-ICRA 2021 reading list: papers I came across while attending the conference and particularly want to read. It is also intended as a resource for my colleagues who did not attend ICRA this year.

Most papers will be about task- or motion-level robot learning, but many intersect with other domains as well. If you presented a paper at ICRA you think I would like and it is not on this list, please send me an email and I promise to read it!

Papers I have already read (or whose presentation I attended) and found especially insightful are highlighted. This list is subject to change as I read my way through it or add more papers from the proceedings.

  1. T. Kulak, H. Girgin, J.-M. Odobez, and S. Calinon, “Active Learning of Bayesian Probabilistic Movement Primitives,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 2163–2170, Apr. 2021, doi: 10.1109/LRA.2021.3060414.
  2. Y. Du, O. Watkins, T. Darrell, P. Abbeel, and D. Pathak, “Auto-Tuned Sim-to-Real Transfer,” arXiv:2104.07662 [cs], May 2021.
  3. S. Tosatto, G. Chalvatzaki, and J. Peters, “Contextual Latent-Movements Off-Policy Optimization for Robotic Manipulation Skills,” arXiv:2010.13766 [cs], May 2021.
  4. S. Li, D. Park, Y. Sung, J. A. Shah, and N. Roy, “Reactive Task and Motion Planning under Temporal Logic Specifications,” arXiv:2103.14464 [cs], Mar. 2021.
  5. S. Luo, H. Kasaei, and L. Schomaker, “Self-Imitation Learning by Planning,” arXiv:2103.13834 [cs], Mar. 2021.
  6. S. Malla, C. Choi, and B. Dariush, “Social-STAGE: Spatio-Temporal Multi-Modal Future Trajectory Forecast,” arXiv:2011.04853 [cs], Mar. 2021.
  7. A. Sidiropoulos, Y. Karayiannidis, and Z. Doulgeri, “Human-robot collaborative object transfer using human motion prediction based on Cartesian pose Dynamic Movement Primitives,” arXiv:2104.03155 [cs], Apr. 2021.
  8. J.-S. Ha, Y.-J. Park, H.-J. Chae, S.-S. Park, and H.-L. Choi, “Distilling a Hierarchical Policy for Planning and Control via Representation and Reinforcement Learning,” arXiv:2011.08345 [cs], Apr. 2021.
  9. M. Lechner, R. Hasani, R. Grosu, D. Rus, and T. A. Henzinger, “Adversarial Training is Not Ready for Robot Learning,” arXiv:2103.08187 [cs], Mar. 2021.
  10. E. Johns, “Coarse-to-Fine Imitation Learning: Robot Manipulation from a Single Demonstration,” arXiv:2105.06411 [cs], May 2021.
  11. M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox, “Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes,” arXiv:2103.14127 [cs], Mar. 2021.
  12. A. S. Sathya, G. Pipeleers, W. Decré, and J. Swevers, “A Weighted Method for Fast Resolution of Strictly Hierarchical Robot Task Specifications Using Exact Penalty Functions,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3057–3064, Apr. 2021, doi: 10.1109/LRA.2021.3063026.
  13. T. Dinev, W. Merkt, V. Ivan, I. Havoutis, and S. Vijayakumar, “Sparsity-Inducing Optimal Control via Differential Dynamic Programming,” arXiv:2011.07325 [cs], Mar. 2021.
  14. T. Power and D. Berenson, “Keep it Simple: Data-efficient Learning for Controlling Complex Systems with Simple Models,” arXiv:2102.02493 [cs], Feb. 2021.
  15. K. Boggess, S. Chen, and L. Feng, “Towards Personalized Explanation of Robot Path Planning via User Feedback,” arXiv:2011.00524 [cs], Mar. 2021.
  16. È. Pairet, C. Chamzas, Y. Petillot, and L. E. Kavraki, “Path Planning for Manipulation using Experience-driven Random Trees,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3295–3302, Apr. 2021, doi: 10.1109/LRA.2021.3063063.
  17. H. Wen, X. Chen, G. Papagiannis, C. Hu, and Y. Li, “End-To-End Semi-supervised Learning for Differentiable Particle Filters,” arXiv:2011.05748 [cs, stat], Mar. 2021.
  18. M. Wulfmeier et al., “Representation Matters: Improving Perception and Exploration for Robotics,” arXiv:2011.01758 [cs, stat], Mar. 2021.
  19. J. Liu, W. Zeng, R. Urtasun, and E. Yumer, “Deep Structured Reactive Planning,” arXiv:2101.06832 [cs], Apr. 2021.
  20. A. Allshire, R. Martín-Martín, C. Lin, S. Manuel, S. Savarese, and A. Garg, “LASER: Learning a Latent Action Space for Efficient Reinforcement Learning,” arXiv:2103.15793 [cs], Mar. 2021.
  21. A. S. Chen, H. J. Nam, S. Nair, and C. Finn, “Batch Exploration with Examples for Scalable Robotic Reinforcement Learning,” IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 4401–4408, Jul. 2021, doi: 10.1109/LRA.2021.3068655.
  22. A. S. Morgan, D. Nandha, G. Chalvatzaki, C. D’Eramo, A. M. Dollar, and J. Peters, “Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with Deep Reinforcement Learning,” arXiv:2103.13842 [cs], Mar. 2021.
  23. T. E. Lee, J. Zhao, A. S. Sawhney, S. Girdhar, and O. Kroemer, “Causal Reasoning in Simulation for Structure and Transfer Learning of Robot Manipulation Policies,” arXiv:2103.16772 [cs], Mar. 2021.
  24. S. Bechtle, B. Hammoud, A. Rai, F. Meier, and L. Righetti, “Leveraging Forward Model Prediction Error for Learning Control,” arXiv:2011.03859 [cs], Nov. 2020.
  25. M. T. Akbulut, U. Bozdogan, A. Tekden, and E. Ugur, “Reward Conditioned Neural Movement Primitives for Population Based Variational Policy Optimization,” arXiv:2011.04282 [cs], Nov. 2020.
  26. C. Cioflan and R. Timofte, “MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search,” arXiv:2009.13940 [cs], Sep. 2020.
  27. Y. Wu, M. Mozifian, and F. Shkurti, “Shaping Rewards for Reinforcement Learning with Imperfect Demonstrations using Generative Models,” arXiv:2011.01298 [cs], Nov. 2020.
  28. G. P. Meyer et al., “LaserFlow: Efficient and Probabilistic Object Detection and Motion Forecasting,” arXiv:2003.05982 [cs], Oct. 2020.
  29. D. Ho, K. Rao, Z. Xu, E. Jang, M. Khansari, and Y. Bai, “RetinaGAN: An Object-aware Approach to Sim-to-Real Transfer,” arXiv:2011.03148 [cs], Nov. 2020.
  30. A. Haidu and M. Beetz, “Automated acquisition of structured, semantic models of manipulation activities from human VR demonstration,” arXiv:2011.13689 [cs], Nov. 2020.
  31. M. Lutter, J. Silberbauer, J. Watson, and J. Peters, “Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning,” arXiv:2011.01734 [cs], Nov. 2020.
  32. Y. Lee, E. S. Hu, Z. Yang, A. Yin, and J. J. Lim, “IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks,” arXiv:1911.07246 [cs], Nov. 2019.
  33. D. Driess, J.-S. Ha, R. Tedrake, and M. Toussaint, “Learning Geometric Reasoning and Control for Long-Horizon Tasks from Visual Input.”