ICRA 2021 Reading List

This is my personal post-ICRA 2021 reading list: papers I came across while attending the conference and particularly want to read. It is also intended as a resource for my colleagues who did not attend ICRA this year.

Most papers will be about task- or motion-level robot learning, but many intersect with other domains as well. If you presented a paper at ICRA you think I would like and it is not on this list, please send me an email and I promise to read it!

Highlighted are papers I read (or whose presentation I attended) and found especially insightful. This list is subject to change as I read my way through it or add more from the proceedings.

  1. Kulak, T., Girgin, H., Odobez, J.-M., & Calinon, S. (2021). Active Learning of Bayesian Probabilistic Movement Primitives. IEEE Robotics and Automation Letters, 6(2), 2163–2170. https://doi.org/10.1109/LRA.2021.3060414
  2. Du, Y., Watkins, O., Darrell, T., Abbeel, P., & Pathak, D. (2021). Auto-Tuned Sim-to-Real Transfer. ArXiv:2104.07662 [Cs]. http://arxiv.org/abs/2104.07662
  3. Tosatto, S., Chalvatzaki, G., & Peters, J. (2021). Contextual Latent-Movements Off-Policy Optimization for Robotic Manipulation Skills. ArXiv:2010.13766 [Cs]. http://arxiv.org/abs/2010.13766
  4. Li, S., Park, D., Sung, Y., Shah, J. A., & Roy, N. (2021). Reactive Task and Motion Planning under Temporal Logic Specifications. ArXiv:2103.14464 [Cs]. http://arxiv.org/abs/2103.14464
  5. Luo, S., Kasaei, H., & Schomaker, L. (2021). Self-Imitation Learning by Planning. ArXiv:2103.13834 [Cs]. http://arxiv.org/abs/2103.13834
  6. Malla, S., Choi, C., & Dariush, B. (2021). Social-STAGE: Spatio-Temporal Multi-Modal Future Trajectory Forecast. ArXiv:2011.04853 [Cs]. http://arxiv.org/abs/2011.04853
  7. Sidiropoulos, A., Karayiannidis, Y., & Doulgeri, Z. (2021). Human-robot collaborative object transfer using human motion prediction based on Cartesian pose Dynamic Movement Primitives. ArXiv:2104.03155 [Cs]. http://arxiv.org/abs/2104.03155
  8. Ha, J.-S., Park, Y.-J., Chae, H.-J., Park, S.-S., & Choi, H.-L. (2021). Distilling a Hierarchical Policy for Planning and Control via Representation and Reinforcement Learning. ArXiv:2011.08345 [Cs]. http://arxiv.org/abs/2011.08345
  9. Lechner, M., Hasani, R., Grosu, R., Rus, D., & Henzinger, T. A. (2021). Adversarial Training is Not Ready for Robot Learning. ArXiv:2103.08187 [Cs]. http://arxiv.org/abs/2103.08187
  10. Johns, E. (2021). Coarse-to-Fine Imitation Learning: Robot Manipulation from a Single Demonstration. ArXiv:2105.06411 [Cs]. http://arxiv.org/abs/2105.06411
  11. Sundermeyer, M., Mousavian, A., Triebel, R., & Fox, D. (2021). Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes. ArXiv:2103.14127 [Cs]. http://arxiv.org/abs/2103.14127
  12. Sathya, A. S., Pipeleers, G., Decré, W., & Swevers, J. (2021). A Weighted Method for Fast Resolution of Strictly Hierarchical Robot Task Specifications Using Exact Penalty Functions. IEEE Robotics and Automation Letters, 6(2), 3057–3064. https://doi.org/10.1109/LRA.2021.3063026
  13. Dinev, T., Merkt, W., Ivan, V., Havoutis, I., & Vijayakumar, S. (2021). Sparsity-Inducing Optimal Control via Differential Dynamic Programming. ArXiv:2011.07325 [Cs]. http://arxiv.org/abs/2011.07325
  14. Power, T., & Berenson, D. (2021). Keep it Simple: Data-efficient Learning for Controlling Complex Systems with Simple Models. ArXiv:2102.02493 [Cs]. http://arxiv.org/abs/2102.02493
  15. Boggess, K., Chen, S., & Feng, L. (2021). Towards Personalized Explanation of Robot Path Planning via User Feedback. ArXiv:2011.00524 [Cs]. http://arxiv.org/abs/2011.00524
  16. Pairet, È., Chamzas, C., Petillot, Y., & Kavraki, L. E. (2021). Path Planning for Manipulation using Experience-driven Random Trees. IEEE Robotics and Automation Letters, 6(2), 3295–3302. https://doi.org/10.1109/LRA.2021.3063063
  17. Wen, H., Chen, X., Papagiannis, G., Hu, C., & Li, Y. (2021). End-To-End Semi-supervised Learning for Differentiable Particle Filters. ArXiv:2011.05748 [Cs, Stat]. http://arxiv.org/abs/2011.05748
  18. Wulfmeier, M., Byravan, A., Hertweck, T., Higgins, I., Gupta, A., Kulkarni, T., Reynolds, M., Teplyashin, D., Hafner, R., Lampe, T., & Riedmiller, M. (2021). Representation Matters: Improving Perception and Exploration for Robotics. ArXiv:2011.01758 [Cs, Stat]. http://arxiv.org/abs/2011.01758
  19. Liu, J., Zeng, W., Urtasun, R., & Yumer, E. (2021). Deep Structured Reactive Planning. ArXiv:2101.06832 [Cs]. http://arxiv.org/abs/2101.06832
  20. Allshire, A., Martín-Martín, R., Lin, C., Manuel, S., Savarese, S., & Garg, A. (2021). LASER: Learning a Latent Action Space for Efficient Reinforcement Learning. ArXiv:2103.15793 [Cs]. http://arxiv.org/abs/2103.15793
  21. Chen, A. S., Nam, H. J., Nair, S., & Finn, C. (2021). Batch Exploration with Examples for Scalable Robotic Reinforcement Learning. IEEE Robotics and Automation Letters, 6(3), 4401–4408. https://doi.org/10.1109/LRA.2021.3068655
  22. Morgan, A. S., Nandha, D., Chalvatzaki, G., D’Eramo, C., Dollar, A. M., & Peters, J. (2021). Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with Deep Reinforcement Learning. ArXiv:2103.13842 [Cs]. http://arxiv.org/abs/2103.13842
  23. Lee, T. E., Zhao, J., Sawhney, A. S., Girdhar, S., & Kroemer, O. (2021). Causal Reasoning in Simulation for Structure and Transfer Learning of Robot Manipulation Policies. ArXiv:2103.16772 [Cs]. http://arxiv.org/abs/2103.16772
  24. Bechtle, S., Hammoud, B., Rai, A., Meier, F., & Righetti, L. (2020). Leveraging Forward Model Prediction Error for Learning Control. ArXiv:2011.03859 [Cs]. http://arxiv.org/abs/2011.03859
  25. Akbulut, M. T., Bozdogan, U., Tekden, A., & Ugur, E. (2020). Reward Conditioned Neural Movement Primitives for Population Based Variational Policy Optimization. ArXiv:2011.04282 [Cs]. http://arxiv.org/abs/2011.04282
  26. Cioflan, C., & Timofte, R. (2020). MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search. ArXiv:2009.13940 [Cs]. http://arxiv.org/abs/2009.13940
  27. Wu, Y., Mozifian, M., & Shkurti, F. (2020). Shaping Rewards for Reinforcement Learning with Imperfect Demonstrations using Generative Models. ArXiv:2011.01298 [Cs]. http://arxiv.org/abs/2011.01298
  28. Meyer, G. P., Charland, J., Pandey, S., Laddha, A., Gautam, S., Vallespi-Gonzalez, C., & Wellington, C. K. (2020). LaserFlow: Efficient and Probabilistic Object Detection and Motion Forecasting. ArXiv:2003.05982 [Cs]. http://arxiv.org/abs/2003.05982
  29. Ho, D., Rao, K., Xu, Z., Jang, E., Khansari, M., & Bai, Y. (2020). RetinaGAN: An Object-aware Approach to Sim-to-Real Transfer. ArXiv:2011.03148 [Cs]. http://arxiv.org/abs/2011.03148
  30. Haidu, A., & Beetz, M. (2020). Automated acquisition of structured, semantic models of manipulation activities from human VR demonstration. ArXiv:2011.13689 [Cs]. http://arxiv.org/abs/2011.13689
  31. Lutter, M., Silberbauer, J., Watson, J., & Peters, J. (2020). Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning. ArXiv:2011.01734 [Cs]. http://arxiv.org/abs/2011.01734
  32. Lee, Y., Hu, E. S., Yang, Z., Yin, A., & Lim, J. J. (2019). IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks. ArXiv:1911.07246 [Cs]. http://arxiv.org/abs/1911.07246
  33. Driess, D., Ha, J.-S., Tedrake, R., & Toussaint, M. Learning Geometric Reasoning and Control for Long-Horizon Tasks from Visual Input.