Yiran Geng   |   耿逸然

I am a senior undergraduate student in the Turing Class at the School of EECS, Peking University, co-advised by Prof. Hao Dong and Prof. Yaodong Yang. It is worth mentioning that I have a twin brother, Haoran Geng, and we are frequently mistaken for each other🤣.

Email / Google Scholar / GitHub / Twitter / WeChat / Bilibili

Research Overview

My research focuses on reinforcement learning, robotics, and computer vision. I also have a broad curiosity about computer science, mathematics, physics, and art.
My current research centers on enabling intelligent systems to acquire generalizable, reliable, and complex manipulation skills while ensuring their real-world deployability. At the same time, I am actively investigating learning from interaction, which has the unique potential to significantly reduce the need for perceptual data annotation and to enable scalable training of large models.




Publications



ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation
Xiaoqi Li, Mingxu Zhang, Yiran Geng, Haoran Geng, Yuxing Long, Yan Shen, Renrui Zhang, Jiaming Liu, Hao Dong
Website / arXiv / Media (量子位)
CVPR 2024, Accepted

We introduce an approach that leverages the reasoning capabilities of Multimodal Large Language Models (MLLMs) for object-centric robotic manipulation.



Grasp Multiple Objects with One Hand
Yuyang Li, Bo Liu, Yiran Geng, Puhao Li, Yaodong Yang, Yixin Zhu, Tengyu Liu, Siyuan Huang
Website / arXiv / Code / Dataset
RA-L 2024, Accepted

We propose MultiGrasp, a two-stage framework for simultaneous multi-object grasping with multi-finger dexterous hands.



RGBManip: Monocular Image-based Robotic Manipulation through Active Object Pose Estimation
Boshi An*, Yiran Geng*, Kai Chen*, Xiaoqi Li, Qi Dou, Hao Dong
Website / arXiv / Media (CFCS)
ICRA 2024, Accepted

We propose an image-only robotic manipulation framework with the ability to actively perceive objects from multiple perspectives during manipulation.

Safety Gymnasium: A Unified Safe Reinforcement Learning Benchmark
Jiaming Ji*, Borong Zhang*, Jiayi Zhou*, Xuehai Pan, Weidong Huang, Ruiyang Sun, Yiran Geng, Juntao Dai, Yaodong Yang
Website / Code (Safety Gymnasium) / Code (SafePO) / Documentation (Safety Gymnasium) / Documentation (SafePO)
NeurIPS 2023, Accepted

We present an environment suite called Safety-Gymnasium, which encompasses safety-critical tasks in both single- and multi-agent scenarios, along with a library of algorithms named Safe Policy Optimization (SafePO).



Bi-DexHands: Towards Human-Level Bimanual Dexterous Manipulation
Yuanpei Chen, Yiran Geng, Fangwei Zhong, Jiaming Ji, Jiechuan Jiang, Zongqing Lu, Hao Dong, Yaodong Yang
Paper (Long Version) / Paper (Short Version) / Project Page / Code
T-PAMI 2023, Accepted
NeurIPS 2022, Accepted

We propose a bimanual dexterous manipulation benchmark, grounded in the cognitive science literature, for comprehensive reinforcement learning research.



OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research
Jiaming Ji*, Jiayi Zhou*, Borong Zhang*, Juntao Dai, Xuehai Pan, Ruiyang Sun, Weidong Huang, Yiran Geng, Mickel Liu, Yaodong Yang
arXiv / Code / Documentation / Community
arXiv 2023

OmniSafe is an infrastructural framework designed to accelerate safe reinforcement learning (RL) research.



ReDMan: Reliable Dexterous Manipulation with Safe Reinforcement Learning
Yiran Geng*, Jiaming Ji*, Yuanpei Chen*, Haoran Geng, Fangwei Zhong, Yaodong Yang
Code
Machine Learning (Journal), Minor Revision

We introduce ReDMan, an open-source simulation platform that provides a standardized implementation of safe RL algorithms for Reliable Dexterous Manipulation.

PartManip: Learning Part-based Cross-Category Object Manipulation Policy from Point Cloud Observations
Haoran Geng*, Ziming Li*, Yiran Geng, Jiayi Chen, Hao Dong, He Wang
arXiv / Project Page / Code
CVPR 2023, Accepted

We introduce a large-scale, part-based, cross-category object manipulation benchmark with tasks in realistic, vision-based settings.




RLAfford: End-to-End Affordance Learning for Robotic Manipulation

Yiran Geng*, Boshi An*, Haoran Geng, Yuanpei Chen, Yaodong Yang, Hao Dong
arXiv / Project Page / Video / Code / Code (Renderer) / Dataset / Media (CFCS) / Media (PKU)
ICRA 2023, Accepted

In this study, we take advantage of visual affordance by using the contact information generated during the RL training process to predict contact maps of interest.


GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF

Qiyu Dai*, Yan Zhu*, Yiran Geng, Ciyu Ruan, Jiazhao Zhang, He Wang
arXiv / Project Page / Code
ICRA 2023, Accepted

We propose a multiview RGB-based 6-DoF grasp detection network, GraspNeRF, that leverages the generalizable neural radiance field (NeRF) to achieve material-agnostic object grasping in clutter.



GenDexGrasp: Generalizable Dexterous Grasping
Puhao Li*, Tengyu Liu*, Yuyang Li, Yiran Geng, Yixin Zhu, Yaodong Yang, Siyuan Huang
arXiv / Project Page / Code / Dataset / Video / Media
ICRA 2023, Accepted

This paper introduces GenDexGrasp, a versatile dexterous grasping method that can generalize to unseen hands.



MyoChallenge: Learning contact-rich manipulation using a musculoskeletal hand
Yiran Geng, Boshi An, Yifan Zhong, Jiaming Ji, Yuanpei Chen, Hao Dong, Yaodong Yang
Paper / Challenge Page / Code / Slides / Talk / Award / Media (BIGAI) / Media (CFCS) / Media (PKU-EECS) / Media (PKU-IAI) / Media (PKU) / Media (China Youth Daily)
First Place in NeurIPS 2022 Challenge Track (1st among 340 submissions from 40 teams)

The task is to reconfigure a die to match desired goal orientations, which requires delicate coordination of numerous muscles to manipulate the die without dropping it.

Learning Part-Aware Visual Actionable Affordance for 3D Articulated Object Manipulation
Yuanchen Ju*, Haoran Geng*, Ming Yang*, Jiayi Chen, Yiran Geng, Yaroslav Ponomarenko, Taewhan Kim, He Wang, Hao Dong
Paper / Video / Workshop
CVPR 2023 @ 3DVR, Spotlight Presentation

We introduce a Part-aware Affordance Learning method. Our approach first learns a part-level prior and then generates an affordance map.


GraspARL: Dynamic Grasping via Adversarial Reinforcement Learning
Tianhao Wu*, Fangwei Zhong*, Yiran Geng, Yaodong Yang, Yizhou Wang, Hao Dong
arXiv

This study is the first to formulate the dynamic grasping problem as a “move-and-grasp” game and to use adversarial RL to jointly train the grasping policy and the object-moving strategy.

Before 2022


Ministry of Education Talent Program Thesis (Physics Track)
Yiran Geng, Haoran Geng, Xintian Dong, Yue Meng, Xuju Sun, Houpu Niu
PDF
Selected as the Outstanding Thesis of the National Physics Forum 2018

During my high school years, I was selected for the Ministry of Education Talent Program, conducted physics research at Nankai University, and presented my thesis at the National Physics Forum 2018.


Experience

Massachusetts Institute of Technology (MIT)
2023.01 - 2023.10
Visiting Researcher
Research Advisor: Prof. Josh Tenenbaum and Prof. Chuang Gan
Beijing Institute for General Artificial Intelligence (BIGAI)
2022.05 - 2023.06
Research Intern
Research Advisor: Prof. Yaodong Yang
Academic Advisor: Prof. Song-Chun Zhu
Turing Class, Peking University (PKU)
2020.09 - Present
Undergraduate Student
Research Advisor: Prof. Hao Dong
Academic Advisor: Prof. Zhenjiang Hu

Service and Teaching

  • Reviewer: The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR 2024)
  • Reviewer: IEEE Robotics and Automation Letters (RA-L 2024)
  • Reviewer: 2024 IEEE International Conference on Robotics and Automation (ICRA 2024)
  • Program Committee: The 38th Annual AAAI Conference on Artificial Intelligence (AAAI 2024)
  • Reviewer: Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023)
  • Reviewer: International Conference on Computer Vision (ICCV 2023)
  • Reviewer: The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR 2023)
  • Teaching Assistant: Probability Theory and Statistics (A) (Spring 2023)
  • Teaching Assistant: Study and Practice on Topics of Frontier Computing (II) (Spring 2023) [Assignment]
  • Volunteer: Conference on Web and Internet Economics (WINE 2020)

Recent Talks

  • [Aug 20] Student Representative Speech at the Annual School Assembly of EECS [Manuscript]
  • [May 27] "RLAfford: End-to-End Affordance Learning for Robotic Manipulation" at Turing Student Forum (Outstanding Presentation Award) [Video]
  • [May 13] "Agent Manipulation Skill Learning through Reinforcement Learning" at School of EECS Research Exhibition (Outstanding Presentation Award) [Poster]
  • [Mar 4] "Research Experience Sharing - PKU Undergraduates' Perspective" for Tong Class
  • [Jan 30] "Recent Advances in Model Predictive Control" at BIGAI [Video]
  • [Dec 7 (UTC time)] Winner presentation at the NeurIPS 2022 MyoChallenge workshop [Link / Slides]
  • [TBD] "Agent Manipulation Skill Learning through Reinforcement Learning" at CS Peer Talks at CFCS [Link]
  • [TBD] "End-to-End Affordance Learning for Robotic Manipulation" at CFCS Frontier Storm at CFCS [Poster]

Selected Awards and Honors

  • 2023: May Fourth Scholarship (highest-level scholarship at Peking University, awarded to 125 of 65,000+ students, ¥12000 RMB)
  • 2023: Academic Innovation Award of Peking University
  • 2023: Undergraduate Research Outstanding Scholarship, President's Fund (¥4000 RMB)
  • 2023: Merit Student of Peking University
  • 2023: Peking University Dean's Scholarship (¥5000 RMB)
  • 2023: Second Place in ICCV 2023 Challenge Track 🥈
  • 2023: Peking University Education Foundation Scholarship (¥4500 RMB)
  • 2023: Outstanding Research Presentation Award in Turing Student Research Forum
  • 2023: Peking University School of EECS Outstanding Research Presentation Award
  • 2022: Beijing Institute for General Artificial Intelligence (BIGAI) Outstanding Contribution Award (¥6500 RMB)
  • 2022: Center on Frontiers of Computing Studies (CFCS) Outstanding Student
  • 2022: John Hopcroft Scholarship (¥3000 RMB)
  • 2022: First Place in NeurIPS 2022 Challenge Track (€3000) 🏅
  • 2022: Peking University Research Excellence Award
  • 2022: Peking University Dean's Scholarship (¥15000 RMB)
  • 2021: SenseTime Scholarship (youngest recipient, 30 awarded per year in China, ¥20000 RMB)
  • 2021: Peking University Study Excellence Award
  • 2021: Peking University Qinjin Scholarship (¥10000 RMB)
  • 2021: John Hopcroft Scholarship (¥3000 RMB)
  • 2021: Ministry of Education Top Talent Program Scholarship (¥1000 RMB)
  • 2019: Chinese Physics Olympiad (CPHO) Gold Medalist 🏅
  • 2017: Ranked 1st of ~100k in the Tianjin High School Entrance Examination (first place by raw score) 🏅

  • This template is modified from Jiayi Ni💗's website and Jon Barron's website.