Research Overview
My research focuses on robotics, reinforcement learning, and computer vision, with a broad curiosity about computer science, mathematics, physics, and art.
My current research centers on enabling intelligent systems to acquire generalizable, reliable, and complex manipulation skills while remaining deployable in the real world. In parallel, I am investigating learning from interaction, which has the potential to greatly reduce the need for perceptual data annotation and to enable scalable training of large models.
Publications
OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research
Jiaming Ji*,
Jiayi Zhou*,
Borong Zhang*,
Juntao Dai,
Xuehai Pan,
Ruiyang Sun,
Weidong Huang,
Yiran Geng,
Mickel Liu,
Yaodong Yang
ArXiv
/
Code
/
Documentation
/
Community
Under Review 2023
OmniSafe is an infrastructural framework designed to accelerate safe reinforcement learning (RL) research.
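A common primitive shared by the algorithms such a framework covers is the Lagrangian relaxation of the constrained (CMDP) objective: maximize reward return subject to a cost-return limit. The minimal, library-independent sketch below illustrates that idea only; the function names and the reward normalization are my own, not OmniSafe's API.

```python
# Constrained RL via Lagrangian relaxation: maximize J_r(pi) subject to
# J_c(pi) <= d. Dual ascent on the multiplier lambda turns the hard
# constraint into an adaptive reward penalty.

def lagrangian_update(lmbda, cost_return, cost_limit, lr=0.05):
    """One dual-ascent step: lambda grows while the constraint is violated
    and shrinks (never below zero) once the policy is under the limit."""
    return max(0.0, lmbda + lr * (cost_return - cost_limit))

def penalized_reward(reward, cost, lmbda):
    """Reward the policy actually optimizes under the relaxation
    (normalized by 1 + lambda to keep its scale stable)."""
    return (reward - lmbda * cost) / (1.0 + lmbda)

# Constraint violated (cost 30 > limit 25), so lambda increases.
lam = lagrangian_update(1.0, cost_return=30.0, cost_limit=25.0)
print(lam)  # 1.25
```

In practice the multiplier update runs once per training epoch, interleaved with ordinary policy-gradient steps on the penalized reward.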
ReDMan: Reliable Dexterous Manipulation with Safe Reinforcement Learning
Yiran Geng*,
Jiaming Ji*,
Yuanpei Chen*,
Haoran Geng,
Fangwei Zhong,
Yaodong Yang
Paper
/
Code
Under Review 2023
We introduce ReDMan, an open-source simulation platform that provides a standardized implementation of safe RL algorithms for Reliable Dexterous Manipulation.
SafePO: A Benchmark for Safe Policy Optimization
Jiaming Ji*,
Yiran Geng*,
Yifan Zhong,
Juntao Dai,
Long Yang,
Yaodong Yang
Paper
/
Code
Under Review 2023
This study offers a unified, highly-optimized, and extensible safe RL algorithm benchmark (SafePO), which benchmarks popular safe policy learning algorithms across a list of typical environments.
PartManip: Learning Part-based Cross-Category Object Manipulation Policy from Point Cloud Observations
Haoran Geng*,
Ziming Li*,
Yiran Geng,
Jiayi Chen,
Hao Dong,
He Wang
ArXiv
/
Project Page
CVPR 2023
We introduce a large-scale, part-based, cross-category object manipulation benchmark with tasks in realistic, vision-based settings.
RLAfford: End-to-End Affordance Learning for Robotic Manipulation
Yiran Geng*,
Boshi An*,
Haoran Geng,
Yuanpei Chen,
Yaodong Yang,
Hao Dong
ArXiv
/
Project Page
/
Video
/
Code
/
Code (Renderer)
/
Dataset
/
Media (CFCS)
/
Media (PKU)
ICRA 2023
In this study, we take advantage of visual affordance by using the contact information generated during the RL training process to predict contact maps of interest.
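The bookkeeping behind that idea can be pictured as accumulating contact events onto the nearest points of an object point cloud, then normalizing into a per-point contact map. The NumPy sketch below is a hypothetical illustration of that accumulation step; the function names, the radius threshold, and the normalization are mine, not the paper's implementation.

```python
import numpy as np

def accumulate_contacts(points, contacts, counts, radius=0.02):
    """Add one batch of observed contact positions onto a running
    per-point counter.

    points   : (N, 3) object point cloud
    contacts : (M, 3) contact positions gathered during RL rollouts
    counts   : (N,)   running contact counter, updated in place
    """
    for c in contacts:
        dist = np.linalg.norm(points - c, axis=1)
        hit = dist < radius
        if hit.any():
            counts[hit] += 1.0
    return counts

def contact_map(counts):
    """Normalize counts into a [0, 1] per-point affordance score."""
    peak = counts.max()
    return counts / peak if peak > 0 else counts

pts = np.zeros((4, 3))
pts[1] = [1, 0, 0]
pts[2] = [0, 1, 0]
pts[3] = [1, 1, 0]
counts = np.zeros(4)
accumulate_contacts(pts, np.array([[1.0, 0.0, 0.001]]), counts)
print(contact_map(pts_counts := counts))  # only point 1 scores 1.0
```

A map like this can then serve as a dense prediction target for a perception network, replacing manual affordance labels.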
GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF
Qiyu Dai*,
Yan Zhu*,
Yiran Geng,
Ciyu Ruan,
Jiazhao Zhang,
He Wang
ArXiv
/
Project Page
/
Code
ICRA 2023
We propose a multiview RGB-based 6-DoF grasp detection network, GraspNeRF, that leverages the generalizable neural radiance field (NeRF) to achieve material-agnostic object grasping in clutter.
GenDexGrasp: Generalizable Dexterous Grasping
Puhao Li*,
Tengyu Liu*,
Yuyang Li,
Yiran Geng,
Yixin Zhu,
Yaodong Yang,
Siyuan Huang
ArXiv
/
Project Page
/
Code
/
Dataset
/
Video
/
Media
ICRA 2023
This paper introduces GenDexGrasp, a versatile dexterous grasping method that can generalize to unseen hands.
MyoChallenge: Learning contact-rich manipulation using a musculoskeletal hand
Yiran Geng,
Boshi An,
Yifan Zhong,
Jiaming Ji,
Yuanpei Chen,
Hao Dong,
Yaodong Yang
Paper (Coming Soon)
/
Challenge Page
/
Code
/
Slides
/
Talk
/
Award
/
Media (BIGAI)
/
Media (CFCS)
/
Media (PKU-EECS)
/
Media (PKU-IAI)
/
Media (PKU)
/
Media (China Youth Daily)
First Place in NeurIPS 2022 Challenge Track (1st of 340 submissions from 40 teams)
To appear in PMLR
Reconfiguring a die to match desired goal orientations. This task requires delicate coordination of various muscles to manipulate the die without dropping it.
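Success in a reorientation task of this kind is typically judged by the angular distance between the current and goal orientations, expressed as unit quaternions. The helper below is an illustrative sketch of that metric, not the challenge's official scoring code.

```python
import math

def quat_angle(q1, q2):
    """Smallest rotation angle (radians) between two unit quaternions
    in (w, x, y, z) order. Takes |<q1, q2>| so that q and -q, which
    represent the same rotation, give zero distance."""
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return 2.0 * math.acos(dot)

identity = (1.0, 0.0, 0.0, 0.0)
quarter_turn_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(quat_angle(identity, quarter_turn_z))  # ~1.5708 (90 degrees)
```

An episode counts as successful when this angle stays below a small tolerance for the required hold duration.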
Bi-DexHands: Towards Human-Level Bimanual Dexterous Manipulation
Yuanpei Chen,
Yiran Geng,
Fangwei Zhong,
Jiaming Ji,
Jiechuan Jiang,
Zongqing Lu,
Hao Dong,
Yaodong Yang
Paper (Long Version)
/
ArXiv (Short Version)
/
Project Page
/
Code
Long Version: T-PAMI 2023 Under Review
Short Version: NeurIPS 2022
We propose a bimanual dexterous manipulation benchmark according to literature from cognitive science for comprehensive reinforcement learning research.
GraspARL: Dynamic Grasping via Adversarial Reinforcement Learning
Tianhao Wu*,
Fangwei Zhong*,
Yiran Geng,
Yaodong Yang,
Yizhou Wang,
Hao Dong
ArXiv
This study is the first attempt to formulate the dynamic grasping problem as a “move-and-grasp” game and use adversarial RL to jointly train the grasping policy and the object-moving strategy.
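The adversarial structure can be sketched as a zero-sum reward pairing: whatever the grasper earns, the mover loses, so the mover learns escape trajectories that stress the grasping policy. The toy shaping below is purely illustrative; the terms and coefficients are mine, not the paper's actual reward.

```python
def grasper_reward(dist_to_object, grasped):
    """Grasper is paid for closing the distance and for a successful grasp."""
    return (10.0 if grasped else 0.0) - dist_to_object

def mover_reward(dist_to_object, grasped):
    """Zero-sum adversary: the mover earns exactly what the grasper loses."""
    return -grasper_reward(dist_to_object, grasped)

print(grasper_reward(0.5, True))  # 9.5
print(mover_reward(0.5, True))    # -9.5
```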
Before 2022
Ministry of Education Talent Program Thesis (Physics Track)
Yiran Geng,
Haoran Geng,
Xintian Dong,
Yue Meng,
Xujv Sun,
Houpu Niu
PDF
Selected as the Outstanding Thesis of the National Physics Forum 2018
During my high school years, I was selected for the Ministry of Education Talent Program and conducted physics research at Nankai University, presenting my thesis at the National Physics Forum 2018.
Experience
Massachusetts Institute of Technology (MIT)
2023.01 - Present
Visiting Researcher
Research Advisor: Prof. Josh Tenenbaum and Prof. Chuang Gan
Beijing Institute for General Artificial Intelligence (BIGAI)
2022.05 - Present
Research Intern
Research Advisor: Prof. Yaodong Yang
Academic Advisor: Prof. Song-Chun Zhu
Turing Class, Peking University (PKU)
2020.09 - Present
Undergraduate Student
Research Advisor: Prof. Hao Dong
Academic Advisor: Prof. Zhenjiang Hu
Service and Teaching
Reviewer: Conference on Neural Information Processing Systems (NeurIPS 2023)
Reviewer: International Conference on Computer Vision (ICCV 2023)
Reviewer: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023)
Teaching Assistant: Probability Theory and Statistics (A) (Spring 2023)
Teaching Assistant: Study and Practice on Topics of Frontier Computing (II) (Spring 2023) [Assignment]
Volunteer: Conference on Web and Internet Economics (WINE 2020)
Recent Talks
[May 13] "RLAfford: End-to-End Affordance Learning for Robotic Manipulation" at School of EECS Research Exhibition (Outstanding Presentation Award)
[Poster]
[Mar 4] "Research Experience Sharing - PKU Undergraduates' Perspective" for Tong Class
[Jan 30] "Recent Advances in Model Predictive Control" at BIGAI
[Video]
[Dec 7 (UTC time)] Winner presentation at the NeurIPS 2022 MyoChallenge workshop
[Link / Slides]
[TBD] "Agent Manipulation Skill Learning through Reinforcement Learning" at CS Peer Talks at CFCS
[Link]
[TBD] "End-to-End Affordance Learning for Robotic Manipulation" at CFCS Frontier Storm
[Poster]
Selected Awards and Honors
2023: Peking University Dean's Scholarship (¥5000 RMB)
2023: Peking University School of EECS Outstanding Research Presentation Award (¥1500 RMB)
2022: Beijing Institute for General Artificial Intelligence (BIGAI) Outstanding Contribution Award (¥6500 RMB)
2022: Center on Frontiers of Computing Studies (CFCS) Outstanding Student
2022: John Hopcroft Scholarship (¥3000 RMB)
2022: First Place in NeurIPS 2022 Challenge Track (€3000) 🏅
2022: Peking University Research Excellence Award
2022: Peking University Dean's Scholarship (¥15000 RMB)
2021: SenseTime Scholarship (Youngest winner, ¥20000 RMB)
2021: Peking University Study Excellence Award
2021: Peking University Qinjin Scholarship (¥10000 RMB)
2021: John Hopcroft Scholarship (¥3000 RMB)
2021: Ministry of Education Top Talent Program Scholarship (¥1000 RMB)
2019: Chinese Physics Olympiad (CPHO) Gold Medalist 🏅
2017: Rank 1st/100k in National High School Entrance Examination