Pengyuan Wang

Position: PhD Candidate
Office:
Chair for Computer Aided Medical Procedures & Augmented Reality
Fakultät für Informatik
Technische Universität München
Boltzmannstr. 3
85748 Garching bei München
MI.03.13.040
Email: pengyuan.wang@tum.de

Research Interest:
I am a PhD student at TUM CAMP working on 6D object pose estimation for robotic applications. We work closely with our industry partners on cutting-edge problems involving large numbers of objects. Feel free to contact me if you are interested in object pose estimation and have experience in 3D computer vision.

Student Thesis:
Available: 6D Pose Estimation without Forgetting (Guided Research) Link
Running: Multiview Hand Pose Estimation under Occlusion Link
                Few-Shot Learning for 6D Object Pose Estimation Link
                Self-Supervision for Transparent Category-level Pose Estimation Link
Finished: Deep Synthetic Polarimetric 6D Object Pose Estimation (Master Thesis, SS 2021)
                Light Source Estimation for Photorealistic Object Rendering (Guided Research, SS 2021)

News:
1 paper accepted to ECCV 2022
1 paper accepted to CVPR 2022
1 paper accepted to IROS 2021

Projects:

PhoCaL: A 6D Pose Multi-Modal Dataset with Photometrically Challenging Objects
Object pose estimation is crucial for robotic applications and augmented reality. To provide the community with a benchmark with high-quality ground-truth annotations, we introduce PhoCaL, a multi-modal dataset for category-level object pose estimation with photometrically challenging objects. PhoCaL comprises 60 high-quality 3D models of household objects across 8 categories, including highly reflective, transparent and symmetric objects. We developed a novel robot-supported multi-modal (RGB, depth, polarisation) data acquisition and annotation process. It ensures sub-millimeter pose accuracy for opaque textured, shiny and transparent objects, without motion blur and with accurate camera synchronisation. PhoCaL Dataset Download [Link]
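For readers new to this kind of data, below is a minimal sketch of how one multi-modal sample could be loaded; the folder layout, file names and pose format are illustrative assumptions, not the official dataset structure.

# Minimal sketch of reading one multi-modal (RGB, depth, polarisation) sample.
# The directory layout, file names and pose format are hypothetical examples.
import json
import numpy as np
import cv2


def load_sample(scene_dir, frame_id):
    """Load RGB, depth and polarisation images plus a 4x4 object pose."""
    rgb = cv2.imread(f"{scene_dir}/rgb/{frame_id:06d}.png", cv2.IMREAD_COLOR)
    # Depth is assumed to be stored as a 16-bit PNG in millimetres.
    depth_mm = cv2.imread(f"{scene_dir}/depth/{frame_id:06d}.png", cv2.IMREAD_UNCHANGED)
    depth_m = depth_mm.astype(np.float32) / 1000.0
    # Four polarisation-angle images (0, 45, 90, 135 degrees).
    pol = [cv2.imread(f"{scene_dir}/pol/{frame_id:06d}_{a}.png", cv2.IMREAD_GRAYSCALE)
           for a in (0, 45, 90, 135)]
    # Ground-truth pose assumed to be a 4x4 camera-from-object transform.
    with open(f"{scene_dir}/poses/{frame_id:06d}.json") as f:
        pose = np.array(json.load(f)["obj_pose"], dtype=np.float32).reshape(4, 4)
    return rgb, depth_m, pol, pose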

DemoGrasp: Few-Shot Learning for Robotic Grasping with Human Demonstration
Current 6D pose estimation networks support only a limited number of objects and require extensive training with either synthetic or real annotations, which makes them unsuitable for robotic grasping in changing indoor environments. We therefore propose a method that estimates object shape along with grasp points on the fly from a sequence of RGB-D images. The learned knowledge is then leveraged to guide the robot to detect and grasp the object in the real environment. We conducted exhaustive experiments in both synthetic and real environments, and our method outperforms state-of-the-art grasping networks.
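As an illustration of the underlying idea (not the DemoGrasp implementation itself), the sketch below fuses an RGB-D sequence into a single point cloud in a common reference frame, the kind of on-the-fly shape estimate that grasp points can then be defined on; the camera poses are assumed to come from, e.g., the robot's forward kinematics during the demonstration.

# Illustrative sketch: back-project each depth frame and merge into one cloud.
import numpy as np


def backproject(depth_m, K):
    """Lift a depth map (in metres) to camera-frame 3D points using intrinsics K."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels


def fuse_sequence(depths, cam_poses, K):
    """Transform each frame's points into the world frame and concatenate.

    cam_poses[i] is assumed to be a 4x4 world-from-camera transform.
    """
    clouds = []
    for depth_m, T_wc in zip(depths, cam_poses):
        pts_c = backproject(depth_m, K)
        pts_w = pts_c @ T_wc[:3, :3].T + T_wc[:3, 3]
        clouds.append(pts_w)
    return np.concatenate(clouds, axis=0)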

Teachings:
Project Course: Advanced Topics in 3D Computer Vision (SS 2022)
Project Course: Praktikum on 3D Computer Vision (WS 2021/22)
Seminar Course: Recent Trends in 3D Computer Vision and Deep Learning (SS 2021)

List of Publications:
Polarimetric Pose Prediction, D. Gao, Y. Li, P. Ruhkamp, I. Skobleva, M. Wysocki, H. Jung, P. Wang, A. Guridi, B. Busam, ECCV, 2022.
PhoCaL: A Multi-Modal Dataset for Category-Level Object Pose Estimation with Photometrically Challenging Objects, P. Wang *, H. Jung *, Y. Li, S. Shen, R. Srikanth, L. Garattoni, S. Meier, N. Navab, B. Busam, CVPR, 2022. * Equal Contribution
DemoGrasp: Few-Shot Learning for Robotic Grasping with Human Demonstration, P. Wang *, F. Manhardt *, L. Minciullo, L. Garattoni, S. Meier, N. Navab, B. Busam, IROS, 2021. * Equal Contribution