I am a fourth-year Ph.D. student in the Stanford Computational Imaging Laboratory, advised by Prof. Gordon Wetzstein. My research interest lies in 3D-structure-aware neural scene representations, a novel way for AI to represent information about our 3D world. My goal is to enable AI to perform intelligent 3D reasoning, such as inferring a complete model of a scene, including its geometry, materials, and lighting, from only a few observations: a task that is simple for humans but currently impossible for AI. I have previously worked on differentiable camera pipelines, VR, and human perception.
I will join Prof. Noah Snavely's group at the Google NYC office over the summer to continue working on deep learning for scene understanding and novel view synthesis.
Our paper "DeepVoxels: Learning Persistent 3D Feature Embeddings" was accepted to CVPR as an oral!
I will be in Los Angeles from June 16 to June 21 to present the paper.
Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
NeurIPS 2019 (Oral)
DeepVoxels: Learning Persistent 3D Feature Embeddings
CVPR 2019 (Oral)
Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification
End-to-end Optimization of Optics and Image Processing for Achromatic Extended Depth of Field and Super-resolution Imaging
Saliency in VR: How do people explore virtual environments?
IEEE VR 2018
Movie Editing and Cognitive Event Segmentation in Virtual Reality Video
Towards a Machine-learning Approach for Sickness Prediction in 360° Stereoscopic Videos
IEEE VR 2018