
Hang Gao

I am a Ph.D. candidate at UC Berkeley, working on computer vision and graphics, advised by Angjoo Kanazawa.

I did my undergrad at Jiao Tong University and got my master's from Columbia. In the past, I have interned at Microsoft Research Asia, Adobe Research, and Luma AI.

This summer, I am doing an internship with Varun Jampani at Stability AI on 3D diffusion models.

Research

I am interested in modeling 3D dynamics in the wild. I have been working on non-rigid reconstruction and neural rendering. Recently, I have started exploring and training generative priors from 3D data and videos.

SOAR: Self-Occluded Avatar Recovery from a Single Video
Zhuoyang Pan*, Angjoo Kanazawa, Hang Gao*
In submission, 2024

We recover human avatars from internet videos with heavy self-occlusion, where people show only parts or sides of their body.

Shape of Motion: 4D Reconstruction from a Single Video
Qianqian Wang*, Vickie Ye*, Hang Gao*, Jake Austin, Zhengqi Li, Angjoo Kanazawa
arXiv, 2024
project page / arXiv / code

We represent a dynamic 3D scene with 4D Gaussians, which enables accurate 3D tracking and dynamic view synthesis.
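
A minimal sketch of the flavor of representation involved: each Gaussian's center is posed over time by blending a small set of shared rigid transforms with per-Gaussian weights. Names, shapes, and the blending scheme below are illustrative, not the paper's exact formulation.

import numpy as np

def blend_means(means_canon, R_bases, t_bases, weights):
    """
    means_canon: (N, 3) canonical-frame Gaussian centers
    R_bases:     (B, 3, 3) per-basis rotations at the query time
    t_bases:     (B, 3)    per-basis translations at the query time
    weights:     (N, B)    per-Gaussian blending coefficients (rows sum to 1)
    returns:     (N, 3)    Gaussian centers posed at the query time
    """
    # Apply every basis transform to every center: (B, N, 3)
    posed = np.einsum('bij,nj->bni', R_bases, means_canon) + t_bases[:, None, :]
    # Linear-blend the per-basis results with per-Gaussian weights
    return np.einsum('nb,bni->ni', weights, posed)

# Toy usage: two identity-rotation bases, one of which translates along x.
N, B = 4, 2
means = np.random.randn(N, 3)
Rs = np.stack([np.eye(3)] * B)
ts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
W = np.full((N, B), 0.5)
print(blend_means(means, Rs, ts, W))   # each center shifted by 0.5 along x

Tracking a 3D point over time then amounts to evaluating its blended center at every query timestep.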

NerfAcc: Efficient Sampling Accelerates NeRFs
Ruilong Li, Hang Gao, Matthew Tancik, Angjoo Kanazawa
ICCV, 2023
project page / arXiv / code

We build and release a toolbox that accelerates a wide range of NeRFs through efficient sampling.
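
A library-agnostic sketch of the efficient-sampling idea: keep a coarse occupancy grid over the scene and only query the network at samples that land in occupied cells. The cubic grid, AABB layout, and function below are illustrative, not nerfacc's actual API (see the code link for that).

import numpy as np

def sample_along_rays(rays_o, rays_d, occ_grid, aabb, n_samples=64):
    """Mark which ray samples fall in occupied cells; empty space is skipped."""
    t = np.linspace(0.0, 1.0, n_samples)                                 # normalized ray depths
    pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]     # (R, S, 3)
    lo, hi = aabb[:3], aabb[3:]
    idx = ((pts - lo) / (hi - lo) * occ_grid.shape[0]).astype(int)       # cubic grid assumed
    idx = np.clip(idx, 0, occ_grid.shape[0] - 1)
    keep = occ_grid[idx[..., 0], idx[..., 1], idx[..., 2]]               # (R, S) boolean mask
    return pts, keep                                                     # query the NeRF only where keep is True

# Toy usage: only the x >= 0 half of the box is occupied, so about half the samples survive.
aabb = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
occ = np.zeros((16, 16, 16), dtype=bool); occ[8:, :, :] = True
o = np.array([[-1.0, 0.0, 0.0]]); d = np.array([[2.0, 0.0, 0.0]])
pts, keep = sample_along_rays(o, d, occ, aabb)
print(keep.sum(), "of", keep.size, "samples need a network query")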

Monocular Dynamic View Synthesis: A Reality Check
Hang Gao, Ruilong Li, Shubham Tulsiani, Bryan Russell, Angjoo Kanazawa
NeurIPS, 2022
project page / arXiv / video / code

We show a discrepancy between practical captures and existing experimental protocols for dynamic view synthesis from monocular video.

Long-term Human Motion Prediction with Scene Context
Zhe Cao, Hang Gao, Karttikeya Mangalam, Qi-Zhi Cai, Minh Vo, Jitendra Malik
ECCV, 2020   (Oral Presentation)
project page / arXiv / video / code

We predict long-term, diverse human motion in 3D by understanding scene context from an image.

Deformable Kernels: Adapting Effective Receptive Fields for Object Deformation
Hang Gao*, Xizhou Zhu*, Steve Lin, Jifeng Dai
ICLR, 2020
project page / arXiv / code

By learning an instance-adaptive convolutional operator through 2D deformation in kernel space, we can adapt the effective receptive field at runtime.
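
A rough PyTorch sketch of the kernel-space deformation idea: bilinearly resample a learned kernel "scope" at offset tap positions, then run a standard convolution with the resampled kernel. In the paper the offsets are predicted per instance (and can vary per location); here they are a fixed toy tensor and all names are illustrative.

import torch
import torch.nn.functional as F

def resample_kernel(weight, offsets):
    """
    weight:  (C_out, C_in, Ks, Ks)  the learned kernel "scope"
    offsets: (K*K, 2)               per-tap 2D offsets in kernel space, in [-1, 1] grid units
    returns: (C_out, C_in, K, K)    a deformed kernel, bilinearly sampled from the scope
    """
    C_out, C_in, Ks, _ = weight.shape
    K = int(offsets.shape[0] ** 0.5)
    # Base sampling grid over the kernel scope, normalized to [-1, 1], in (x, y) order
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, K), torch.linspace(-1, 1, K), indexing='ij')
    base = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    grid = (base + offsets).view(1, K * K, 1, 2)                 # grid_sample expects (N, H, W, 2)
    scope = weight.view(1, C_out * C_in, Ks, Ks)                 # treat the kernel bank as an image
    deformed = F.grid_sample(scope, grid, align_corners=True)    # (1, C_out*C_in, K*K, 1)
    return deformed.view(C_out, C_in, K, K)

# Toy usage: deform a 3x3 kernel sampled from a 5x5 scope, then run an ordinary conv with it.
w = torch.randn(8, 3, 5, 5)
off = 0.1 * torch.randn(9, 2)            # in practice, predicted from the input per instance
x = torch.randn(1, 3, 32, 32)
y = F.conv2d(x, resample_kernel(w, off), padding=1)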

Spatio-Temporal Action Graph Networks
Roei Herzig*, Elad Levi*, Huijuan Xu*, Hang Gao, Eli Brosh, Xiaolong Wang, Amir Globerson, Trevor Darrell
ICCV Workshop, 2019
arXiv

We model video as a spatio-temporal relational graph for action recognition and find that second-order affinity (affinity between edges) is surprisingly helpful.

Disentangling Propagation and Generation for Video Prediction
Hang Gao*, Huazhe Xu, Qi-Zhi Cai, Ruth Wang, Fisher Yu, Trevor Darrell
ICCV, 2019
arXiv

High-fidelity video prediction is easier if we disentangle flow propagation from frame generation.
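
A minimal sketch of the propagate-then-generate split: copy pixels from the previous frame along a predicted flow, and let a generator fill only the dis-occluded regions. The flow, occlusion mask, and generator below are placeholders, not the paper's networks.

import torch
import torch.nn.functional as F

def propagate_then_generate(prev_frame, flow, occ_mask, generator):
    """
    prev_frame: (N, 3, H, W)  last observed frame
    flow:       (N, 2, H, W)  predicted backward flow (next -> prev), in pixels
    occ_mask:   (N, 1, H, W)  1 where the next frame is dis-occluded (cannot be copied)
    generator:  any module that inpaints the masked regions
    """
    N, _, H, W = prev_frame.shape
    # Build a sampling grid shifted by the flow, then normalize to [-1, 1]
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    grid = torch.stack([xs, ys], dim=-1).float()[None] + flow.permute(0, 2, 3, 1)
    grid[..., 0] = 2 * grid[..., 0] / (W - 1) - 1
    grid[..., 1] = 2 * grid[..., 1] / (H - 1) - 1
    warped = F.grid_sample(prev_frame, grid, align_corners=True)   # propagation: copy what we can
    inpainted = generator(torch.cat([warped, occ_mask], dim=1))    # generation: hallucinate the rest
    return warped * (1 - occ_mask) + inpainted * occ_mask

# Toy usage: with zero flow and no occlusion, the output reproduces the input (up to interpolation).
generator = torch.nn.Conv2d(4, 3, kernel_size=3, padding=1)        # stand-in for the inpainting network
x = torch.rand(1, 3, 64, 64)
out = propagate_then_generate(x, torch.zeros(1, 2, 64, 64), torch.zeros(1, 1, 64, 64), generator)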

Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks
Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang
NeurIPS, 2018
arXiv

We use learned feature augmentation to train low-shot classifiers.

AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos
Zheng Shou, Hang Gao, Lei Zhang, Kazuyuki Miyazawa, Shih-Fu Chang
ECCV, 2018
arXiv / code

We propose a weakly-supervised method for temporal action localization that maximizes the contrast between activations inside and outside the localization box.
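
A toy version of the inside-outside contrast: score a candidate segment by its mean activation minus the mean activation in a slightly inflated outer region. Boundaries, the inflation ratio, and names are illustrative; the paper's loss is optimized end-to-end over class activation sequences.

import numpy as np

def outer_inner_contrast(cas, start, end, inflate=0.25):
    """
    cas:        (T,) per-frame class activation scores for one action class
    start, end: candidate segment boundaries (frame indices, end exclusive)
    inflate:    how far to extend on each side to form the "outer" region
    Returns the inner mean minus the outer mean; higher means a better segment.
    """
    pad = max(1, int(inflate * (end - start)))
    lo, hi = max(0, start - pad), min(len(cas), end + pad)
    inner = cas[start:end].mean()
    outer_vals = np.concatenate([cas[lo:start], cas[end:hi]])
    outer = outer_vals.mean() if outer_vals.size else 0.0
    return inner - outer

# Toy usage on a sequence with one activation bump.
cas = np.array([0.1, 0.1, 0.9, 0.8, 0.9, 0.1, 0.1])
print(outer_inner_contrast(cas, 2, 5))   # high: the segment covers the bump
print(outer_inner_contrast(cas, 0, 3))   # lower: it leaks into background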

ER: Early Recognition of Inattentive Driving Events Leveraging Audio Devices on Smartphones
Xiangyu Xu, Hang Gao, Jiadi Yu, Yingying Chen, Yanmin Zhu, Guangtao Xue, Minglu Li
INFOCOM, 2017
IEEE

We develop an audio-based early recognition system for inattentive driving events using the Doppler effect.
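
A toy sketch of the sensing principle: the phone plays a near-inaudible pilot tone, and motion shifts the frequency of the recorded echo via the Doppler effect. The pilot frequency, band, and processing below are illustrative, not the paper's pipeline.

import numpy as np

def doppler_shift(audio, fs, f0=20000.0, band=500.0):
    """
    audio: microphone samples recorded while the speaker emits a pilot tone at f0 Hz
    fs:    sampling rate in Hz
    Returns the frequency offset (Hz) of the strongest component near f0;
    motion toward or away from the phone shows up as a positive or negative shift.
    """
    spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    mask = (freqs > f0 - band) & (freqs < f0 + band)
    peak = freqs[mask][np.argmax(spectrum[mask])]
    return peak - f0

# Toy usage: a 20.05 kHz tone recorded at 48 kHz reads as a +50 Hz shift.
fs = 48000
t = np.arange(0, 0.1, 1.0 / fs)
print(doppler_shift(np.sin(2 * np.pi * 20050 * t), fs))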


Yet another Jon Barron website (with minor tweaks).
Last updated Jun 2024.