UCLA Chenfanfu Jiang’s Team Publishes New Research on Controlling 3D Objects in Virtual Reality with Gaussian Splatting Rendering

The Vision Pro is too expensive to play with for now, so when can we fast-forward to immersive 3D gaming?

Now there is a new study, on real-time control of objects generated by 3D Gaussian Splatting in VR, that takes a big step toward making this a reality.


For example, if you ride this iron horse, one careless move will get you into trouble…

Another example is training this fierce dog.

Led by Chenfanfu Jiang of the UCLA Artificial Intelligence and Visual Computing Laboratory, researchers from UCLA, the University of Hong Kong, Zhejiang University, Style3D, CMU, the University of Utah and Amazon have proposed a VR system called VR-GS. Among the authors is Huamin Wang, a well-known figure in graphics.


Let’s see specifically how it is implemented.

Playing with 3D Gaussian Splatting in VR

In this study, the team mainly made the following three contributions:

  • A high-fidelity immersive VR system was developed and extensively evaluated.

  • Real-time 3D content interaction: The system is engineered with a human-centered focus.

  • Comprehensive system integration: the system brings together technologies such as 3D Gaussian Splatting, scene segmentation, image rendering, physics-based real-time solvers and a new deformable-geometry embedding algorithm for rendering.

Of course, the core is the proposed physics-aware interactive VR system, VR-GS.

As the name suggests, it integrates 3D Gaussian Splatting (GS) and Extended Position-Based Dynamics (XPBD). The latter is a highly adaptable and consistent physics simulator for real-time deformation simulation.
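The article does not spell out the XPBD solver, but to give a feel for how position-based dynamics handles deformation in real time, here is a minimal sketch of one XPBD substep over distance constraints. All names and parameters are illustrative, not taken from VR-GS:

```python
import numpy as np

def xpbd_distance_step(x, pairs, rest_len, inv_mass, compliance, dt, lagrange):
    """One XPBD substep over distance constraints.

    x: (N, 3) particle positions; pairs: (M, 2) particle index pairs;
    rest_len: (M,) rest lengths; inv_mass: (N,) inverse masses;
    compliance: constraint compliance (0 = perfectly stiff);
    lagrange: (M,) accumulated Lagrange multipliers, reset each frame.
    """
    alpha = compliance / (dt * dt)  # time-step-scaled compliance
    for k, (i, j) in enumerate(pairs):
        d = x[i] - x[j]
        dist = np.linalg.norm(d)
        if dist < 1e-9:
            continue
        n = d / dist
        c = dist - rest_len[k]                      # constraint violation
        w = inv_mass[i] + inv_mass[j]
        dlam = (-c - alpha * lagrange[k]) / (w + alpha)
        lagrange[k] += dlam
        x[i] += inv_mass[i] * dlam * n              # project positions
        x[j] -= inv_mass[j] * dlam * n
    return x
```

With zero compliance, a single pass projects each constraint exactly; in practice several substeps per frame are iterated, which is what makes the method fast and stable enough for VR frame rates.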

Since the simulation and rendering processes use different geometric representations, it is difficult to directly couple the simulator with the 3D Gaussian kernels.

To solve this problem, the researchers built a tetrahedral cage around each segmented group of Gaussian kernels, embedding the kernels into the corresponding mesh. The deformation of the mesh, driven by XPBD, then guides the deformation of the GS kernels.
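The standard way to realize such an embedding is barycentric interpolation: each kernel center stores its barycentric coordinates inside its host tetrahedron, and when XPBD moves the tet's vertices, the center is re-interpolated. A minimal sketch (the function names are my own, not from the paper):

```python
import numpy as np

def barycentric_coords(p, tet):
    """Barycentric coordinates of point p inside tetrahedron tet (4x3)."""
    # Solve p = v0 + b1*(v1-v0) + b2*(v2-v0) + b3*(v3-v0) for (b1, b2, b3)
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    b = np.linalg.solve(T, p - tet[0])
    return np.array([1.0 - b.sum(), *b])

def deform_point(weights, tet_deformed):
    """Interpolate the deformed kernel center from the deformed tet vertices."""
    return weights @ tet_deformed
```

Because the weights are computed once in the rest pose, moving each kernel per frame costs only a 4-vertex weighted sum, which keeps the rendering side in lockstep with the simulation at VR rates.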

Starting from multi-view images, the pipeline cleverly combines scene reconstruction, segmentation and inpainting using Gaussian kernels. It further integrates collision detection, shadow casting and other techniques.

They also noticed that naive embedding leads to spiky artifacts in the Gaussian kernels, so they proposed a two-level embedding method: each Gaussian kernel is embedded into a local tetrahedron, and the vertices of that local tetrahedron are independently embedded into the global mesh.
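In other words, the interpolation is composed twice: the global mesh drives the local tet's vertices, and the local tet drives the kernel, which smooths out the deformation each kernel sees. A hedged sketch of that composition, under the same barycentric-weight assumption as above (array shapes and names are illustrative):

```python
import numpy as np

def two_level_deform(local_w, vert_w, vert_tets, global_pos):
    """Deform one Gaussian center with two-level embedding.

    local_w:    (4,) barycentric weights of the kernel in its local tet
    vert_w:     (4, 4) global barycentric weights, one row per local tet vertex
    vert_tets:  (4, 4) global vertex indices of the tet each local vertex lives in
    global_pos: (V, 3) deformed positions of the global mesh vertices
    """
    # Level 1: each local tet vertex follows the global mesh it is embedded in
    local_verts = np.einsum('ij,ijk->ik', vert_w, global_pos[vert_tets])
    # Level 2: the kernel center follows its (now deformed) local tet
    return local_w @ local_verts
```

Since each kernel only ever sees its local tet, a sharp fold in the global mesh is averaged through four independent interpolations rather than hitting the kernel directly, which is plausibly why the spikes disappear.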

The difference is quite obvious.

In the user study, the system received positive reviews for ease of use, latency, functionality and overall satisfaction.

Research Team

This research was completed by researchers from UCLA, the University of Hong Kong, Zhejiang University, Style3D, the University of Utah, CMU and Amazon.

Among them, Chenfanfu Jiang of the UCLA Artificial Intelligence and Visual Computing Laboratory led the project, with equal contributions from Ying Jiang, Chang Yu, Tianyi Xie, and Xuan Li.

Interested readers can check out the link below to learn more~

https://yingjiang96.github.io/VR-GS/
