Oct 4, 2016

Owlchemy Labs Innovates New Mixed Reality VR Techniques: Depth Sensing Camera, In-Game Compositing

Owlchemy Labs (maker of Job Simulator, one of the first Vive titles ever shown to the press) is one of the pioneering developers in the VR industry, and it's proving itself once again with a new way to shoot mixed reality video.

Before the HTC Vive launched, Owlchemy Labs released a mixed reality trailer for Job Simulator as part of its marketing campaign. Soon after, Valve got involved, creating tools of its own for filming mixed reality video, which it used for the Vive launch commercial. Valve later released instructions to the developer community that explained how to enable mixed reality rendering in Unity games.

Since then, developers, streamers, YouTubers and HTC have embraced mixed reality VR, but the workflow is cumbersome, to say the least. Producing a mixed reality video today involves rendering foreground and background objects in separate windows and compositing them with the camera footage sandwiched in between.
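That traditional pipeline amounts to a three-layer composite: the game's background layer on the bottom, chroma-keyed camera footage of the player in the middle, and the game's foreground layer on top. A minimal sketch of the idea (hypothetical NumPy code, not any actual tool's implementation; assumes pre-aligned frames and a green-screen camera feed):

```python
import numpy as np

def chroma_key_mask(frame, key=(0, 255, 0), tol=80):
    """Boolean mask of pixels that do NOT match the green-screen key color."""
    diff = np.abs(frame.astype(int) - np.array(key)).sum(axis=-1)
    return diff > tol

def composite_layers(background, camera, foreground, fg_alpha):
    """Traditional mixed reality composite: background layer first, then
    the keyed camera footage of the player, then the foreground on top."""
    out = background.copy()
    player = chroma_key_mask(camera)
    out[player] = camera[player]   # player sits over the background
    fg = fg_alpha > 0              # foreground layer occludes everything
    out[fg] = foreground[fg]
    return out
```

The key limitation is visible in the last step: the player is always behind the entire foreground layer and in front of the entire background layer, regardless of where they actually stand.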

The results from the current method are impressive, but Owlchemy thought it could do better. And so it did. Owlchemy Labs built a custom shader and plugin that allow it to record video with a depth-sensing camera. In other words, Owlchemy is bringing three-dimensional video footage directly into the game through Unity and compositing it in-game, with no reliance on third-party software to composite the scene.

Owlchemy Labs figured out a way to bring 1080p video with depth data at 30fps into the game, composite it with the rendered scene using accurate depth compensation, and render the game in stereoscopic VR inside the HMD without dipping below 90fps, all on a single PC.

By importing the depth camera data into Unity, the player becomes a Unity asset, just like any other 3D scan or render. The player's scan can cast shadows in the game environment and react to dynamic lighting effects; it effectively becomes part of the scene. Owlchemy can use this data to calculate per-pixel depth, which lets the player reach behind any object in the game, not just a pre-established foreground barrier.
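The per-pixel occlusion described above can be illustrated with a simple depth test: for each pixel, show whichever source (game render or camera footage) is closer to the viewer. This is a minimal sketch of the concept, not Owlchemy's actual shader; it assumes both depth maps are aligned and in the same units:

```python
import numpy as np

def depth_composite(game_rgb, game_depth, cam_rgb, cam_depth):
    """Per-pixel depth compositing: keep whichever source is nearer,
    so the player can stand behind or in front of any game object."""
    player_in_front = cam_depth < game_depth        # smaller depth = nearer
    return np.where(player_in_front[..., None], cam_rgb, game_rgb)
```

Because the test runs per pixel, one arm can reach behind a virtual shelf while the rest of the body stays in front of it, something a fixed foreground/background split cannot express.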
