RoboCo Campaign in VR (and More!)

Today, we’re joined by Lead Game Designer Luke for an in-depth overview of our latest RoboCo dev efforts:

Closed Alpha feedback 

Our heartfelt thanks to everyone who tested the Closed Alpha for us and gave feedback! We have been reviewing, discussing, brainstorming, and concepting. Next come studio-level discussions where we prioritize which features and improvements land before the Early Access launch and which come after.

The campaign is playable in VR!

Those of you who played our sandbox-only VR mode at Closed Alpha will no longer have to wonder: We are happy to announce that the campaign of challenges is now playable in VR as well!

We’ve wanted to make the campaign properly playable in VR for, gosh, probably a couple years now. As a small team, we’ve learned to patiently remind ourselves that if we do one feature at a time, we’ll eventually check bigger items off the list of our ambitions. Still, this particular milestone has special significance for those of us who have been with the project from the start.

VR is deeply embedded in the DNA of this project: We began as a VR-only experience in 2017, and we even prototyped several challenges in that form. Both the desktop mode and the campaign that we know today came later, as we saw the potential to expand the audience and increase the fun factor of RoboCo.

As we did more work on the desktop mode, the way we handled things on desktop naturally diverged from how we had handled them in VR. Over time this made it difficult to keep the two versions at feature parity. One major impediment was the menu tablet in VR: while it looked similar to the desktop menu, it was built in its own way. We decided to consolidate our menu handling into the VR tablet, which makes it easier to bring the main menu, challenge select, HUD, results, and tutorial UI to VR, and easier to maintain them going forward.

If you used VR at Closed Alpha and recall a HUD tool belt separate from the menu tablet, these are now consolidated. This continues our general trend of making sure VR players only have one place they need to look for information besides the environment and the robot itself.

We also added a virtual theater screen for the brief intro cutscene of each challenge.

As for building robots in VR, it feels just as tactile and intuitive as ever!

VR camera improvements

We’ve been improving our VR camera, which we were not particularly surprised to find was one of the weakest-scoring features in our Closed Alpha survey results.

We knew going in that we were trying something experimental with the VR camera in Active mode (driving the robot around), because RoboCo has some unusual needs for its camera:

  • RoboCo is third person, which is not unheard of in VR, but is less common.
  • Most third person VR games use a high vantage point and/or a relatively static camera angle, like Moss or Lucky’s Tale. This works well for games where a character of a known size, shape, and move set is traversing a big environment. But…
  • RoboCo involves a robot of unknown size, shape, and move set maneuvering back and forth within an often relatively small environment. While sometimes a high vantage point works, at other times you need a different camera angle.
  • Sometimes the robot is doing fine motor skill interactions within that environment, like using a claw to pick up a pen, so you may even need to zoom in close.
  • We can’t always predict what angle you need when, since we don’t necessarily know your goals or your robot’s move set.

This is why, on desktop, we offer a follow camera mode that tethers to the robot but can also be manually rotated or zoomed. In VR, however, smooth camera movement can make people motion sick, so it’s not for everyone, and it makes more sense for us to offer it as an alternative option than as the default.

This is how we came to develop the Step Follow camera mode: essentially, a camera that stays put until the robot moves past a threshold, and then steps to a new position, height, or rotation as needed. This gives it a “keep up with the robot” benefit that is philosophically similar to the Follow camera on desktop, but without the smooth motion that could be nauseating in VR, and while accounting for several key differences in VR, such as how players can move their head.
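If you’re curious how that works under the hood, here’s a rough sketch of the idea in Python. To be clear, this is illustrative only: the class, thresholds, and numbers are made up for this post, not taken from our actual code.

```python
import math

# Illustrative sketch of the Step Follow idea (not RoboCo's actual code):
# the camera holds still until the robot drifts past a distance, height,
# or angle threshold, then steps to a new framing in a single jump.

DISTANCE_THRESHOLD = 3.0   # meters the robot may move before the camera steps
HEIGHT_THRESHOLD = 1.0     # meters of vertical drift allowed
ANGLE_THRESHOLD = 45.0     # degrees of yaw drift allowed

class StepFollowCamera:
    def __init__(self, robot_position, robot_yaw):
        # Frame the robot once at startup, then hold that framing.
        self.position = self._framing_for(robot_position, robot_yaw)
        self.anchor_position = robot_position
        self.anchor_yaw = robot_yaw

    def update(self, robot_position, robot_yaw):
        moved = math.dist(robot_position[:2], self.anchor_position[:2])
        risen = abs(robot_position[2] - self.anchor_position[2])
        turned = abs((robot_yaw - self.anchor_yaw + 180) % 360 - 180)

        # Only when the robot crosses a threshold does the camera "step" to a
        # new position, height, and rotation -- no smooth interpolation in
        # between, which is what keeps it comfortable in VR.
        if moved > DISTANCE_THRESHOLD or risen > HEIGHT_THRESHOLD or turned > ANGLE_THRESHOLD:
            self.position = self._framing_for(robot_position, robot_yaw)
            self.anchor_position = robot_position
            self.anchor_yaw = robot_yaw

    def _framing_for(self, robot_position, robot_yaw):
        # Place the camera a fixed offset behind and above the robot.
        offset = 4.0
        x = robot_position[0] - offset * math.cos(math.radians(robot_yaw))
        y = robot_position[1] - offset * math.sin(math.radians(robot_yaw))
        z = robot_position[2] + 2.0
        return (x, y, z)
```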

However, the version of this camera mode that people tried in Closed Alpha was a rough prototype and was paired with clunky controls for manually adjusting the camera, plus a settings panel that was out of date and didn’t work. Okay for prototyping but that’s about it!

No more! As with many things in VR, we imagine it will be easier to appreciate the differences once you are hands-on with VR again, but we’ll try to give a preview here.

First off, manually adjusting the camera is a brand-new experience. In both Edit and Active modes, holding both grip buttons now lets you adjust the position, scale, and rotation of the world relative to you, the player. If you’ve used similar functionality in Google Blocks or Tilt Brush, this should come pretty naturally. It is also quick to use, and it eliminates the awkward toggling we used to have between the controls for the Active mode camera and the controls for moving the robot!
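For anyone who hasn’t tried that style of two-handed world grab before, here’s a rough sketch of the general technique in Python. Again, this is purely illustrative and simplified to 2D; the real interaction works on full 3D transforms and pivots around your hands.

```python
import math

# Rough sketch of two-handed "grab the world" adjustment, in the spirit of
# Google Blocks / Tilt Brush (illustrative only -- not RoboCo's code).
# While both grips are held, moving the hands together translates the world,
# pulling them apart scales it, and twisting them around each other rotates it.

class WorldGrab:
    def __init__(self):
        self.world_offset = [0.0, 0.0]   # world translation (2D for brevity)
        self.world_scale = 1.0
        self.world_yaw = 0.0             # rotation about the vertical axis
        self.prev = None                 # controller state from the last frame

    def update(self, left, right, both_grips_held):
        if not both_grips_held:
            self.prev = None
            return

        mid = [(left[0] + right[0]) / 2, (left[1] + right[1]) / 2]
        span = math.dist(left, right)
        angle = math.degrees(math.atan2(right[1] - left[1], right[0] - left[0]))

        if self.prev is not None:
            prev_mid, prev_span, prev_angle = self.prev
            # Hands moving together -> translate the world...
            self.world_offset[0] += mid[0] - prev_mid[0]
            self.world_offset[1] += mid[1] - prev_mid[1]
            # ...hands spreading apart or squeezing together -> scale it...
            if prev_span > 1e-6:
                self.world_scale *= span / prev_span
            # ...hands twisting around each other -> rotate it.
            self.world_yaw += angle - prev_angle

        self.prev = (mid, span, angle)
```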

The Active mode camera in VR now has several modes and settings:

You can choose between Step Follow, Smooth Follow, and Free modes. We’ve improved the defaults here, and advanced options let you customize things further to your preferences.

For the Step Follow camera mode, a UI now appears when you approach the edge of a threshold, so it’s clearer what is going on and easier to visualize the thresholds if you customize the settings.

If you have a stronger stomach for VR, you can have a Smooth Follow experience more like a traditional desktop or console game.

If you want a Moss-like god view, you can position the camera at a high vantage point, and you can use Free camera mode if you want the camera to stay static.

Challenge iteration

Thanks to all the awesome feedback, we continue to iterate on our challenges, add more secret objectives to challenges that didn’t yet have a full set as of Closed Alpha, and make the art even better.

Our secret 11th challenge (shh!) that we intentionally held out of the Closed Alpha just received its first art pass. It looks great, and we are super excited to not show you anything about it. :’D

Human systems refactoring

This is more of a behind-the-scenes thing, but we’ve been refactoring the code for our human systems. Refactoring means rewriting code in a way that is structured better but yields the same outcomes for the player’s current experience. That might sound like a strange use of time, but it can be critical in development.

Our human systems were essentially designer prototype code that we kept building on top of, because engineering had their hands too full with other areas of the game to take the code over from design and rewrite it. Now that we have the chance to do that, the long-term benefit is that it should be much easier to change and add human behavioral logic in the future. Since we intend to launch as an Early Access game, building on a solid foundation is very important to ensure that we can respond to feedback and expand the game over time.
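To make the idea of refactoring concrete, here’s a toy example in Python (purely illustrative, and nothing to do with our actual human code): both versions give the player the same outcomes, but the second is much easier to extend with new behaviors.

```python
# Before: one tangle of conditions that grew out of a prototype.
def pick_reaction_before(human):
    if human["hungry"] and not human["holding_food"]:
        return "seek_food"
    elif human["hungry"] and human["holding_food"]:
        return "eat"
    else:
        return "idle"

# After: the same outcomes, but each behavior is a small named rule checked
# in order, so adding new behavioral logic means adding a rule rather than
# editing a growing tangle of if/else branches.
RULES = [
    (lambda h: h["hungry"] and h["holding_food"], "eat"),
    (lambda h: h["hungry"], "seek_food"),
]

def pick_reaction_after(human):
    for condition, reaction in RULES:
        if condition(human):
            return reaction
    return "idle"
```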

That’s all our updates for now! Thanks for following along with our progress. See you next time.