Demoing a VR game can be a weird experience. If you’re not careful you end up with someone blindly waving their head around, while a group of onlookers watches in bemusement. To turn this into a much better event for everyone we needed a second, shared screen to bring the whole experience together.
Now that the Gear VR is public we’ve been taking Smash Hit Plunder to various events to show it off and get some valuable user feedback. We’ve got a whole bunch of new ideas and improvements out of it, so it’s definitely been worth it.
Our original prototype ran on a laptop and a DK1, and one of the handy things about this was that the output was mirrored to the laptop's screen, letting us see exactly what the player was up to, where they were looking and, hopefully, what was causing them problems. Moving to the Gear VR has been reasonably smooth (other than some optimisation work), but when it comes to user testing the lack of a mirrored screen is a bit of a pain.
Testing on a friend with just the Gear VR became a bit of an extended guessing game: “can you see a start button?”, “are you in a room?”, and my favourite, “can you see anything?”. Before going to GameCity, we knew we’d need a better way of handling this – enter the map screen!
The map screen is essentially a Windows build of our game, but with an entirely different set of scenes and menus. It uses Unity’s built-in networking to start a server, then waits for the Gear VR to connect as a client. We run our own local Wi-Fi access point, so we don’t have to deal with NAT punch-through or similar issues. Importantly, although the Gear VR starts a network client and attempts to connect to the map screen, if it doesn’t connect it just carries on as normal – the server is entirely optional and doesn’t run any gameplay logic at all.
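If you’re curious how the “optional” part works, here’s a minimal sketch. It assumes the old Network/NetworkView API that “built-in networking” meant at the time (Network.InitializeServer and Network.Connect), and the address and port are placeholders rather than our real setup:

```csharp
using UnityEngine;

// Minimal sketch of the optional map-screen link, assuming Unity's legacy
// built-in networking (pre-UNet). The map screen hosts a server; the Gear VR
// build tries to connect once and simply carries on if nothing answers.
public class OptionalMapScreenLink : MonoBehaviour
{
    public bool isMapScreen;                    // true only in the Windows build
    public string mapScreenIp = "192.168.0.10"; // placeholder address on the local Wi-Fi
    public int port = 25000;                    // placeholder port

    void Start()
    {
        if (isMapScreen)
        {
            // No NAT punch-through needed on a local access point, so useNat = false.
            Network.InitializeServer(4, port, false);
        }
        else
        {
            // Fire-and-forget: a failed connection just means no map screen is present.
            Network.Connect(mapScreenIp, port);
        }
    }

    void OnFailedToConnect(NetworkConnectionError error)
    {
        // The game carries on exactly as it would without a map screen.
        Debug.Log("No map screen found, carrying on: " + error);
    }
}
```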
You can see the map screen running on the TV in the background here:
Once connected, the map screen receives updates from the Gear VR on what’s currently happening (there’s a rough sketch of how these might be sent after the list):
- The current menu the player is on
- The current score and time remaining
- The position and orientation of the player in the world
- Any important events that have happened (e.g. picking up an item)
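To give a feel for how little is involved, something like the snippet below would cover it – again assuming the legacy RPC API, with field names invented for the example rather than lifted from our code:

```csharp
using UnityEngine;

// Rough sketch of the lightweight state updates listed above. Requires a
// NetworkView component on the same object; all names here are illustrative.
public class MapScreenUpdates : MonoBehaviour
{
    public Transform proxy;     // stand-in character, only assigned in the map-screen build
    public int score;           // filled in by gameplay code on the Gear VR
    public float timeRemaining;

    NetworkView view;
    float nextSend;

    void Awake()
    {
        view = GetComponent<NetworkView>();
    }

    void Update()
    {
        // Only the Gear VR build (the client) sends, and only if it actually connected.
        if (Network.peerType != NetworkPeerType.Client || Time.time < nextSend) return;
        nextSend = Time.time + 0.1f; // ~10 updates a second is plenty for a map view

        view.RPC("UpdatePlayerState", RPCMode.Others,
                 transform.position, transform.rotation, score, timeRemaining);
    }

    [RPC]
    void UpdatePlayerState(Vector3 position, Quaternion rotation, int newScore, float newTime)
    {
        // Runs on the map screen: place the proxy character and remember the HUD values.
        if (proxy != null)
        {
            proxy.position = position;
            proxy.rotation = rotation;
        }
        score = newScore;
        timeRemaining = newTime;
    }
}
```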
If the player is currently in a level, we load the exact same map on the map screen and use their position and orientation to place a proxy character at the same spot, then track it with an overhead camera so we can see where they are and what they’re looking at.
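The overhead camera itself is about as simple as it sounds; a sketch with guessed-at values might look like this:

```csharp
using UnityEngine;

// Sketch of the map screen's top-down camera: hover a fixed height above the
// proxy character and look straight down. The height and smoothing values are
// guesses for illustration, not tuned numbers.
public class OverheadTracker : MonoBehaviour
{
    public Transform proxy;       // the stand-in player driven by network updates
    public float height = 12f;    // how far above the map to hover
    public float smoothing = 5f;  // how quickly the camera catches up

    void LateUpdate()
    {
        if (proxy == null) return;

        Vector3 target = proxy.position + Vector3.up * height;
        transform.position = Vector3.Lerp(transform.position, target, smoothing * Time.deltaTime);

        // Look straight down, keeping world "north" at the top of the screen.
        transform.rotation = Quaternion.LookRotation(Vector3.down, Vector3.forward);
    }
}
```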
That’s not really a lot of data to sync – note that we don’t even attempt to sync all of the objects that are moving due to physics in our game. Not only would that be a lot of data, but it wouldn’t really gain us much for what we’re using it for.
There are two big advantages this gets us:
1. It removes the guessing game when we put someone in the Gear VR – we can see if they’re on the main menu, if they’re doing the tutorial, or if they’re happily playing the game. Since some people get really quiet in VR, it’s much easier than trying to ask them where they are all the time.
2. It provides a great extra for other people to watch while someone is playing. They get a general sense of what the game is about, and if they’ve just played it themselves they get to shout at their friends for getting a better/worse score than they did. People can also see the player’s time remaining ticking down, which seemed to make them more patient when we had a long line. And in some ways, showing only a map meant that when you finally got to try the headset it was all the more magical – no “spoilers” for what you would see.
We’ll definitely be using it at all future events and playtests, and there are a few things we’d like to add:
- Phone battery percentage. We’ve found the battery life pretty good overall, but when you’re there all day it eventually goes flat while someone’s playing, and that’s a bit harsh. A battery indicator on the shared screen would let us swap out the battery just before that happened.
- Player recording. We’ve got all the data already – we just need to stream it out to disk on the server, then with a little extra magic we can play it back, watch our players from first person, and really see what they’re seeing when they play (there’s a rough sketch of this after the list).
- Heatmaps. Again, we already have all the data being generated, so heatmaps of where players are going (and where they’re not) would be great to help us make better levels.
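For the player recording, all we really mean is appending timestamped samples to a file on the map-screen PC as they arrive – something along these lines, with an invented file name and format:

```csharp
using System.IO;
using UnityEngine;

// Sketch of the planned player recording: write each incoming position/rotation
// sample to a CSV file so a run can be replayed later through a first-person
// camera. The path and format are made up for illustration.
public class PlayerRecorder : MonoBehaviour
{
    StreamWriter writer;

    void Start()
    {
        string fileName = "playtest-" + System.DateTime.Now.ToString("yyyyMMdd-HHmmss") + ".csv";
        writer = new StreamWriter(Path.Combine(Application.persistentDataPath, fileName));
    }

    // Call this whenever a state update arrives from the Gear VR.
    public void RecordSample(Vector3 position, Quaternion rotation)
    {
        writer.WriteLine(string.Format("{0},{1},{2},{3},{4},{5},{6},{7}",
            Time.time,
            position.x, position.y, position.z,
            rotation.x, rotation.y, rotation.z, rotation.w));
    }

    void OnDestroy()
    {
        if (writer != null) writer.Close();
    }
}
```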
We’ve still not decided if we want to make the map screen something we give away with the game – we need to figure out how much players would actually gain from it in a non-event setting, and how much work it would take to bring it up to quality. But if you’re working on a Gear VR game and are demoing or playtesting it then it’s something you should seriously think about making yourself.
As for gameplay, the screen acts as a centre point for the room, allowing players who aren’t in the headset to join in on the fun. Imagine the multiplayer possibilities!