Sony Explains How to Make Interaction and Gunnery Feel Natural with PlayStation VR on PS4
At the Game Developers Conference in San Francisco, Sony London Studio Principal Programmer Ronald De Feijter hosted a panel focused on how to make interactions feel natural in PlayStation VR, using PlayStation VR Worlds’ The London Heist as an example.
DualShockers attended the panel, and below you can check out a full recap of what was shared, alongside all the slides of the presentation. This should give you a good idea of how the studio is making sure that its virtual reality experiences actually feel as real as possible.
- The latest version of The London Heist consists of three parts: the interrogation, the heist itself and the getaway.
- The demo is played with two PlayStation Move controllers representing the player’s hands.
- The goal was to make the demo as intuitive as possible, so that players “forget” that they are using controllers. Based on reception, De Feijter thinks that they’re not “a million miles off of that target.”
- The initial introduction of PlayStation Move and other motion controls had a problem. While they could track your hand movement, those movements still had to be translated onto a 2D screen, causing inputs to feel more like gestures than actual interactions. With the introduction of virtual reality the playing field changes, and high-detail interactions become possible.
- In order to find out how to implement those interactions, the folks at Sony London Studio looked at how those interactions are in real life. There are many interactions that can be implemented as a result of grasping and moving objects.
- Translating the movement to the screen is easy, but the grasp is harder: it differs slightly depending on what you’re grasping, yet we don’t consciously make that distinction when we grab something. The developers didn’t want to bother the player with that decision either; they just want them to be able to grasp things. A single button is used for that, and the game fills in the detail and the context.
- That leaves the question of which button to use for grasping. With PlayStation Move it’s obvious: the trigger button.
- Objects are divided into three kinds. Dynamic objects, like a coffee cup, can be freely moved around. At the other end of the spectrum are static objects: you can grasp them, but you can’t move them. The most interesting objects are in the middle, the constrained objects (like a door). They don’t have the same freedom of movement as dynamic objects, but they have at least one degree of freedom (i.e. one direction in which they can move or rotate).
- Constrained objects are the most challenging to implement. You can grasp a drawer’s handle, but you can move it on only one axis.
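As a rough illustration of that three-way split, here is a minimal Python sketch. The talk showed no code, so the class and field names are invented: dynamic objects get full freedom, static objects none, and constrained objects expose only the axes they are allowed to move along.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ObjectKind(Enum):
    DYNAMIC = auto()      # freely movable, e.g. a coffee cup
    STATIC = auto()       # graspable but immovable
    CONSTRAINED = auto()  # movable along/around at least one axis, e.g. a drawer

@dataclass
class Interactable:
    kind: ObjectKind
    free_axes: tuple = ()  # for constrained objects: the permitted axes

    def degrees_of_freedom(self) -> int:
        if self.kind is ObjectKind.DYNAMIC:
            return 6  # full translation + rotation
        if self.kind is ObjectKind.STATIC:
            return 0
        return len(self.free_axes)

# A drawer slides along a single axis.
drawer = Interactable(ObjectKind.CONSTRAINED, free_axes=("z",))
```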
- In real life you get visual and tactile feedback when you grasp a drawer’s handle. In virtual reality there’s no tactile feedback. To prevent frustration, there’s an “interaction range” that lets you interact with an object if your hand is within a certain distance of the interaction point. Feedback on whether the hand is within that range is given by changing its pose from idle to a “reach” pose.
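The range check itself reduces to a simple distance test that drives the hand pose. A hedged Python sketch, with invented names and an invented threshold:

```python
import math

def hand_pose(hand_pos, interaction_point, interaction_range):
    """Return the hand pose to display: 'reach' when the hand is close
    enough to the interaction point to grab it, otherwise 'idle'."""
    distance = math.dist(hand_pos, interaction_point)
    return "reach" if distance <= interaction_range else "idle"
```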
- The action could be performed in two obvious ways. The hand could remain connected to the controller, but that causes visual issues that break immersion. Attaching the hand to the drawer doesn’t cause visual issues, but it creates a disconnect, because no matter what you do with your controller besides sliding the drawer back and forth, the hand on screen won’t change. That also breaks immersion.
- The happy medium is that the hand attaches to the drawer, but its orientation while attached is still controlled by the player via the Move. Using the pointing direction of the Move to determine the direction of the hand gives reasonable results, but it isn’t fully satisfying. Instead, the connection point between the hand and the handle is used as a pivot point: the direction in which the hand points (yaw and pitch) is disconnected from the Move and based on the position of the wrist, while the rotation of the hand remains controlled by the Move. This gives the most rewarding and immersive results: there are no graphical issues, and there is still a lot of movement.
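The wrist-based aiming described above can be approximated by pointing the on-screen hand from the wrist toward the connection point. A speculative Python sketch; the talk showed no code, so the function name and the exact math are assumptions:

```python
import math

def attached_hand_direction(pivot, wrist_pos):
    """Aim the attached hand from the wrist toward the connection point
    (the pivot on the handle), instead of using the controller's own
    pointing direction. Returns a unit direction vector."""
    d = [p - w for p, w in zip(pivot, wrist_pos)]
    length = math.sqrt(sum(c * c for c in d)) or 1.0  # avoid division by zero
    return tuple(c / length for c in d)
```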
- The studio also implemented a “disconnect sphere.” Initially the same radius was used both to start interacting with an object and to stop interacting by moving the hand away. However, they found that when a lot of objects sit close together, it’s better to have a smaller interaction range; since they didn’t want to break the interaction too soon, the disconnect range is bigger than the connect range.
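This connect-small/disconnect-large pattern is classic hysteresis. A minimal Python sketch of how it might work, with invented radii and names; releasing the trigger also breaks the connection:

```python
class GrabHysteresis:
    """Connect inside a small radius; only disconnect once the hand moves
    beyond a larger radius, so the grab isn't broken too soon."""

    def __init__(self, connect_radius=0.05, disconnect_radius=0.15):
        self.connect_radius = connect_radius
        self.disconnect_radius = disconnect_radius
        self.connected = False

    def update(self, distance, trigger_held):
        if not trigger_held:
            self.connected = False  # letting go always releases
        elif not self.connected and distance <= self.connect_radius:
            self.connected = True   # close enough to grab
        elif self.connected and distance > self.disconnect_radius:
            self.connected = False  # moved too far away
        return self.connected
```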
- There’s also a direction check that makes sure you interact with an object only when your palm is facing it, and not with the back of your hand.
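A palm-facing test is typically a dot product between the palm normal and the direction from the hand to the object. An illustrative Python sketch; the names and the zero threshold are assumptions:

```python
import math

def palm_facing(palm_normal, hand_pos, object_pos, threshold=0.0):
    """True when the palm normal points toward the object, i.e. the
    dot product with the hand-to-object direction is positive."""
    to_obj = [o - h for o, h in zip(object_pos, hand_pos)]
    length = math.sqrt(sum(c * c for c in to_obj)) or 1.0
    dot = sum(n * c / length for n, c in zip(palm_normal, to_obj))
    return dot > threshold
```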
- The disconnection of the hand from the handle is controlled by releasing the trigger or by moving the hand too far away from the connection point.
- The same principles apply to other elements like cupboard and car doors, but certain parameters change, like the direction in which you can move them, or the way your hand needs to be oriented to interact. For instance, for the car door, the hand needs to be facing down.
- The visor doesn’t have just an interaction point, but an interaction line, letting you move your hand along its edge.
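Supporting an interaction line rather than a point amounts to projecting the hand onto a segment and clamping to its ends. A hypothetical Python sketch:

```python
def slide_along_edge(hand_pos, line_start, line_end):
    """Project the hand onto the interaction line (e.g. the visor's edge),
    clamped to the segment, so the grab point slides with the hand."""
    seg = [e - s for e, s in zip(line_end, line_start)]
    seg_len2 = sum(c * c for c in seg) or 1.0
    t = sum((h - s) * c for h, s, c in zip(hand_pos, line_start, seg)) / seg_len2
    t = max(0.0, min(1.0, t))  # stay within the segment
    return tuple(s + t * c for s, c in zip(line_start, seg))
```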
- Loading the gun is interesting, because the system has to take into account the direction and position of both hands.
- The gameplay intends to make you feel like a Hollywood action hero, so it’s not fully realistic. It aims to let you replicate what you’ve seen and learned from action movies.
- You don’t need to aim down the sights. You can go gangster style. It’s harder, but you can do anything you want.
- Attempting to load the gun triggers a slot check. If the clip slot is empty, you can proceed. Next there’s a position check, making sure that your hand is positioned correctly under the gun. An alignment check follows, and then a speed check, which makes sure that you load the gun only when your second hand is moving towards the gun, and not away from it.
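The four checks chain naturally into a single gate function. An illustrative Python sketch; all thresholds and names are invented, and both axes are assumed to be unit vectors:

```python
import math

def can_start_loading(slot_empty, clip_pos, slot_pos, clip_axis, slot_axis,
                      clip_velocity, max_dist=0.1, min_alignment=0.9):
    # 1. Slot check: nothing already loaded.
    if not slot_empty:
        return False
    # 2. Position check: clip hand close enough under the gun.
    offset = [s - c for s, c in zip(slot_pos, clip_pos)]
    if math.sqrt(sum(c * c for c in offset)) > max_dist:
        return False
    # 3. Alignment check: clip axis roughly parallel to the slot axis.
    if sum(a * b for a, b in zip(clip_axis, slot_axis)) < min_alignment:
        return False
    # 4. Speed check: clip moving toward the gun, not away from it.
    return sum(v * o for v, o in zip(clip_velocity, offset)) > 0
```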
- If all the checks pass, loading is initiated, consisting of two parts: an alignment phase in which the clip interpolates to the bottom of the gun, followed by sliding it into the slot.
- The speed at which loading happens is not fixed; it depends on the relative position of the gun and the clip, to make it feel more natural. Loading works correctly whether you do it slowly or slam it in.
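One simple way to let the player’s hand drive the animation is to map the clip’s depth along the slot axis to an insertion progress value, so the clip keeps pace with the hand whether it moves slowly or fast. A speculative Python sketch with invented names and parameters (the slot axis is assumed to be a unit vector):

```python
def loading_progress(clip_pos, slot_entry, slot_axis, slot_length):
    """Map the clip's position along the slot axis to a 0..1 insertion
    progress, so the animation keeps pace with the player's hand."""
    offset = [c - e for c, e in zip(clip_pos, slot_entry)]
    depth = sum(o * a for o, a in zip(offset, slot_axis))
    return max(0.0, min(1.0, depth / slot_length))
```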
- The interaction system works equally well with either hand, so people can pick up the gun with whichever hand feels natural to them and start shooting.
- Sometimes making gameplay intuitive means not using hand interactions at all, but, for instance, just using the direction of the head.
- The system needs to be kept simple to make it accessible and intuitive to the end user.
- Complexity can be added with the context, creating a whole range of interactions.
- It’s important to make the game “feel” real more than it actually being realistic. Players shouldn’t be required to have the skills of an action hero, but they should feel like one. Developers should focus on that.
- It’s also important to respect the rules set for your world. For instance in a realistic world, things that don’t make sense within that context should not happen.
If you’re a developer who didn’t have a chance to attend GDC, or this specific presentation, and you’re interested in the full audio recording, we’re happy to share. Just contact email@example.com, and we’ll send it your way.
[On location reporting: Steven Santana]