PS4 SDK 2.0 Revealed: Includes Lots of Interesting New Tech and Features Game Developers Can Use

on November 23, 2014 12:32 PM

During a presentation at CEDEC in Japan, Sony Computer Entertainment revealed the features of the PS4 SDK 2.0, released to support the features of the console’s 2.0 firmware and to provide exciting new tech that game developers can use.

Below you can read a summary of the presentation (as reported by the Japanese website PC Watch), and our translation of all the slides for your perusal.

Support for Share Play, themes, ShareFactory and YouTube upload was introduced.

Image recognition and tracking features have been added with libDepth and the Vision Library. libDepth makes use of the two lenses of the PlayStation Camera to precisely track the distance of an object from the camera itself. Processing is done entirely in software, since the camera doesn’t include a dedicated depth-processing chip like Kinect does.

Since running that processing on the CPU would slow down the game, libDepth performs it on the GPU instead. This compresses the processing time to just 1.5 milliseconds, which is roughly 1/10 of a frame at 60 FPS.

The generated depth data has a resolution of 160 x 100 dots, produced within that 1.5 millisecond budget. Since the process doesn’t rely on dedicated hardware, it’s scalable: the cost can be reduced further by lowering the resolution. The weakness of the system is that it relies solely on camera data, so it suffers in the dark. Luckily, the algorithm has been improved to provide sufficient data with just 10 lux of light, which is pretty dark.
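
The presentation doesn’t go into libDepth’s internals, but two-lens depth sensing generally works by stereo triangulation: the closer an object is, the larger the horizontal shift (disparity) of its image between the two lenses. Here’s a minimal sketch of that relationship in C++, with made-up parameters; the real PlayStation Camera calibration values and the libDepth API are not public.

```cpp
#include <cstdio>

// Hypothetical stereo parameters -- NOT the real PlayStation Camera specs.
constexpr float kFocalLengthPx = 280.0f; // focal length expressed in pixels
constexpr float kBaselineMm    = 80.0f;  // distance between the two lenses

// Classic stereo triangulation: depth is inversely proportional to disparity.
float DepthFromDisparity(float disparityPx) {
    if (disparityPx <= 0.0f) return -1.0f;              // no match, depth unknown
    return kFocalLengthPx * kBaselineMm / disparityPx;  // depth in millimetres
}

int main() {
    // A feature shifted by 14 pixels between the left and right images would
    // sit roughly 1.6 meters from the camera with these example parameters.
    std::printf("depth: %.0f mm\n", DepthFromDisparity(14.0f));
}
```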

It’s possible to use libDepth for a camera-based user interface by tracking the hand. This is done by detecting the “peak” of the tracking data: put simply, it tracks the part of the body closest to the camera. This can be done at a very high frame rate, allowing for very responsive applications.
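
As a rough illustration of that “peak” detection, here’s a sketch that assumes nothing more than a flat array of depth samples like the 160 x 100 grid mentioned above (the actual libDepth data structures aren’t public): finding the hand amounts to locating the nearest valid sample.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Dimensions of the depth grid described in the presentation.
constexpr int kWidth  = 160;
constexpr int kHeight = 100;

struct Peak { int x, y; uint16_t depthMm; };

// Scan the depth map for the closest valid sample -- presumably the hand when
// the player reaches toward the camera. A value of 0 means "no depth data".
Peak FindPeak(const std::vector<uint16_t>& depthMm) {
    Peak best{-1, -1, UINT16_MAX};
    for (int y = 0; y < kHeight; ++y) {
        for (int x = 0; x < kWidth; ++x) {
            uint16_t d = depthMm[y * kWidth + x];
            if (d != 0 && d < best.depthMm) best = {x, y, d};
        }
    }
    return best;
}

int main() {
    std::vector<uint16_t> frame(kWidth * kHeight, 0);
    frame[50 * kWidth + 80] = 600; // pretend the hand is 60 cm from the camera
    Peak p = FindPeak(frame);
    std::printf("peak at (%d, %d), %d mm\n", p.x, p.y, p.depthMm);
}
```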

The camera can also recognize and track a user’s head, not through facial recognition but simply based on the shape and size of the head. It can track the head reliably even when it moves around violently.

libDepth isn’t the only library provided to developers. libFace is a face recognition library that can recognize and track different parts of a human face, like the eyes, nose and lips. It can also estimate expression, age and gender.

libHand recognizes and tracks a human hand, comparing it with a reference library so that the console can actually distinguish between different hands. It can be used together with libDepth.

libSmart detects and tracks a planar surface. It can be used for augmented reality, for instance to detect the location of the floor in the real world so that the console can render images on top of it.

The AR Dynamic Lighting (which we saw a few months ago) has also been implemented. It introduces the ability to add realistic shading to augmented reality images.

It detects the intensity, position and estimated color of light sources in the real world, allowing the console to create a natural-looking AR image instead of one that sticks out like a sore thumb against the background because it isn’t correctly lit and shaded.

This is normally very difficult, so a simplified algorithm has been implemented: the light sources are calculated by dividing the floor into a 3x3 grid and comparing the weighted averages of light intensity and color over each portion. To facilitate the calculation, the player uses a flashlight.
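
Just to make the idea concrete, here’s a rough sketch of such a grid average in C++. Everything below is an assumption for illustration (the presentation doesn’t document the actual algorithm or any API for it), and it uses plain rather than weighted averages for brevity: split the observed floor image into 3 x 3 cells and average the brightness and color of the pixels inside each one.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical camera pixel; the real capture format is an assumption here.
struct Pixel { float r, g, b; };

struct CellLight { float intensity, r, g, b; };

// Average brightness and color of the floor image over a 3x3 grid of cells.
// The brightest cell hints at where the dominant light source sits.
void EstimateGridLighting(const std::vector<Pixel>& floorImage, int width,
                          int height, CellLight cells[3][3]) {
    for (int cy = 0; cy < 3; ++cy) {
        for (int cx = 0; cx < 3; ++cx) {
            float sumI = 0, sumR = 0, sumG = 0, sumB = 0;
            int count = 0;
            for (int y = cy * height / 3; y < (cy + 1) * height / 3; ++y) {
                for (int x = cx * width / 3; x < (cx + 1) * width / 3; ++x) {
                    const Pixel& p = floorImage[y * width + x];
                    sumI += 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b;
                    sumR += p.r; sumG += p.g; sumB += p.b;
                    ++count;
                }
            }
            cells[cy][cx] = {sumI / count, sumR / count, sumG / count, sumB / count};
        }
    }
}

int main() {
    std::vector<Pixel> img(90 * 90, {0.2f, 0.2f, 0.2f});
    for (int y = 0; y < 30; ++y)               // brighten the top-right area,
        for (int x = 60; x < 90; ++x)          // as if a lamp sat on that side
            img[y * 90 + x] = {1.0f, 0.9f, 0.8f};
    CellLight cells[3][3];
    EstimateGridLighting(img, 90, 90, cells);
    std::printf("top-right cell intensity: %.2f\n", cells[0][2].intensity);
}
```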

The Heterogeneous Dual Camera approach has also been introduced: it’s now possible to change the settings of the camera’s two lenses separately. This allows for special kinds of capture, like photographing fast-moving objects by merging the data from one lens set to a higher shutter speed with normal photographic data from the other lens.

User recognition is also facilitated: one lens can recognize the user, while the other detects the light bar of the controller.

Finally, it’s possible to scale the resolution of the image acquired by the camera in order to lower the processing cost.
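
The presentation only lists these capabilities, so the following is a purely illustrative sketch of what "heterogeneous" per-lens settings could look like in code; every name here is hypothetical and not part of the actual camera API.

```cpp
#include <cstdio>

// Hypothetical per-lens settings -- not the real PlayStation Camera API.
struct LensConfig {
    int   width, height;  // resolution, independently scalable per lens
    float exposureMs;     // shutter time per frame
    float gain;           // sensor gain
};

struct DualCameraConfig {
    LensConfig left;      // e.g. short exposure to freeze fast motion
    LensConfig right;     // e.g. normal exposure for a clean reference image
};

int main() {
    DualCameraConfig cfg{
        {1280, 800, 2.0f, 8.0f},   // left lens: fast shutter, boosted gain
        {640, 400, 16.0f, 1.0f},   // right lens: normal shutter, lower resolution
    };
    std::printf("left %dx%d @ %.1f ms, right %dx%d @ %.1f ms\n",
                cfg.left.width, cfg.left.height, cfg.left.exposureMs,
                cfg.right.width, cfg.right.height, cfg.right.exposureMs);
}
```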

The optimized physics simulation libraries included in the SDK have also been detailed. A combination of GPU and CPU physics simulation is provided: the CPU is used for objects that require fine control because they’re tied to the game’s logic, while the GPU approach allows for large-scale simulation of objects not directly involved with the game’s logic. Both approaches can be used at the same time.
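
The slides don’t show the physics API itself, so here’s only a conceptual sketch of that CPU/GPU split, with made-up types: gameplay-critical bodies are routed to a precise CPU solver, while large batches of purely decorative debris are handed off to the GPU.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical rigid body description -- the real SDK types are not public.
struct RigidBody {
    int  id;
    bool affectsGameplay; // e.g. the player, projectiles, physics puzzles
};

// Route each body to the solver suggested by the presentation:
// CPU for finely controlled gameplay objects, GPU for large-scale extras.
void PartitionBodies(const std::vector<RigidBody>& all,
                     std::vector<RigidBody>& cpuBatch,
                     std::vector<RigidBody>& gpuBatch) {
    for (const RigidBody& b : all)
        (b.affectsGameplay ? cpuBatch : gpuBatch).push_back(b);
}

int main() {
    std::vector<RigidBody> bodies;
    for (int i = 0; i < 11000; ++i)
        bodies.push_back({i, i < 1000}); // 1,000 gameplay bodies, 10,000 extras
    std::vector<RigidBody> cpuBatch, gpuBatch;
    PartitionBodies(bodies, cpuBatch, gpuBatch);
    std::printf("CPU: %zu bodies, GPU: %zu bodies\n", cpuBatch.size(), gpuBatch.size());
}
```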

The GPU physics simulation allows for large-scale simulation of rigid bodies, which is normally very costly in terms of processing power. By using compute shaders, it can be made ten times faster than the CPU-based approach.

The demo showcased 1,000 rigid bodies simulated on the CPU, while the GPU achieved higher performance with 10,000. The demo was also repeated with 4,000 yellow ducks.

The physics simulation also supports vast spaces, allowing its use in open world games, as well as continuous collision detection, which catches collisions occurring between simulation steps so that fast-moving objects don’t pass through obstacles.
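
As a generic illustration of what continuous collision detection means (this is textbook swept-sphere math, not code from the SDK), the solver computes the exact time within a step at which a moving object would make contact, instead of only checking positions at discrete steps:

```cpp
#include <cstdio>

// Swept sphere vs. a horizontal floor at y = 0: find the time of impact
// within one simulation step, if any. Textbook CCD, not SDK code.
bool SphereFloorTimeOfImpact(float centerY, float velocityY, float radius,
                             float dt, float* tImpact) {
    if (velocityY >= 0.0f) return false;       // moving away from the floor
    float t = (radius - centerY) / velocityY;  // when does the center reach y = radius?
    if (t < 0.0f || t > dt) return false;      // contact not within this step
    *tImpact = t;
    return true;
}

int main() {
    // A sphere one meter above the floor falling at 120 m/s would jump straight
    // past the floor in a discrete 1/60 s step; CCD still finds the impact time.
    float t;
    if (SphereFloorTimeOfImpact(1.0f, -120.0f, 0.05f, 1.0f / 60.0f, &t))
        std::printf("impact at t = %.4f s into the step\n", t);
}
```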

The PS4’s GPU is able to perform this kind of task because it can execute general-purpose computations. That wasn’t possible on the PS3’s GPU, which meant the Cell’s SPUs had to be used for the purpose instead. It’s further helped by the PS4’s unified memory, which is all very fast due to the use of GDDR5. This is one of the strong points of the PS4 according to the presenter (Sony Computer Entertainment SVP Technology Platform Yutaka Teiji).

Since the CPU and GPU share the same virtual memory space, it’s also easy to pass data between the two. This allows GPU and CPU physics simulations to work together without issue.
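
A conceptual sketch of why that matters, in plain C++ (a regular function stands in for a GPU compute dispatch here, since the actual SDK interfaces aren’t public): on a unified-memory machine, the CPU-side logic and the GPU solver can hand each other the same buffer instead of staging a copy across a bus.

```cpp
#include <cstdio>
#include <vector>

struct BodyState { float x, y, z, vx, vy, vz; };

// Stand-in for a GPU compute dispatch that reads and writes the shared buffer
// in place; no copy into separate video memory is needed.
void GpuSolverStep(BodyState* shared, size_t count, float dt) {
    for (size_t i = 0; i < count; ++i) {
        shared[i].x += shared[i].vx * dt;
        shared[i].y += shared[i].vy * dt;
        shared[i].z += shared[i].vz * dt;
    }
}

int main() {
    std::vector<BodyState> bodies(10000, {0.f, 10.f, 0.f, 0.f, -1.f, 0.f});
    bodies[0].vy = -120.0f;                       // CPU-side logic writes...
    GpuSolverStep(bodies.data(), bodies.size(),   // ...and the "GPU" step reads
                  1.0f / 60.0f);                  // the very same pointer.
    std::printf("body 0 y after one step: %.3f\n", bodies[0].y);
}
```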

Finally, the PS4’s speech recognition library has been made available to developers. It’s the same software used by the new and much improved voice recognition system that came with firmware 2.0.

The engine recognizes and defines words in a format called “Grammar,” and returns them as input. The SDK also includes tools to create new definitions, and supports seven different languages.
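
The slides don’t detail the “Grammar” format itself, so the snippet below is only a generic, hypothetical sketch of how grammar-driven recognition is typically consumed by game code: the game registers a small set of command phrases, and matches come back as input events.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical grammar: a fixed list of command phrases the recognizer accepts.
// This mirrors how grammar-based recognizers generally work; the real SDK
// format and API are not described in the presentation.
struct GrammarRule { int commandId; std::string phrase; };

// Stand-in for the recognizer callback: match a recognized phrase against the
// grammar and return the corresponding command id, or -1 if nothing matched.
int MatchCommand(const std::vector<GrammarRule>& grammar,
                 const std::string& recognizedPhrase) {
    for (const GrammarRule& rule : grammar)
        if (rule.phrase == recognizedPhrase) return rule.commandId;
    return -1;
}

int main() {
    std::vector<GrammarRule> grammar = {
        {0, "open map"}, {1, "reload"}, {2, "take screenshot"},
    };
    std::printf("recognized command id: %d\n", MatchCommand(grammar, "reload"));
}
```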

That’s quite a lot of interesting tech introduced all at once. It’ll definitely be interesting to see how developers will make use of all those shiny toys in the future.

The ability for developers to use the new voice recognition software is especially interesting, considering that it improved the previous implementation considerably. It’d be nice to be able to reliably control gameplay with voice controls on Sony’s console.

[Translation by: Griffin Tatum]

Hailing from sunny (not as much as people think) Italy and long standing gamer since the age of Mattel Intellivision and Sinclair ZX Spectrum. Definitely a multi-platform gamer, he still holds the old dear PC nearest to his heart, while not disregarding any console on the market. RPGs (of any nationality) and MMORPGs are his daily bread, but he enjoys almost every other genre, prominently racing simulators, action and sandbox games. He is also one of the few surviving fans of the flight simulator genre on Earth.