Hyper Psychic Gauntlets

Available on Steam


 
Hyper Psychic Gauntlets is an unorthodox VR bullet hell where you have nothing but two telekinetic gauntlets to defend yourself against the guardians of the temple. Dodge lasers, deflect bullets and throw enemies around as you prove yourself a worthy wielder of the gauntlets. Between the split-second decisions of whether to dodge or deflect and the constant movement needed to weave around lasers and bullets, you’ll be drenched in sweat by the time you take off your headset.
 
The game was developed over a year and a half at SPIN VR and is available to play on Steam for all VR headsets. I led the development of the game as the team’s programmer; art was done by Ryan Gao and Trym Roedder, and Connor Pannico contributed as a game designer during his internship with the studio.
 
Coverage for the game can be found on Power Spike Games.
 

C++

The main programming language I used to code the game.

Blueprints

Used alongside C++ when developing the game.

Unreal Engine

The engine used.

Oculus Rift

Used the Rift CV1 and Rift S to integrate support for them.

Oculus Touch

Used for motion controller support with the Rift.

HTC Vive

Used the Vive to integrate OpenVR support.

Valve Index Controllers

Used with the Vive to add support for the Index.

Acer Mixed Reality

Used the Acer headset to integrate Windows Mixed Reality support.

DirectX

The graphics API I worked with when modifying plugins.

SVN

Used SVN for version control along with Unreal’s plugin.

Visual Studio

The IDE used with Unreal Engine.

UE4 Materials

Worked with Unreal’s materials to create dynamic materials.

Jira

Used to manage all tasks.

Blender

Used to create game models, mostly for particle effects, and to skin and animate some of the boss animations.

Photoshop

Used to create and modify textures.

Illustrator

Used for sketching pseudocode and problem solving.

Audacity

Used to modify and mash together sound effects.

Vegas Pro

Used to edit videos used in-game.

Google Sheets

Used to manage pre-release Steam keys.

 
 
 
 

Inclusive Game Design


 
One of the largest variables when designing enemy attacks in VR is the player’s height. The easiest way to get past this is to target the player’s head directly when, say, shooting, or to add a bit of randomization so there’s more variation, making players move around more to dodge. This works well, but even more variation was needed when designing the boss attacks.
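For a shooting enemy, that can be as simple as aiming at the HMD camera’s location and jittering the target point. A minimal sketch, assuming hypothetical names (AEnemyTurret, AimJitterRadius); only the head-targeting idea comes from the write-up:

#include "Kismet/GameplayStatics.h"

FVector AEnemyTurret::PickAimPoint() const
{
    // The HMD camera location doubles as the player's head position.
    const FVector Head =
        UGameplayStatics::GetPlayerCameraManager(this, 0)->GetCameraLocation();

    // Random offset inside a sphere so shots don't all converge on one
    // point, nudging the player to keep moving.
    return Head + FMath::VRand() * FMath::FRandRange(0.f, AimJitterRadius);
}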
 
Level 1’s boss had four attack patterns that gradually unlocked as its health declined. For the last attack I wanted a gigantic laser that fired continuously from the boss for a few seconds; it starts a few meters above the player and gradually lowers, forcing the player to duck. For this attack to be fair, the player’s height needed to be recorded somehow: if the player was too tall they’d die easily, and if they were short they wouldn’t have to duck at all. The easiest solution would be to ask players to enter their height during the initial setup, or to press a button while standing straight so it could be recorded. The problem with these solutions is that they add friction to the setup process and deter players from continuing to play, and they open up the possibility of players entering fake values to cheat.

Instead, I used a low-cost background algorithm that started when the boss first spawned and gradually estimated the player’s ability to dodge. If the player ducked for more than a certain amount of time it updated a LowestHeightPossible variable, and if the player stood straight for a certain amount of time it updated a HighestHeightPossible variable. The first three attack patterns were designed specifically to get players moving and dodging in all directions, so by the time the last attack activated there was more than enough data collected to attack the player accurately.
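A minimal sketch of that estimator; apart from LowestHeightPossible and HighestHeightPossible, which come from the description above, all names here are assumptions:

void ABossEncounter::UpdateHeightEstimate(float DeltaSeconds)
{
    // HMD height above the tracking floor, in Unreal units (cm).
    const float HeadZ = PlayerCamera->GetRelativeLocation().Z;

    // Only accept readings the player has held steady for a while, so a
    // momentary dip mid-dodge isn't mistaken for a deliberate duck.
    const bool bSteady = FMath::Abs(HeadZ - LastSampleZ) < SampleTolerance;
    HoldTimer = bSteady ? HoldTimer + DeltaSeconds : 0.f;
    LastSampleZ = HeadZ;

    if (HoldTimer >= RequiredHoldTime)
    {
        LowestHeightPossible  = FMath::Min(LowestHeightPossible,  HeadZ);
        HighestHeightPossible = FMath::Max(HighestHeightPossible, HeadZ);
    }
}

The laser’s final height can then sit between the two values: low enough that a player standing at HighestHeightPossible has to duck, but still above LowestHeightPossible so the dodge is always physically possible.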

 

Optimizing Particles

 
Various levels of optimization were applied to HPG to make sure it ran at 90 fps at all times; one of the notable ones involved the particle effects. To start, I used GPU particles to create flashier effects while minimizing the load on the CPU, and I avoided transparencies in favour of more spark-like effects to reduce overdraw. This worked well, but when players started killing multiple enemies per second the effects not only slowed the game down but made it hard to tell where enemies were.
 
Instead of compromising by using fewer particles, I created a class that constantly checked how many enemies were on screen and then spawned either a high-quality or low-quality effect. This required making two separate particle systems per effect, but it was worth it: the game looked great with just one enemy on screen yet stayed easily playable when 15 enemies spawned at once.
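A minimal sketch of that check, with AEnemyBase, HighQualityEnemyLimit and the two effect fields standing in as assumed names:

#include "EngineUtils.h"
#include "Kismet/GameplayStatics.h"

void AEffectDirector::SpawnDeathEffect(const FVector& Location)
{
    // Count the enemies currently alive in the world.
    int32 AliveEnemies = 0;
    for (TActorIterator<AEnemyBase> It(GetWorld()); It; ++It)
    {
        ++AliveEnemies;
    }

    // Past the threshold, fall back to the cheap spark-only system so the
    // frame rate holds and the screen stays readable.
    UParticleSystem* Effect = AliveEnemies > HighQualityEnemyLimit
                                  ? LowQualityDeathFX
                                  : HighQualityDeathFX;

    UGameplayStatics::SpawnEmitterAtLocation(GetWorld(), Effect, Location);
}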
 

Optimizing Detailed Maps

One of the biggest optimizations I applied was to the environment itself. One day Trym Roedder, one of the 3D modelers, was asked to make a demo of the first level for the company’s web platform. To get it to work he made a 360 image of the map and placed a single enemy in the center, and it ended up running incredibly well while keeping the feel of the game.
 
It lacked the parallax effect you get in VR, but I realized I could do the same thing in the actual game while maintaining parallax. The player is always within one meter of their start position, and the environment was so large that the majority of the background didn’t noticeably change as you moved. Because of this, I found I could preserve the parallax effect by keeping only the close foreground as models and converting the rest into a 360 map. This ended up working incredibly well and cut over 90% of the poly count while looking not just the same but slightly better, since we could apply subtle lighting tweaks to the 360 map in Photoshop. In terms of workflow, the 3D modelers worked in a sublevel containing all the 3D models, while the 360 lived in another sublevel and was re-rendered whenever a large change was made.
 

The Splitter Enemy


The first level focused on high-speed dodging and attacking, but by the time players finished it they would have learned enough to kill enemies quickly without ever feeling the need to dodge. To solve this I introduced the ‘Splitter’ enemy in level two. The first two enemy types could simply be grabbed by pointing your ray at them, but if you tried to grab the Splitter it would split into multiple tiny cubes that shoot lasers at you. The only safe ways to kill it were to shoot it with another enemy or to leave it alone for a few seconds until it ran out of energy and died.
 
This added an extra layer of difficulty, forcing players to think before they tried to grab something. To take the load off the artists, I made all of the movement code-driven: by default there’s one model, but when you grab it the smaller cubes are spawned in its place. From there they spread out from the center point, rotate upwards, and then turn slowly towards the player while firing a laser, as sketched below.
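A rough sketch of those three phases; the phase durations, speeds and FireLaser are assumptions, not the shipped implementation:

void ASplitterCube::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);
    Elapsed += DeltaSeconds;

    if (Elapsed < SpreadDuration)
    {
        // Phase 1: push outward from the parent cube's centre.
        AddActorWorldOffset(SpreadDirection * SpreadSpeed * DeltaSeconds);
    }
    else if (Elapsed < SpreadDuration + RiseDuration)
    {
        // Phase 2: pitch upwards into firing position.
        AddActorWorldRotation(FRotator(PitchUpSpeed * DeltaSeconds, 0.f, 0.f));
    }
    else
    {
        // Phase 3: turn slowly towards the player's head while firing.
        const FVector ToPlayer = PlayerHeadLocation() - GetActorLocation();
        SetActorRotation(FMath::RInterpTo(
            GetActorRotation(), ToPlayer.Rotation(), DeltaSeconds, TurnSpeed));
        FireLaser();
    }
}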
 
 
 

Debugging Plugins

When implementing a spectator plugin that let people watch players in third person using custom avatars, I found that although the Unity version worked fine, the Unreal version had an issue where colors darkened whenever post-processing was used. Based on past tickets, the problem seemed to be something the developers couldn’t fix, so instead of waiting and hoping for things to work out, I ran tests to see what I could do myself.
 
I’m pretty comfortable reading and debugging other people’s code, so I jumped right in. The first thing I noticed was that the colors returned to normal when the spectator camera was far enough away, but at the same time the avatar would disappear. After some more tests I realized the plugin worked by drawing a background layer from the game screen, then the avatar on top, then the foreground layer, meaning the blending of the layers was breaking rather than the post-processing itself. Once I pinpointed this I read deeper into the code to see where the layers were being combined and found the code that seemed responsible:
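The original snippet was shown as an image, so the version below is an approximate reconstruction: IsUsingNewTechnique, PostprocessedForegroundTexture and K2_DrawTexture are from the plugin, while ScreenSize and the surrounding structure are assumed.

if (IsUsingNewTechnique)
{
    // Draws the composited frame. With no blend mode passed,
    // K2_DrawTexture falls back to its BLEND_Translucent default.
    Canvas->K2_DrawTexture(PostprocessedForegroundTexture,
                           FVector2D::ZeroVector, ScreenSize,
                           FVector2D::ZeroVector);
}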
 

 
Since I was using the newer technique, IsUsingNewTechnique was true, and despite its name, PostprocessedForegroundTexture was the only texture being drawn each frame. I looked into the code leading up to this call and debugged the materials to see if anything went wrong when creating PostprocessedForegroundTexture, but that all seemed to work fine.
 
After looking into UE4’s source code I found two extra arguments that could be passed to K2_DrawTexture, one of which set the blend mode, which defaulted to BLEND_Translucent. This seemed like a likely cause, so I started testing other blend modes, and everything was fixed the moment I set it to BLEND_Opaque. This saved us from abandoning the plugin altogether, and we even managed to be one of the first UE4 games to use it with post-processing. The code in the end looked like this:
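Again an approximate reconstruction rather than the verbatim source:

if (IsUsingNewTechnique)
{
    // Passing BLEND_Opaque explicitly stops the already-composited
    // texture from being blended a second time, which fixed the darkening.
    Canvas->K2_DrawTexture(PostprocessedForegroundTexture,
                           FVector2D::ZeroVector, ScreenSize,
                           FVector2D::ZeroVector, FVector2D::UnitVector,
                           FLinearColor::White, BLEND_Opaque);
}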
 

 
 
