Leap Motion + Unity3D: Playing with virtual blocks using my real hands!

Felcjo Ringo
5 min read · May 2, 2019

There have been some cool demos put out by Leap Motion recently, in which they play with blocks via gestures. The hand sensing tech in their videos looks quite polished, and so I thought it would be cool to replicate something similar.

Leap Motion’s own Blocks demo

Leap Motion offers an SDK for Unity3D, and it is thankfully quite easy to set up. Simply plug in the Leap Motion controller, download the Leap Motion SDK, and import the package into Unity. In the SDK you'll find that the hand models have their own prefabs; simply drag them from the Project window into the Scene. From there, start the Unity game and wave your hands around and above the controller. You should see a pretty clean 1:1 response in the Game window. The latency is low because the controller tracks at an amazing 115 fps. This, plus the latency introduced by the USB and Unity layers, still comes in under 100 ms, the threshold below which we humans perceive a response as instantaneous (Miller 68).

How should hands interact with the virtual world?

Now that we have our virtual hands floating about, how do we want them to interact with the world? In traditional games, you use an external controller to control and manipulate entities in the virtual world. Now, however, we are part of that virtual world and need to interact with it directly. One idea is to put buttons in the world and press them the way we would press buttons on a traditional controller. Another is to use predefined gestures such as pinching, pointing, and swiping, which act as discrete inputs the way buttons do. And of course, we can literally interact with other objects in the virtual world, working with spatial information such as distance to objects, collisions, and triggers. I believe the future of hands-only gaming will use all of these interaction modalities, and in this post I will show how each of them can be used.

Gesture Control: Moving and Scaling Cubes via Pinching

Leap Motion recently came out with a Unity module for gestures, called the Interaction Engine. This is part of a larger push to make it easier for developers to use gestures in their games. In making the Interaction Engine, they removed the Gesture class from the regular Leap Motion SDK, forcing developers to use the new module. I actually found the Interaction Engine much more difficult to use, and even more difficult to set up. For this reason, I chose to write my own basic gesture recognition.

An easy way to tell if a hand is performing a pinching gesture is to measure the distance between two fingers. If that distance is small enough, it means that the fingers are pinching. Luckily for us, the Leap Motion SDK gives us in-engine hand models that exist in the engine’s 3D coordinate space. This means that we have a 3D coordinate for each joint and limb in our hands! With this information, we can get the distance between two fingers with a single line of code:

var leftDiff = (leftIndex.transform.position - leftThumb.transform.position).magnitude;
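
To make this reusable, we can wrap the check in a small helper component attached to each hand. The following is a minimal sketch; the fingertip Transform references and the pinchThreshold value are my own placeholders and will need tuning for your hand models:

using UnityEngine;

public class PinchDetector : MonoBehaviour
{
    // Fingertip transforms from the Leap hand model, assigned in the Inspector.
    public Transform indexTip;
    public Transform thumbTip;

    // Distance below which we consider the fingers to be pinching.
    // This value is a guess; tune it to your hand model's scale.
    public float pinchThreshold = 0.03f;

    public bool IsPinching()
    {
        return (indexTip.position - thumbTip.position).magnitude < pinchThreshold;
    }

    // Midpoint between the fingertips, used as the "pinch point".
    public Vector3 PinchPoint()
    {
        return (indexTip.position + thumbTip.position) * 0.5f;
    }
}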

Now that we have defined a gesture, let's see what we can do with it. I think a cool way to interact with an object such as a cube would be to move it around with your hands, as well as scale it by “pulling” or “compressing” it. To enter this interactive resizing mode, let's say we have to pinch with both the left and right hands. Once we have done that, the cube will stay at the midpoint of the two pinch points. For resizing, we need to know the vector distance between the two pinch points. From there, it is wise to incorporate a scaling factor of less than 1 so that the cube does not take up the space our virtual hands are in. Basic code can be found below.
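
Here is a minimal sketch of what such a script might look like, assuming one PinchDetector per hand as above; the field names and the 0.8 scaling factor are my own placeholders:

using UnityEngine;

public class CubeResizer : MonoBehaviour
{
    public PinchDetector leftHand;
    public PinchDetector rightHand;

    // Keep the cube a bit smaller than the gap between the hands
    // so it doesn't occupy the space our virtual hands are in.
    public float scaleFactor = 0.8f;

    void Update()
    {
        // Only move/scale while both hands are pinching.
        if (leftHand.IsPinching() && rightHand.IsPinching())
        {
            Vector3 left = leftHand.PinchPoint();
            Vector3 right = rightHand.PinchPoint();

            // Keep the cube at the midpoint of the two pinch points...
            transform.position = (left + right) * 0.5f;

            // ...and scale it with the distance between them.
            transform.localScale = Vector3.one * (right - left).magnitude * scaleFactor;
        }
    }
}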

Hooray! We are able to transform and scale an object using our hands! But what if we want to interact with more than a single object?

Interacting with Multiple Objects

Interacting with multiple objects requires a deeper insight into Unity scripting. With a single object in the scene, life is easy: you assign it in the Inspector once and move on with your day. The problem comes when you want to be able to interact with multiple objects.

First, let’s explore how to create clones of an object. The Instantiate() and Destroy() functions take care of this for us. Simply call Instantiate(prefab) to create a clone of that object in the scene. To remove a clone from the scene, call Destroy() on it, which means you had better keep a reference to the clone in your script!
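
For example, a sketch of a spawner (the prefab reference is assigned in the Inspector; the names are illustrative):

using UnityEngine;

public class CubeSpawner : MonoBehaviour
{
    public GameObject cubePrefab;   // assigned in the Inspector

    public GameObject SpawnCube(Vector3 position)
    {
        // Keep the returned reference; we need it to destroy this clone later.
        return Instantiate(cubePrefab, position, Quaternion.identity);
    }

    public void RemoveCube(GameObject clone)
    {
        Destroy(clone);
    }
}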

Let’s add a blue “button” to our scene. Upon touching the button, it will instantiate a cube, up to a maximum of 3. Let’s also add a red “button” that, when touched, destroys all active cubes in the scene.
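
A sketch of both buttons, assuming each button has a trigger collider and the hand models carry colliders and a Rigidbody so that OnTriggerEnter fires; the cube limit of 3 comes from above, everything else is a placeholder (in a real project each class goes in its own file):

using System.Collections.Generic;
using UnityEngine;

public class SpawnButton : MonoBehaviour
{
    public GameObject cubePrefab;
    public int maxCubes = 3;

    // Shared list so the red button can clear what the blue button spawned.
    public static List<GameObject> activeCubes = new List<GameObject>();

    void OnTriggerEnter(Collider other)
    {
        if (activeCubes.Count < maxCubes)
        {
            // Spawn the new cube a little above the button.
            activeCubes.Add(Instantiate(cubePrefab,
                transform.position + Vector3.up * 0.2f, Quaternion.identity));
        }
    }
}

public class ClearButton : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        foreach (var cube in SpawnButton.activeCubes)
            Destroy(cube);
        SpawnButton.activeCubes.Clear();
    }
}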

When we run the game and try to interact with a cube, we see that we are interacting with all three! This is because we never specified which cube we want to work with. To fix this, we could call the cubes by their names when we want to interact with them, or more simply, only interact with the cube we touched most recently. Before we do this, notice that all instantiated/cloned cubes have the same name: “Cube(Clone)”. This can be fixed by appending the number in which each cube was instantiated: in cuberesizer.cs, which is attached to the cube, put this.name = this.name + numCubes.ToString(); in void Setup().
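
In context, that might look like the following; the static counter is my own assumption about where numCubes comes from:

// In cuberesizer.cs, attached to the cube prefab
static int numCubes = 0;

void Setup()   // called once per cube, e.g. from Start()
{
    numCubes++;
    this.name = this.name + numCubes.ToString();   // "Cube(Clone)1", "Cube(Clone)2", ...
}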

Let’s see how we can choose a specific cube to interact with. The cubes have box colliders attached to them, meaning that our virtual hands will bump against them and move them. To react to colliders, Unity offers a family of callbacks whose names start with OnCollision*(). We will use OnCollisionEnter(), which is called on the frame our hand first comes into contact with a cube. When we touch a cube, we want to interact with that one from then on, so we add the line currentCubeName = this.name. In our Update() function, we wrap all of our previous code in an if (currentCubeName == this.name) {} block to make sure we only operate on the current cube.
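
A sketch of that selection logic in cuberesizer.cs; making currentCubeName static so it is shared across all cubes is my own reading of the approach:

// In cuberesizer.cs, attached to each cube
static string currentCubeName;   // name of the most recently touched cube

void OnCollisionEnter(Collision collision)
{
    // Called on the frame something bumps into this cube. A stricter
    // version could check collision.gameObject to make sure it's a hand.
    currentCubeName = this.name;
}

void Update()
{
    // Only the most recently touched cube responds to the pinch gestures.
    if (currentCubeName == this.name)
    {
        // ... the move/scale code from earlier goes here ...
    }
}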

Voila!
