This is the fourth in a series of articles covering my journey learning and exploring one of the most powerful indie game creation tools available: Unity. I’ll be sharing my knowledge and discoveries over a 12-week period. Each week I’ll post primarily on the process of learning Unity 3D, along with other topics such as the history of the game engine, the community, and prominent artists and creators.
Last week I covered creating and applying materials in Unity and creating prefab components to populate your game scene. I’ve been using the Lynda.com course “Unity 3D Essential Training” by Craig Barr as a guide.
Creating Levels in Unity
Assembling scenes in Unity is actually very easy, with the use of things like prefabs, and snapping, and also taking advantage of pivot points for placement of objects… - Craig Barr
The term “level building” means combining the components of your scene (models, lighting, textures, cameras, animations and special effects) so that the player has a “level” to play through. A full game is often a series of levels strung together in a progression that is fun to play (and challenging as well).
Unity makes creating levels straightforward. Since the majority of your scene components will be created in another application (Blender, Maya, et al.), the important thing to remember is to place your creation at the origin point (0, 0, 0) in that application before you export it to Unity. This makes the object much easier to place when you bring it into your level.
Another important point is to add your prefab components (walls, ceiling and floor, for example) to the Hierarchy window instead of directly into the Scene view. When you drag and drop a component onto the Hierarchy, it lands at the origin point, from which you can move it to the position you want. If you drop the component into the Scene view instead, it lands wherever you release it in 3D space, which can make it hard to find.
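The same origin-first idea applies when you spawn prefabs from script. Here is a minimal sketch, assuming a prefab field assigned in the Inspector; the `SpawnAtOrigin` name and the final position are my own illustration, not from the course:

```csharp
using UnityEngine;

// Minimal sketch: spawning a prefab at the scene origin from a script,
// mirroring what dragging a prefab onto the Hierarchy does in the editor.
public class SpawnAtOrigin : MonoBehaviour
{
    // Assign a prefab (a wall or floor piece, say) in the Inspector.
    public GameObject prefab;

    void Start()
    {
        // Instantiate at (0, 0, 0) with no rotation...
        GameObject piece = Instantiate(prefab, Vector3.zero, Quaternion.identity);

        // ...then move it to wherever the level design needs it.
        piece.transform.position = new Vector3(4f, 0f, 2f); // hypothetical placement
    }
}
```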
Another concept that Mr. Barr makes clear in his Unity 3D Essential Training is that of pivot points. You can easily switch a component’s handle between the object’s actual pivot point and its center by clicking the pivot gizmo at the top of the Unity interface. And for rotation, you can orient the handle to the global space of the scene or to the local space of the game object itself.
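The global-versus-local distinction shows up in script as well. A small sketch, assuming a script attached to any game object (the rotation speed is arbitrary):

```csharp
using UnityEngine;

// Sketch: rotating in local versus world space, the script-side
// equivalent of the Local/Global toggle next to the pivot gizmo.
public class PivotSpaceDemo : MonoBehaviour
{
    void Update()
    {
        // Spin around the object's own (local) Y axis.
        transform.Rotate(0f, 90f * Time.deltaTime, 0f, Space.Self);

        // For comparison, Space.World would spin around the scene's
        // global Y axis instead:
        // transform.Rotate(0f, 90f * Time.deltaTime, 0f, Space.World);
    }
}
```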
Unity has excellent documentation on pivot points and I was able to further my understanding by reading the section on these topics in the current manual.
Animation in Unity
Anything that’s a component in Unity can be accessed and animated, and that includes textures. In the Essential Training course I’m following, I learned how to take an animation clip attached to a game object (a hatch that opens and closes above a conveyor belt) and split it into three separate animation clips: Open, Hold and Close. You do this in the Inspector (on the right side of the Unity interface), in the Animation tab of the model’s import settings; the conveyor belt texture itself is animated separately, by animating the tiling and offset values of its material.
This next part gets a little tricky and is specific to game-engine animation. You create an Animator Controller and then add the animation clips to it. It’s a bit like working procedurally: you hook up the clips in the order you want them to fire, and by adding transitions between the clips you control how each animation flows into the next. In essence, you are creating a kind of behavior for the game object. Once this is done, the animation can respond to physics or even to another animation.
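Firing those clips from gameplay code might look like the sketch below. It assumes an Animator Controller with “Open” and “Close” trigger parameters wired to the Open, Hold and Close clips; the `HatchController` name and its methods are my own illustration, not from the course:

```csharp
using UnityEngine;

// Sketch: driving the hatch's Animator Controller from game logic.
// Assumes "Open" and "Close" trigger parameters exist on the controller.
[RequireComponent(typeof(Animator))]
public class HatchController : MonoBehaviour
{
    private Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    // Call these from gameplay events, e.g. a physics trigger firing
    // when an object on the conveyor belt approaches the hatch.
    public void OpenHatch()  { animator.SetTrigger("Open"); }
    public void CloseHatch() { animator.SetTrigger("Close"); }
}
```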
Initially, this was a hard concept for me to grasp, but once I thought back over the games I’ve played it started to make sense. You are creating behaviors and interaction models for objects so that they fire when you want them to. Remember pulling a lever in a game and watching water fill a pit? Same concept.
There’s a lot more to animating in Unity, but the key takeaway for me is that any game component can be animated, and you can control the animation using Animator Controllers.