Chapter 8: Intro to Lighting

Always think about where your lighting should come from. Is there sunlight? Is there a lamp nearby? Try to avoid just placing arbitrary lights around your level.

Lighting Techniques in U3:

  1. Point Lights.
  2. Spot Lights.
  3. Sky Lights.

Light Maps and Shadow Maps

Calculating all lighting and shadowing for your level constantly during gameplay would be crippling to performance. To handle that problem, Unreal generates light maps and shadow maps for all objects and lights that do not move during gameplay.

A light map is a texture generated from a calculation of all of the lights that are illuminating a particular surface or static mesh. The benefit of this is that multiple lights can strike a single surface, and because the actual calculation is done in advance and “baked” in the form of a light map, there is a minimal drop in performance.    

A shadow map is very similar to a light map, but stores shadowing data instead of illumination data. Shadow maps also bear a distinct difference from light maps in that while a single light map can define lighting from many different lights, each light must have its own individual shadow map for a given surface. If many lights strike a single surface, the multiple shadow maps are combined to provide the final result.
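
To make the distinction concrete, here's a tiny sketch in Python (illustrative pseudocode only, not Unreal's actual implementation; the additive color model and the multiplicative shadow combination are my own simplifying assumptions):

```python
# Illustrative sketch: one light map texel vs. per-light shadow maps.
# Conceptual Python, not Unreal Engine code.

lights = [
    {"name": "Lamp",   "color": (1.0, 0.9, 0.7), "intensity": 0.8},
    {"name": "Window", "color": (0.6, 0.7, 1.0), "intensity": 0.5},
]

# A single light map texel sums the contribution of ALL static lights.
def bake_lightmap_texel(lights):
    r = g = b = 0.0
    for light in lights:
        lr, lg, lb = light["color"]
        r += lr * light["intensity"]
        g += lg * light["intensity"]
        b += lb * light["intensity"]
    return (min(r, 1.0), min(g, 1.0), min(b, 1.0))

# Shadowing is stored per light: one visibility value per light per texel.
# The final shadow factor combines the individual maps.
def combine_shadowmap_texels(per_light_visibility):
    result = 1.0
    for visibility in per_light_visibility:  # 1.0 = fully lit, 0.0 = fully shadowed
        result *= visibility
    return result

print(bake_lightmap_texel(lights))           # one texel holding both lights
print(combine_shadowmap_texels([1.0, 0.4]))  # lamp unshadowed, window 60% blocked
```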

On the left, a level with only light maps. On the right, the same level with light and shadow maps.

Static and Dynamic Lighting

Static lights are lights which do not move during gameplay, meaning that their bMovable property is set to False. Dynamic lights are lights which can move, meaning that their bMovable property is set to True.

 

Per-Vertex Lighting and Lighting Subdivisions

This concept pertains only to the lighting of static meshes.

Per-vertex lighting means that a triangle only appears lit if one or more of its vertices is receiving light.

As of Unreal Engine 3.0, lighting is calculated across static meshes using lighting subdivisions. This means that each polygon is internally subdivided into multiple parts, and the light striking each part is calculated. The final result is blended together, resulting in the final lighting for the polygon. Naturally, this calculation is more complex, though the performance hit affects level build times much more than gameplay.

 

The true benefit of lighting subdivisions comes from being able to change the number of subdivisions used to calculate the lighting across each polygon. The higher your subdivisions, the clearer your shadows will be, though the more calculation will be needed to create your shadows.

Where per-vertex lighting only looks to see if vertices are illuminated, lighting subdivisions allow for multiple sample points across each polygon for a more accurate result.
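
Here's a small illustrative sketch (plain Python, not engine code; the light_at() test is a made-up stand-in for the engine's real lighting query) showing why interior sample points catch light that the corner vertices miss:

```python
import math

def light_at(point):
    # Hypothetical spot of light falling on the middle of the face.
    return 1.0 if math.dist(point, (0.4, 0.3)) < 0.25 else 0.0

def per_vertex_lit(vertices):
    # Per-vertex style: lit only if some corner receives light.
    return any(light_at(v) > 0.0 for v in vertices)

def subdivided_samples(vertices, subdivisions):
    # Subdivision style: sample the triangle interior on a barycentric grid.
    (ax, ay), (bx, by), (cx, cy) = vertices
    n = subdivisions
    samples = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            u, v = i / n, j / n
            w = 1.0 - u - v
            point = (u * ax + v * bx + w * cx, u * ay + v * by + w * cy)
            samples.append(light_at(point))
    return sum(samples) / len(samples)  # blended result for the polygon

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(per_vertex_lit(tri))         # False: no corner catches the light
print(subdivided_samples(tri, 8))  # > 0: interior samples do catch it
```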

 

Types of Lights

The four primary types of lights are point lights, spot lights, directional lights, and skylights.

The first three types of lights (point, spot, and directional) each have two subtypes available: toggleable lights and moveable lights. This means that you can have a normal point light, a toggleable point light, or a moveable point light.

Toggleable Lights

Toggleable lights are lights that can be turned on and off during gameplay. This is done through the use of the Toggle sequence object within Kismet.

Moveable Lights

Moveable lights, as the name suggests, are dynamic lights that can move throughout your level. This could be anything from a point light that moves along a path to a spotlight that rotates to point at a target. This motion is handled with Matinee.

As with a toggleable light, the light from a moveable light cannot be used in a light map.

 

Point Lights

Point lights emit light from a single point in all directions, kind of like a light bulb. They are the most often used type of light, especially in indoor levels. The light has a Radius property that controls how far light travels from the location of the actor. Point lights are available in standard static form (PointLight), toggleable (PointLightToggleable), and moveable (PointLightMovable).

Icons for a PointLight, a PointLightToggleable, and a PointLightMovable.
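
As a rough sketch of what the Radius property means for attenuation, here's a simplified falloff function in Python. This is not Unreal's exact formula; the curve shape and the falloff_exponent parameter are illustrative assumptions standing in for whatever shaping the engine applies:

```python
# Illustrative attenuation sketch: light contribution fades from full
# strength at the actor's location to zero at Radius.
def point_light_attenuation(distance, radius, falloff_exponent=2.0):
    if distance >= radius:
        return 0.0                  # outside the radius: no influence at all
    normalized = distance / radius  # 0 at the light, 1 at the radius
    return (1.0 - normalized) ** falloff_exponent

for d in (0.0, 256.0, 512.0, 1024.0):
    print(d, point_light_attenuation(d, radius=1024.0))
```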

 

Spot Lights

Spot lights emit light from a single point in a confined cone, like a real-world spotlight or flashlight. This cone can be opened and closed to focus the light via the InnerConeAngle and OuterConeAngle properties.

Icons for a SpotLight, a SpotLightToggleable, and a SpotLightMovable.
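
A simplified sketch of the inner/outer cone idea (again, illustrative Python rather than Unreal's actual math):

```python
# Full brightness inside InnerConeAngle, fading to zero at OuterConeAngle.
def spot_falloff(angle_to_axis_deg, inner_cone_deg, outer_cone_deg):
    if angle_to_axis_deg <= inner_cone_deg:
        return 1.0                                 # inside the inner cone
    if angle_to_axis_deg >= outer_cone_deg:
        return 0.0                                 # outside the outer cone
    span = outer_cone_deg - inner_cone_deg
    return 1.0 - (angle_to_axis_deg - inner_cone_deg) / span

for angle in (0, 15, 30, 45):
    print(angle, spot_falloff(angle, inner_cone_deg=20, outer_cone_deg=40))
```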

 

Directional Lights

Directional lights are used when simulating an extremely distant light source, such as the Sun, and are usually only used in outdoor situations.

DirectionalLight actors come only in standard form (DirectionalLight) and toggleable form (DirectionalLightToggleable).

On the left are shadows created with a point light, and on the right with a directional light. Notice the parallel nature of the directional light’s shadows.

 

SkyLight

A SkyLight actor emits light from two theoretical hemispheres, upper and lower. These hemispheres are calculated as if they were infinite in size, meaning that there is no way to move an actor “outside” the hemisphere. The purpose of the SkyLight is to provide ambient lighting to a level, thus preventing any shadows from falling into full blackness. Each of the two hemispheres has its own color and brightness values, meaning that you can have brighter light coming from above with dimmer light coming from below, simulating the effect of diffused light coming from many surfaces at once. SkyLights do not have a toggleable or movable form.
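
Conceptually, the two-hemisphere blend might be sketched like this (illustrative Python only; the linear blend on the normal's Z component is an assumption, not Unreal's documented formula):

```python
# A skylight blends an upper and a lower hemisphere color based on which
# way a surface normal points.
def skylight_ambient(normal_z, upper_color, upper_brightness,
                     lower_color, lower_brightness):
    # normal_z is the Z component of a unit surface normal:
    # +1 faces straight up, -1 faces straight down.
    up_weight = (normal_z + 1.0) / 2.0  # 1.0 facing up, 0.0 facing down
    return tuple(
        up * upper_brightness * up_weight + lo * lower_brightness * (1.0 - up_weight)
        for up, lo in zip(upper_color, lower_color)
    )

sky, ground = (0.5, 0.6, 1.0), (0.3, 0.25, 0.2)
print(skylight_ambient( 1.0, sky, 1.0, ground, 0.4))  # upward-facing surface
print(skylight_ambient(-1.0, sky, 1.0, ground, 0.4))  # downward-facing surface
```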

Light Volumes are another method of determining what objects a light affects, aside from Lighting Channels.

 

In Unreal Engine 3.0, there are three different types of shadows available: Precomputed shadows, Shadow Volume Shadows, and Shadow Buffer Shadows.

 

Precomputed Shadows (Shadow Maps)

Precomputed shadows are used when static lights are illuminating static geometry.

 

Shadow Buffer Shadows

Shadow buffer shadows are used only when a static light illuminates dynamic geometry, such as characters, movers, etc.

 

Shadow Volumes

Shadow volumes are used only when a dynamic light illuminates dynamic geometry or static geometry.

 

Shadow volumes do have a serious performance drain when illuminating objects with open edges (i.e. holes in the mesh). In such instances, a much slower double-sided shadow volume is used by the engine to calculate shadows at a marked performance loss. To prevent this, make sure that your meshes have no open edges, which is sometimes described as being “watertight”.
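
Checking for watertightness is straightforward in any modeling pipeline: a closed mesh has every edge shared by exactly two triangles. Here's a minimal sketch of that test in Python (a generic check, not part of UnrealEd):

```python
from collections import Counter

# A mesh is "watertight" (closed) if every edge is shared by exactly two
# triangles.  This checks that property for an indexed triangle list.
def is_watertight(triangles):
    edge_counts = Counter()
    for a, b, c in triangles:
        for edge in ((a, b), (b, c), (c, a)):
            edge_counts[tuple(sorted(edge))] += 1  # ignore winding direction
    return all(count == 2 for count in edge_counts.values())

# A tetrahedron (4 faces) is closed; removing one face opens the mesh.
tetrahedron = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetrahedron))       # True: safe for shadow volumes
print(is_watertight(tetrahedron[:-1]))  # False: open edges, slower path
```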

 

Light Environments are the method of choice for handling dynamic lighting from multiple sources with minimal overhead. In fact, all pawns in the game Gears of War were lit through the use of Light Environments!

 

The following are some simple performance-friendly guidelines to keep in mind when lighting your levels.

  • Keep the amount of dynamic shadows, either cast by dynamic lights or by static lights on dynamic geometry, to an absolute minimum.
  • Light Environments should be used whenever possible in place of dynamic lighting and shadowing.
  • If any light has no potential to cast shadows, then disable shadow casting on the light by setting its CastShadows property to False.
  • If any object has no potential to cast shadows, disable shadow casting on the object by setting its CastShadows property to False.
  • Use subtle ambient or bounce lighting (using SkyLights or multiple point lights) and/or modulated shadows to avoid super-black shadows.
  • All lights, other than ambient or bounce lighting, should have a visual source, such as a static mesh, in the world. Always ask yourself, “Where is the light coming from?”
  • When necessary, add lights for emissive surfaces, since they only give the appearance of emitting light, but do not actually emit light themselves.
  • Remember that shadow maps require a separate map to be stored for every light affecting each individual object. This means that too many lights casting shadows on many static objects can result in memory problems!
  • When using shadow maps, make sure to only use a resolution high enough to give the necessary detail.
  • Enable UseLightMap whenever possible, such as when static lights are affecting static objects, to force lighting to be baked into Lightmaps.
  • Avoid Light Functions when possible, as they cannot be stored in Lightmaps, even when they are static. However, at the time of this writing, the ability for static Light Function lights to be calculated in the Lightmap may be added in a future release of the engine.
  • Keep shadow buffer casting meshes out of immediate proximity of point lights.

Chapter 6: Intro to Materials

In this tutorial I’ve used:

  1. Unreal Editor.
  2. Material Editor.

With Unreal Engine 3.0, you are given the power to create worlds of stunning visual beauty and clarity. Materials are the single most important aspect of the final look of your game. Need a character that is so visually detailed that you could count the pores on her face? You can’t do it without a material. Need realistic looking fire for a particle system? You’ll need a flame-like material. Want to create a complex interactive machine with number readouts, lighted buttons, knobs, and switches? You’ll have to have a material to do that, too. No matter if you’re making large-scale environments, interactive objects, playable characters, or just simple props to spice up your level, a material must be used to control the object’s final look.

The very simplest way to think of a material in terms of a game engine is as a paint that is applied to surfaces of objects in your level.

Many newcomers to game design have a difficult time differentiating between materials and textures. Simply put, the difference is that a texture is merely an image, while a material is a combination of many different elements, including textures. It is easiest to think of a texture as being a component of a material. When you create materials you will use textures to provide color, transparency, glow, and a variety of other effects for your material.

Part of the confusion in knowing the difference between materials and textures is that in versions of UnrealEd prior to the release of Unreal Engine 3, textures or materials could be applied directly to surfaces. This is no longer true. Only materials may be applied to surfaces. Textures are attached to the final material, which is then applied to the given surface.

Wrapping a static mesh with a texture is done through texture coordinates (UVs).

Starting at the bottom left corner, the texture being applied to the surface is mapped from 0 to 1 horizontally and from 0 to 1 vertically. These are the U and V coordinates respectively, sometimes referred to as ‘tangent space’. Each vertex of a surface has values that correspond to the U and V coordinates of the texture that is to be applied to the surface. The texture is then painted onto the surface according to those texture coordinates.
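
A bare-bones sketch of that lookup in Python (illustrative only; real engines filter and wrap textures in ways this ignores, and the bottom-left origin is just the convention described above):

```python
# Each vertex carries a (u, v) pair in the 0-1 range that picks a texel
# out of the texture image.
def sample_texture(texture, u, v):
    height = len(texture)
    width = len(texture[0])
    # Clamp to [0, 1], then map U to columns and V to rows.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    col = min(int(u * width), width - 1)
    row = min(int((1.0 - v) * height), height - 1)  # flip: row 0 is the top
    return texture[row][col]

texture = [
    ["R", "G"],  # top row of a tiny 2x2 texture
    ["B", "W"],  # bottom row
]
print(sample_texture(texture, 0.0, 0.0))  # bottom-left -> "B"
print(sample_texture(texture, 1.0, 1.0))  # top-right   -> "G"
```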

Figure – Here you can see how the UVs of a polygonal object affect how the texture is applied across the object’s surface

Once you start really getting into creating your own materials, you’re going to come across the word ‘instructions’. In this case, instructions are commands that are passed to the computer to handle various aspects of the material’s behavior. In fact, even when you create a basic material for the first time, you will notice that the Material Editor’s Expression Window tells you that the material contains many instructions by default, most of which are used to simulate the lighting on the surface of your material.

For the most part, your materials are useless without some sort of light in place to allow you to see them. Also, when you really get down to the bare bones of it, materials are all about controlling how a given surface responds to light.

In general, a material is comprised of three primary components that must all work in unison to create the final result. These components are Material Nodes, Material Channels, and Material Expressions. The three components interact in the manner described in the following diagram.

A normal map is essentially a texture that describes differing elevations across a surface, as well as the angles of each part of the surface as the elevation changes.

Chapter 5: Static Meshes

In this tutorial I’ve used:

  1. UnrealEd.
  2. 3ds Max.
  3. Static Mesh Editor.

The creation of static meshes can take place in a number of ways. For our purposes, we’re going to narrow it down into two primary workflows. Both of these workflows can be described in the flow chart below:


The first workflow involves the creation of static meshes in the form of a high-resolution, or high-polygon, mesh, which will later be used to produce a normal map. This technique allows the modeler to create tremendous detail at the physical level. Once this high-resolution mesh is completed, a lower-resolution model is created. This low-resolution mesh is what will actually be displayed in-game, but using the normals generated from the high-rez version.

The second workflow requires only a low-resolution mesh. Instead of modeling a high-rez mesh to create normals, a normal map is generated inside Photoshop from a grayscale bump map using a free plug-in from nVidia. Free plug-ins to convert height maps into normal maps also exist for PaintShop Pro and GIMP, and can be found quickly with an Internet search. If your static mesh is fairly simple, you may find that this method can save you considerable time over having to create high-resolution geometry for every single mesh in your level.
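
The underlying conversion is simple enough to sketch: treat each pixel's brightness as a height, take the slope between neighbors, and pack the resulting surface normal. The following Python is a simplified finite-difference version of the idea, not the nVidia plug-in's actual algorithm:

```python
import math

# Convert a grayscale height map into per-pixel surface normals.
def height_to_normals(height, strength=1.0):
    rows, cols = len(height), len(height[0])
    normals = []
    for y in range(rows):
        row = []
        for x in range(cols):
            # Slope from neighboring height samples (clamped at the borders).
            left  = height[y][max(x - 1, 0)]
            right = height[y][min(x + 1, cols - 1)]
            up    = height[max(y - 1, 0)][x]
            down  = height[min(y + 1, rows - 1)][x]
            nx = (left - right) * strength
            ny = (up - down) * strength
            nz = 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals

bump = [[0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],  # a single raised point in the middle
        [0.0, 0.0, 0.0]]
for row in height_to_normals(bump):
    print([tuple(round(c, 2) for c in n) for n in row])
```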

You must keep in mind, however, the great number of possibilities that materials allow. Do you want some parts of the object to glow? Are some areas shinier than others? Does the texture animate in some way? You will need to answer these questions and more before you can determine exactly what it is your material, and therefore your textures, need to accomplish. Hopefully, you will already have a good idea of the final look before you even begin modeling. Even so, a good plan is to have some sketches or notes on hand during texture creation to help keep you on track.

The Static Mesh Editor is the primary tool for setting up a static mesh for use in your levels. With it, you can change a static mesh’s global material, set up its collisions, and adjust its properties. Its interface is fairly simple, providing you with a preview of the mesh itself, as well as a few tools and a list of properties.

The most important use of the Static Mesh Editor is to provide you the final setup you need before your new static mesh is ready for placement in your level. In the next tutorial, we will take a look at how we can use it to adjust our preview snapshot and to get our new material applied.

To create this collision you will need to establish a collision model for your object. A collision model is a simple polygonal mesh which is invisible to the player, and acts sort of like a “force field” to prevent actors from passing through the static mesh. In most cases, it will surround the outside of the static mesh, though it will be very close to the surface. Fortunately, UnrealEd makes the process of creating these collision objects extremely simple. However, it will help if you know a few things about the types of collisions you can create, so that you can choose the proper one for your model.

DOP stands for discrete oriented polytope; the number preceding it refers to how many planes will be used to create the collision object.

All collision meshes must be convex.
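
As a rough illustration of the idea (generic Python, not UnrealEd's code): building a k-DOP amounts to projecting every vertex onto a fixed set of axes and keeping the extremes, with each axis contributing a near and a far plane. Six planes, as below, give a simple axis-aligned box:

```python
# Three axes, each producing a min plane and a max plane: a box-like 6DOP.
AXES_6DOP = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def build_kdop(vertices, axes=AXES_6DOP):
    slabs = []
    for axis in axes:
        # Project every vertex onto the axis and keep the extremes.
        projections = [sum(v * a for v, a in zip(vertex, axis))
                       for vertex in vertices]
        slabs.append((axis, min(projections), max(projections)))
    return slabs  # each entry: (axis, near-plane offset, far-plane offset)

mesh = [(0, 0, 0), (2, 0, 0), (1, 3, 0), (1, 1, 4)]
for axis, lo, hi in build_kdop(mesh):
    print(f"axis {axis}: planes at {lo} and {hi}")
```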

Chapter 4: A Universe of Brushes: World Geometry In-Depth

In this tutorial I’ve used:

  1. UnrealEd.

BSP brushes provide the means to create the base layout for your levels. BSP brushes can also be used to “prototype” a level, allowing you a fast way to generate a preview of what the level will look like without having to wait for assets to be created.

There are two important acronyms you must remember when working with world geometry: BSP and CSG. BSP stands for Binary Space Partition, a name which derives from the calculations used to tell the game’s engine the proper order to render each polygon. However, the intricacies of how BSP works internally are more than most level designers will really need. Because of this, we will simply show you the available tools, and give you some insight on how they can be used during level construction.

CSG stands for Constructive Solid Geometry, and is simply another term for world geometry. When working with UnrealEd, it refers to the geometry that is created from your BSP brushes. The general workflow is that the user creates the BSP brushes, and then UnrealEd uses those brushes to create the level’s CSG, or world geometry.

All BSP creation revolves around the construction and placement of brushes. In UnrealEd, a brush is simply a three-dimensional object used to designate a certain area of space. There are three different types of brush that you will use most often during level creation:

  1. The Red Builder Brush
  2. Additive brushes, and
  3. Subtractive brushes.

If you need to hide the Red Builder Brush while working in UnrealEd, you may simply enter Game Mode by pressing the G key.

Additive brushes allow you to add mass into your level. They are created from the Red Builder Brush by clicking the CSG: Add button in the Toolbox. Additive brushes appear in your viewports as blue wireframes. Additive brushes will be created in the precise shape of the Red Builder Brush.

Subtractive brushes are used to remove mass from your level, similar to carving out the current shape of the Red Builder Brush. You can create them by clicking the CSG: Subtract button in the Toolbox. Subtractive brushes appear as yellow wireframes in your level.

Additive and Subtractive Levels

Whether you primarily use additive or subtractive brushes in your level will usually depend on the type of level you create. As of the release of Unreal Engine 3.0, users can create levels in an additive or subtractive manner, rather than being limited only to subtractive levels as they were in previous generations of the engine. The difference is quite simple; an additive level can be thought of as a massive area of open air, into which you will additively create a level. A subtractive level, on the other hand, is a massive area of solid mass, much like being inside a mountain. From this mass, you will carve out your level. Neither approach is technically “better” or more efficient than the other; the decision of which to use will typically be a matter of personal preference.

Moving with Pivots

Brush pivots provide center of movement for your brushes. When you’re snapping to the grid, this pivot determines the point at which the brush snaps. The pivot is also used as a point of rotation. For example, say you have a cube-shaped brush with its pivot precisely at the center of the cube. If you rotate the brush, you would see the cube appear to spin in place. However, if you relocated the pivot to the corner of the brush, you would see the cube rotate about its corner. You can change the pivot of a BSP brush by right-clicking on one of its vertices, or by right-clicking anywhere on the screen and using the options under Pivot.

Brush Order

Brush order is another important, if not so often used, aspect to BSP creation. Say you’ve constructed a level of a multistory building, all of which is contained within a large cube-shaped subtraction. Perhaps you later decide that placing the building inside a cylindrical subtraction would make more sense, and so you delete the subtractive cube and replace it with a cylinder. However, when you build your geometry, you find that the entire building has disappeared! This happens because the brushes were created in the wrong order. Think about it: If the last operation you perform is a large subtraction, then all of the additive brushes within that subtraction would be removed.
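
A toy one-dimensional example makes the point (plain Python, nothing to do with UnrealEd's actual CSG code): the same two brushes produce completely different geometry depending on which is applied last:

```python
# Apply add/subtract "brushes" in order over a 1-D strip of space.
def apply_brushes(brushes, size=10):
    solid = [False] * size                 # an empty (additive-style) level
    for operation, start, end in brushes:  # brushes applied in list order
        for i in range(start, end):
            solid[i] = (operation == "add")
    return "".join("#" if s else "." for s in solid)

building = ("add", 3, 7)
carve    = ("subtract", 0, 10)
print(apply_brushes([carve, building]))  # subtraction first: building survives
print(apply_brushes([building, carve]))  # subtraction last: building is gone!
```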

The Volumetric primitive is actually just a specified number of vertical sheets rotated about the Z-axis to give the effect of having volume. This can be used for creating effects such as fire, smoke, plasma, chains, or trees, where exact three-dimensional detail is not necessary or would become a hindrance to performance.

Brush solidity plays an important role in how your level’s world geometry will be created. If your brushes are too complex, you can end up with many divisions in your world geometry, which can harm performance. In general, it’s best to remember that BSP brushes should be used for prototyping your levels and to create the general volume, not for any sort of decoration. Save your decoration for static meshes!

Brush Types:

  1. Solid Brush: Solid brushes are by far the most common. In fact, every brush you have created so far has been a solid brush.
    1. Solid brushes have the following main properties:
      1. Solid brushes block players and projectiles in the game. This means you can’t run through them or shoot through them.
      2. Solid brushes can be additive or subtractive.
      3. Solid brushes create BSP cuts in their surrounding world geometry.
  2. Semi-Solid: Semi-solid brushes can be placed in a level without adding any extra BSP cuts to the surrounding world geometry. This can be beneficial when using brushes to create things such as pillars and beams, but you should note that such objects are typically reserved for static meshes.
    1. The following is a list of key attributes for semi-solid brushes:
      1. Semi-Solid brushes block players and projectiles, just as Solid brushes do.
      2. Semi-Solids can only be additive, never subtractive.
      3. Semi-Solids don’t leave BSP cuts in their surrounding world geometry.
  3. Non-Solid: Non-Solid brushes behave similarly to a hologram. They have no collision capabilities, and are therefore of fairly limited use.
    1. The following is a list of their properties:
      1. Non-Solid brushes do not block players or projectiles.
      2. Non-Solids can only be additive, never subtractive.
      3. Non-Solids do not leave BSP cuts in their surrounding geometry.

Brushes in Unreal are composed of polygons, which are shapes with many sides. When you are working in 3D, a polygon is a surface comprised of vertices, edges, and at least one face.

Soft selection uses a specified spherical radius from the selected vertex to determine all affected vertices’ selection weights. The weight falls off from 1 at the selected vertex to 0 at the specified radius. All vertices at or beyond the radius are essentially unaffected. Any transformation will be applied based on the weight of all affected vertices. The selected vertex will receive the full transformation, while a vertex halfway along the radius from the selected vertex will receive only half the transformation and a vertex at or beyond the radius will not receive any transformation at all.
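
The weighting scheme described above is easy to sketch (generic Python, not UnrealEd's implementation):

```python
import math

# Weight falls off linearly from 1 at the selected vertex to 0 at the radius.
def soft_select_weights(vertices, selected, radius):
    weights = []
    for vertex in vertices:
        distance = math.dist(vertex, selected)
        weights.append(max(0.0, 1.0 - distance / radius))
    return weights

def apply_translation(vertices, weights, offset):
    # Each vertex moves by the full offset scaled by its weight.
    return [tuple(c + o * w for c, o in zip(v, offset))
            for v, w in zip(vertices, weights)]

verts = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (8.0, 0.0, 0.0), (12.0, 0.0, 0.0)]
weights = soft_select_weights(verts, selected=(0.0, 0.0, 0.0), radius=8.0)
print(weights)                                       # [1.0, 0.5, 0.0, 0.0]
print(apply_translation(verts, weights, (0, 0, 2)))  # full, half, none, none
```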


Chapter 3: Up and Running: A Hands-On Level Creation Primer

In this tutorial I’ve used:

  1. UnrealCascade.
  2. UnrealEd.
  3. UnrealKismet.

The Lightmap Resolution property may seem counterintuitive, as the lower you set the number, the sharper the shadows get. Think of the number as the number of Unreal units covered by a single point of shadow. Hence, lower values will tighten up the shadows. At the same time, having smaller shadow points requires more points to be placed to create a contiguous shadow. This means that lower Lightmap Resolution settings will require more processing power and could slow down your level. Be careful about doing this to too many surfaces!
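
Some quick arithmetic shows how fast the cost grows (a back-of-the-envelope sketch; the function below just counts shadow points, it is not how Unreal actually budgets lighting):

```python
# Halving the Lightmap Resolution value quadruples the number of shadow
# points needed to cover the same surface.
def shadow_points(surface_w, surface_h, lightmap_resolution):
    # lightmap_resolution = Unreal units covered by one shadow point.
    return (surface_w / lightmap_resolution) * (surface_h / lightmap_resolution)

for res in (64, 32, 16):
    points = shadow_points(1024, 1024, res)
    print(f"resolution {res}: {points:.0f} points")  # sharper but costlier
```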

Particles are points in space that can be used to create a variety of effects from steam to smoke to fire to dust and even debris! A particle setup in Unreal can be very simple, or can grow to become quite complex.

Under the Lighting tab, setting bCastDynamicShadow to false (unchecked) prevents the door from casting a shadow that is visible through the wall while it is opening.

To put it simply, Kismet is a visual scripting system that allows you to create complex events by simply connecting a series of nodes together into a network. As your sequences get more involved, you will find yourself creating more and more complex networks of nodes.

Below are output images from the level I’ve designed:

Chapter 2: Overview of Game Development

So what is iteration, in terms of game development? Simply put, it’s the process of coming up with and refining an idea, developing a game (or a gaming element) based on that idea, and then critiquing it within your team to decide whether the idea works or how it could be better.

So how does one come up with the great ideas that make games fun? Simple: Play games.

First impressions can be very important. Get your idea fully fleshed out (or as close to it as possible) before you try to present it. If you want to get some input or feedback, that’s fine, but don’t present the full idea until it’s ready to be seen.

AAA, B, Mod Games

AAA titles are games with big budgets, high quality, large teams, and broad marketing, which usually contributes to high sales as well. A game mod, on the other hand, is a modification to an existing game, such as changing its media or its gameplay scripts; this is usually done with a powerful editor that accompanies the game, as with UT3 or LittleBigPlanet.

There are also B titles, which are not small titles but are still not AAA ones in terms of budgeting and marketing. This is the type that startups target until they prove to publishers that they are worth the money it will take to make a AAA title.

It’s not the artist’s job to know what is and isn’t possible through code, just as it’s not the programmer’s job to make the models look realistic.

In larger productions, this is one area that is commonly outsourced. Basically, you and your team will play through the game, or perhaps just play through specific parts so that you can focus on just one feature, and then reconvene in a meeting to discuss what everyone found. This is just another part of the iterative process. Realize that at the outset, there won’t be much to play. It will take some time to come up with a playable shell, and the members of the team will have to be understanding while certain areas in the game are only ‘rough visualizations’ of effects or assets to come. This type of testing is commonly called alpha testing.

Closed beta testing means testing the game in-house, whereas open beta testing means letting the gaming public test it.

Chapter 1: Introduction to Unreal Technology

What is the Unreal Engine, exactly? In short, it is simply a system that combines a series of user-created assets into a visually stunning interactive environment.

Before getting into detail, watch this video presented at GDC ’08 about the Unreal Engine.

Unreal Engine Components:

  1. Graphics Engine:
    1. New to Unreal Engine 3 is the concept of occlusion, which prevents hidden objects from rendering. To illustrate, consider standing in a room, with another room just beyond the wall you’re facing. In previous versions of the Unreal Engine, the room on the other side of the wall would render by default, and could only be hidden through various level optimization techniques. With Unreal Engine 3, an occlusion calculation is processed, and any objects that are hidden from view, such as the room beyond the wall, are no longer rendered. This saves the level designer from having to worry about intricate level optimization procedures.
    2. Unreal Engine 3 supports Level Streaming technology, which allows different levels to be loaded and unloaded from memory during gameplay. This means that vast areas can be created and seamlessly integrated in such a way that as you enter one section of the level, the area you’re exiting is dumped from memory and is no longer rendered by the graphics engine.
    3. Unreal Engine 3 uses normal mapping technology, in which the surface normals of high-polygon objects are applied to lower-resolution geometry, creating the illusion that a simple in-game model is actually comprised of millions of polygons. The graphics engine handles how these normal maps appear on each object.
    4. In this video, you’ll see a comparison between Unreal Engine 3 and CryEngine 2 graphics.
  2. Sound Engine.
  3. Physics Engine.
    1. In these videos, you’ll see some of U3’s physics power:
      1. Rigid Bodies.
      2. Mass physics with 50,000 barrels.
      3. Cloth physics.
      4. Joints.
  4. Input Manager.
  5. Network Infrastructure.
    1. If a single computer is running strictly as a server with no client running at the same time, it is said to be a ‘dedicated server’. Dedicated servers are generally preferred for online gaming, as the server can do its job without the added overhead of a client.
  6. UnrealScript Interpreter.

Engine Interaction

The Unreal Engine also uses a game loop, but it is an event-driven system. This means that the engine contains a list of events that the various components of the engine will need to address. These events are created by a variety of different sources, such as player inputs, data from the physics system, or when one component tries to communicate with another. Everything is passed through the system via this event list.

The benefit to this system is that each of these events is given a specific priority. This means that rather than every single aspect of the game being updated during every game cycle in a specific order, the events are processed based on importance.

The key is that the most important events are processed before the non-critical ones.
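
A priority queue is the classic way to model this kind of dispatch. The sketch below is conceptual Python, not Unreal's internals; the specific events and priority numbers are made up for illustration:

```python
import heapq

# Event-driven dispatch: lower numbers are handled first, regardless of
# the order in which events arrived.
events = []
heapq.heappush(events, (0, "player input: fire weapon"))
heapq.heappush(events, (2, "update distant ambient sound"))
heapq.heappush(events, (1, "physics: crate collision"))

while events:
    priority, event = heapq.heappop(events)
    print(f"priority {priority}: {event}")
# Critical gameplay events drain from the queue before cosmetic ones.
```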

At startup the engine core initializes the graphics engine, the sound engine, the physics engine, and the UnrealScript interpreter.

Game Assets

Virtually all of the data that Unreal uses is stored in a series of packages. You can think of a package as a collection of assets, such as textures, static meshes, animation data, and other key elements you will need for your game or project. Packages are often separated by the kind of assets they contain, though this is certainly not always the case. You could, for example, have a package called ComputerMeshes that held nothing more than static meshes for a computer scene. However, you could also have the textures for those meshes in the package as well if you like. In the end, just think of a package as a container used to load assets into the game. All assets used in gameplay – excluding maps – are contained within packages.

Maps

Technically speaking, a map is simply a collection of various game assets that are shown to the player, many of these assets coming from several different packages. Artistically, the map is where you set the stage for your game or visualization experience.

As a matter of convenience, maps have their own package contained within them. This package is contained entirely within the level, which is nice when you have assets that need to exist only within that level and nowhere else. In most cases, though, you’ll store your assets in packages other than your level’s package. This is partly because of the ability to access those assets in other levels, which you cannot do for assets saved in a map’s package. Also, it’s important to keep in mind that any assets that are in the map’s package but are not referenced in the level will be culled from the level’s package the next time the level is saved. For example, if you had a texture of a brick wall in the level’s package, but never actually used it in any of the materials in the level, that brick wall texture would be dumped from the package the next time you saved.

Textures and Materials

Textures are simply 2D images that are used to create materials in your levels and on other game assets.

Some textures, such as normal maps, give the illusion that the surface has much more physical detail than is actually allowed by the number of polygons. Through the use of normal maps, models that are only comprised of a few thousand polygons can appear to be constructed from millions of polys!

Static Mesh

Static meshes are optimized polygonal models which you use to decorate the vast majority of your level. In fact, most modern Unreal Engine 3 games are so completely covered in static meshes that you cannot even see the BSP geometry of the level.

The key to the static mesh is that it is designed to be instanced (or reproduced multiple times) completely within your video card. This allows for a very fast draw time and fast performance during gameplay, even in maps that are using thousands of static meshes. However, to gain the most benefit from static meshes, it’s a good idea to reuse a few meshes over and over. This is because only the first instance of a static mesh really uses a significant portion of memory. All other copies of that same mesh only have their positional, rotational, and scale information stored in the video card, meaning you could have thousands of copies with relatively little strain on your computer, provided you have a relatively recent video card. However, if you have thousands of different static meshes, each mesh will need its first instance stored in memory, which drains performance much more quickly than if you let your video card do what it was built to do.
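
A toy memory estimate makes the trade-off obvious (all numbers are invented for illustration; they are not Unreal's real costs):

```python
# Conceptual sketch: one shared mesh plus cheap per-instance transforms
# vs. many distinct meshes.
MESH_COST_KB = 500         # hypothetical cost of a mesh's first instance
INSTANCE_COST_BYTES = 64   # position + rotation + scale per extra copy

def memory_kb(distinct_meshes, total_copies):
    instances = total_copies - distinct_meshes
    return distinct_meshes * MESH_COST_KB + instances * INSTANCE_COST_BYTES / 1024

print(memory_kb(distinct_meshes=1,   total_copies=1000))  # ~562 KB
print(memory_kb(distinct_meshes=500, total_copies=1000))  # ~250,031 KB
```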

This doesn’t mean that you can’t use many different meshes in your levels. Far from it. Just realize that overdoing it can cause a drop in performance. How much is too much? The answer is a bit relative, and is based on:

  1. The specs of your computer
  2. How many dynamic lights are striking those meshes
  3. How many materials are on those meshes
  4. How many polygons each mesh has, and a myriad of other factors.

However, using UnrealEd’s Primitive Stats browser, you can sort all of the assets in your level by how much memory they’re taking up. If you notice that your level is playing a tad slowly, you can open this spreadsheet-style window and sort it by Resource Size, which will tell you how much memory each asset is taking up, with the biggest resource hogs at the top. If performance is low and static meshes are at the top of the Primitive Stats list in terms of memory usage, then that’s a pretty sound indication that it’s time to throttle back a bit on how many static meshes you’re using, how complex your meshes are (meaning the number of polygons and materials they’re using), and other factors as well.

Animation and Skeletal Meshes

Characters in a game are not (typically) static meshes. They are usually what is referred to as a skeletal mesh, which is a mesh that deforms based on a digital skeleton. These meshes are created and their skeleton is added – through a process called ‘rigging’ – within external 3D applications such as 3ds Max and Maya. These meshes are then animated to make them look like they are running, jumping, shooting, doing pelvic thrusts, or any other sequence they happen to need for game interaction.

These meshes are stored in packages, as are their animations. In fact, the animations are actually stored as a separate type of asset, which is extremely useful in that characters with similar skeletal structures can use each other’s animation sequences. This not only simplifies the process of animating the characters, but also frees up precious game resources by requiring fewer animation sequences to actually be loaded into the game.

The Tools of the Trade

Now you have a general idea of the kinds of things you’re going to need to create your Unreal worlds. The final question is what are you going to need to actually create them? This section will introduce you to a few of the tools you can use to create game assets. As you have seen, many of the assets you’ll need for your project must be created in external software applications.

UnrealEd

If you’re making levels for Unreal, you’re going to need UnrealEd sooner or later. UnrealEd is the central application you will use to create your maps, to populate them with meshes, create your materials, construct your sound effects from sound recordings, and much more. The key thing to remember about UnrealEd is that you can’t really use it to create 3D content, such as static and skeletal meshes, or content such as textures and sound files.

3D Applications

Since Unreal is a 3D gaming engine, you’re going to need three-dimensional assets at some point. This could be characters or other skeletal meshes, as well as static meshes. There are a great number of 3D applications on the market today that are fully capable of creating content for Unreal Engine 3. Two key applications that are frequently used in the industry are 3ds Max and Maya, both developed and distributed by Autodesk. You can find out more about these programs, as well as download evaluation copies, at www.autodesk.com.

Texturing Programs

In order to create the textures that you will need for your materials in the game, you will need to have some sort of application designed specifically for the task. Common programs used include Adobe Photoshop or Corel Painter. Both of these have a wide range of tools for working with photographic reference or textures, as well as painting your own.

Collada

Since there are so many 3D applications available to users these days, it makes sense that there be one single format that can be used to send information into Unreal Engine 3, thereby allowing users of any application to freely make Unreal game assets without having to switch to different software. Unreal Engine 3 supports the use of the Collada file format, which has plug-in versions for virtually every single major 3D application on the market. For more information about Collada, as well as to download the latest version of the plug-in, visit collada.org.

Sound Programs

You can’t record audio in UnrealEd, and so you’ll need some other software to capture sound effects and turn them into WAV files, which can then be added into UnrealEd’s SoundCue Editor and turned into SoundCue assets.

However, if you need a more professional environment to create your sound effects or music, you could consider Sony’s Sound Forge or Adobe Audition as possible software applications to create your effects. Whatever program you choose, make sure that it allows you to edit your waveform so that you can remove any slight pauses at the end of the effect, allowing for easy blending between different effects. Keep in mind, though, that you can edit modulation and volume with UnrealEd’s SoundCue Editor.

Training Sources

Many of the applications you need in order to create assets are very technical, and are not the kind of thing that you can just pick up and immediately start using. As such, many aspiring game artists will require some sort of training to help them along the way. There are many places where one can find 3D application and texturing training. One such place is 3DBuzz.com, which has hundreds of hours of training videos available for free download, as well as professional-level training videos available for purchase.

Reference: Mastering Unreal Technology: Level Design.