Chapter 3: Up and Running: A Hands-On Level Creation Primer

In this tutorial I’ve used:

  1. UnrealCascade.
  2. UnrealEd.
  3. UnrealKismet.

The Lightmap Resolution property may seem counterintuitive, as the lower you set the number, the sharper the shadows get. Think of the number as the number of Unreal units used to calculate a single point of shadow. Hence, lower values will tighten up the shadows. At the same time, having smaller shadow points requires more points to be placed to create a contiguous shadow. This means that lower Lightmap Resolution settings will require more processing power and could slow down your level. Be careful about doing this to too many surfaces!
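
As a rough worked example of the trade-off (a simplified sketch; the surface sizes and the exact texel math are illustrative assumptions, not Unreal's internal calculation), here is how the sample count grows as the resolution value shrinks:

#include <cstdio>

// Estimate how many lightmap/shadow samples a rectangular surface needs.
// Lower Lightmap Resolution value = fewer Unreal units per sample = more samples.
int main() {
    const float width  = 512.0f;  // surface width in Unreal units (illustrative)
    const float height = 256.0f;  // surface height in Unreal units (illustrative)

    const float resolutions[] = {64.0f, 32.0f, 8.0f};
    for (float res : resolutions) {
        const int samplesX = static_cast<int>(width / res);   // one sample every 'res' units
        const int samplesY = static_cast<int>(height / res);
        std::printf("Resolution %5.1f -> %d x %d = %d samples\n",
                    res, samplesX, samplesY, samplesX * samplesY);
    }
    return 0;
}

Dropping the resolution from 32 to 8, for example, sharpens the shadows but multiplies the sample count by 16, which is exactly why you should be careful about doing it on too many surfaces.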

Particles are points in space that can be used to create a variety of effects from steam to smoke to fire to dust and even debris! A particle setup in Unreal can be very simple, or can grow to become quite complex.

Under the Lighting tab, setting bCastDynamicShadow to false (unchecked) prevents the door from casting a shadow that is visible through the wall while it is opening.

To put it simply, Kismet is a visual scripting system that allows you to create complex events by simply connecting a series of nodes together into a network. As your sequences get more involved, you will find yourself creating more and more complex networks of nodes.

Below are output images from the level I’ve designed:

Chapter 2: Overview of Game Development

So what is iteration, in terms of game development? Simply put, it’s the process of coming up with and refining an idea, developing a game (or a gaming element) based on that idea, and then critiquing it within your team to decide whether the idea works or how it could be better.

So how does one come up with the great ideas that make games fun? Simple: Play games.

First impressions can be very important. Get your idea fully fleshed out (or as close to it as possible) before you try to present it. If you want to get some input or feedback, that’s fine, but don’t present the full idea until it’s ready to be seen.

AAA, B, Mod Games

AAA titles are games with big budgets, high production quality, large teams, and broad marketing, which usually translates into high sales as well. A game mod, by contrast, is a modification to an existing game, such as changing the media or altering gameplay scripts; this is usually done with a powerful editor that ships with the game, as with UT3 or LittleBigPlanet.

There are also B titles, which are not small projects but still fall short of AAA in terms of budget and marketing. These are the titles startups target until they can prove to publishers that they are worth the money it takes to make a AAA title.

It’s not the artist’s job to know what is and isn’t possible through code, just as it’s not the programmer’s job to make the models look realistic.

In larger productions, this is one area that is commonly outsourced. Basically, you and your team will play through the game, or perhaps just play through specific parts so that you can focus on just one feature, and then reconvene in a meeting to discuss what everyone finds. This is just another part of the iterative process. Realize that at the outset, there won’t be much to play. It will take some time to come up with a playable shell, and the members of the team will have to be understanding while certain areas in the game are only ‘rough visualizations’ of effects or assets to come. This type of testing is commonly called alpha testing.

Closed beta testing means testing the game in house, whereas open beta testing means letting the public play and test it.

Chapter 1: Introduction to Unreal Technology

What is the Unreal Engine, exactly? In short, it is simply a system that combines a series of user-created assets into a visually stunning interactive environment.

Before getting into detail, watch this video presented at GDC ’08 about the Unreal Engine.

Unreal Engine Components:

  1. Graphics Engine:
    1. New to Unreal Engine 3 is the concept of occlusion, which prevents hidden objects from rendering. To illustrate, consider standing in a room, with another room just beyond the wall you’re facing. In previous versions of the Unreal Engine, the room on the other side of the wall would render by default, and could only be hidden through various level optimization techniques. With Unreal Engine 3, an occlusion calculation is processed, and any objects that are hidden from view, such as the room beyond the wall, are no longer rendered. This saves the level designer from having to worry about intricate level optimization procedures (a small culling sketch appears after this components list).
    2. Unreal Engine 3 supports Level Streaming technology, which allows different levels to be loaded and unloaded from memory during gameplay. This means that vast areas can be created and seamlessly integrated in such a way that as you enter one section of the level, the area you’re exiting is dumped from memory and is no longer rendered by the graphics engine.
    3. Unreal Engine 3 uses normal mapping technology, in which the surface normals of high-polygon objects are applied to lower-resolution geometry, creating the illusion that a simple in-game model is actually composed of millions of polygons. The graphics engine handles how these normal maps appear on each object.
    4. In this video you’ll see a comparison between Unreal Engine 3 and CryEngine 2 graphics.
  2. Sound Engine.
  3. Physics Engine.
    1. In these videos, you’ll see some of UE3’s physics power:
      1. Rigid Bodies.
      2. Mass physics with 50,000 barrels.
      3. Cloth physics.
      4. Joints.
  4. Input Manager.
  5. Network Infrastructure.
    1. If a single computer is running strictly as a server with no client running at the same time, it is said to be a ‘dedicated server’. Dedicated servers are generally preferred for online gaming, as the server can do its job without the added overhead of a client.
  6. UnrealScript Interpreter.
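
To make the occlusion point above (Graphics Engine, item 1) more concrete, here is a minimal, engine-agnostic sketch of the idea; it is not Epic's actual implementation, and the visibility test is just a stand-in:

#include <vector>

struct Bounds { float x, y, z, radius; };
struct SceneObject { Bounds bounds; };

// Stand-in for a real visibility test (hardware occlusion query,
// software depth buffer, portal test, etc.).
bool IsOccluded(const Bounds& /*bounds*/) { return false; }

// Build the list of objects to draw this frame, skipping hidden ones.
std::vector<const SceneObject*> BuildDrawList(const std::vector<SceneObject>& scene) {
    std::vector<const SceneObject*> drawList;
    for (const SceneObject& obj : scene) {
        if (!IsOccluded(obj.bounds)) {
            drawList.push_back(&obj);  // only visible objects reach the renderer
        }
    }
    return drawList;
}

int main() {
    std::vector<SceneObject> scene(100);   // a dummy scene
    return BuildDrawList(scene).empty() ? 1 : 0;
}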

Engine Interaction

The Unreal Engine also uses a game loop, but it is an event-driven system. This means that the engine contains a list of events that the various components of the engine will need to address. These events are created by a variety of different sources, such as player inputs, data from the physics system, or when one component tries to communicate with another. Everything is passed through the system via this event list.

The benefit to this system is that each of these events is given a specific priority. This means that rather than every single aspect of the game being updated during every game cycle in a specific order, the events are processed based on importance.

The key is that the most important events are processed before the non-critical ones.
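
As a minimal sketch of what such a priority-ordered event list could look like (an illustration of the concept only, not the engine's actual internals; the priorities and handlers are made up):

#include <functional>
#include <queue>
#include <vector>

struct Event {
    int priority;                  // higher value = more important
    std::function<void()> handle;  // the work this event represents
};

// Order the queue so the highest-priority event comes out first.
struct ByPriority {
    bool operator()(const Event& a, const Event& b) const {
        return a.priority < b.priority;
    }
};

int main() {
    std::priority_queue<Event, std::vector<Event>, ByPriority> events;

    // Events come from many sources: player input, physics, component messages...
    events.push({10, [] { /* apply player input */ }});
    events.push({50, [] { /* resolve a physics collision */ }});
    events.push({ 1, [] { /* update a purely cosmetic effect */ }});

    // One tick: drain the list, most important events first.
    while (!events.empty()) {
        events.top().handle();
        events.pop();
    }
    return 0;
}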

At startup the engine core initializes the graphics engine, the sound engine, the physics engine, and the UnrealScript interpreter.

Game Assets

Virtually all of the data that Unreal uses is stored in a series of packages. You can think of a package as a collection of assets, such as textures, static meshes, animation data, and other key elements you will need for your game or project. Packages are often separated by the kind of assets they contain, though this is certainly not always the case. You could, for example, have a package called ComputerMeshes that held nothing more than static meshes for a computer scene. However, you could also have the textures for those meshes in the package as well if you like. In the end, just think of a package as a container used to load assets into the game. All assets used in gameplay – excluding maps – are contained within packages.

Maps

Technically speaking, a map is simply a collection of various game assets that are shown to the player, many of these assets coming from several different packages. Artistically, the map is where you set the stage for your game or visualization experience.

As a matter of convenience, each map has its own package, contained entirely within the level. This is nice when you have assets that need to exist only within the level and nowhere else. In most cases, though, you’ll store your assets in packages other than your level’s package. This is partly because of the ability to access those assets in other levels, which you cannot do for assets saved in a map’s package. Also, it’s important to keep in mind that any assets that are in the package but are not referenced in the level will be culled from the level’s package the next time the level is saved. For example, if you had a texture of a brick wall in the level’s package, but never actually used it in any of the materials in the level, that brick wall texture would be dumped from the package the next time you saved.

Textures and Materials

Textures are simply 2D images that are used to create materials in your levels and on other game assets.

Some textures, such as normal maps, give the illusion that the surface has much more physical detail than is actually allowed by the number of polygons. Through the use of normal maps, models that are composed of only a few thousand polygons can appear to be constructed from millions of polys!

Static Mesh

Static meshes are optimized polygonal models which you use to decorate the vast majority of your level. In fact, most modern Unreal Engine 3 games are so completely covered in static meshes that you cannot even see the BSP geometry of the level.

The key to the static mesh is that it is designed to be instanced (or reproduced multiple times) completely within your video card. This allows for a very fast draw time and fast performance during gameplay, even in maps that are using thousands of static meshes. However, to gain the most benefit from static meshes, it’s a good idea to reuse a few meshes over and over. This is because only the first instance of a static mesh really uses a significant portion of memory. All other copies of that same mesh only have their positional, rotational, and scale information stored in the video card, meaning you could have thousands of copies with relatively little strain on your computer, provided you have a relatively recent video card. However, if you have thousands of different static meshes, each mesh will need its first instance stored in memory, which drains performance much more quickly than if you let your video card do what it was built to do.
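
A rough sketch of why reuse is cheap (a simplified illustration of the data layout, not Unreal's actual implementation; the struct names are made up): the heavy vertex data exists once per unique mesh, while each placed copy stores only a small transform.

#include <memory>
#include <vector>

// Heavy, shared data: stored once per unique static mesh.
struct StaticMeshData {
    std::vector<float> vertices;   // positions, normals, UVs...
    std::vector<int>   indices;
};

// Light, per-copy data: each placed mesh stores only a transform
// plus a reference to the shared mesh data.
struct StaticMeshInstance {
    std::shared_ptr<const StaticMeshData> mesh;  // shared, not copied
    float position[3];
    float rotation[3];
    float scale[3];
};

int main() {
    auto crate = std::make_shared<StaticMeshData>();  // the one expensive load

    // Thousands of copies cost little: only the transforms are duplicated.
    std::vector<StaticMeshInstance> level;
    for (int i = 0; i < 5000; ++i) {
        level.push_back({crate, {static_cast<float>(i), 0.f, 0.f},
                         {0.f, 0.f, 0.f}, {1.f, 1.f, 1.f}});
    }
    return 0;
}

Five thousand different meshes, by contrast, would mean five thousand copies of the heavy vertex data.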

This doesn’t mean that you can’t use many different meshes in your levels. Far from it. Just realize that overdoing it can cause a drop in performance. How much is too much? The answer is a bit relative, and is based on:

  1. The specs of your computer
  2. How many dynamic lights are striking those meshes
  3. How many materials are on those meshes
  4. How many polygons each mesh has, and a myriad of other factors.

However, using UnrealEd’s Primitive Stats browser, you can sort all of the assets in your level by how much memory they’re taking up. If you notice that your level is playing a tad slowly, you can open this spreadsheet-style window and sort it by Resource Size, which will tell you how much memory each asset is taking up, with the biggest resource hogs at the top. If performance is low and static meshes are at the top of the Primitive Stats list in terms of memory usage, then that’s a pretty sound indication that it’s time to throttle back a bit on how many static meshes you’re using, how complex your meshes are (meaning the number of polygons and materials they’re using), and other factors as well.

Animation and Skeletal Meshes

Characters in a game are not (typically) static meshes. They are usually what is referred to as a skeletal mesh, which is a mesh that deforms based on a digital skeleton. These meshes are created and their skeleton is added – through a process called ‘rigging’ – within external 3D applications such as 3ds Max and Maya. These meshes are then animated to make them look like they are running, jumping, shooting, doing pelvic thrusts, or any other sequence they happen to need for game interaction.

These meshes are stored in packages, as are their animations. In fact, the animations are actually stored as a separate type of asset, which is extremely useful in that characters with similar skeletal structures can use each other’s animation sequences. This not only simplifies the process of animating the characters, but also frees up precious game resources by requiring that only a smaller number of animation sequences actually be loaded into the game.

The Tools of the Trade

Now you have a general idea of the kinds of things you’re going to need to create your Unreal worlds. The final question is what are you going to need to actually create them? This section will introduce you to a few of the tools you can use to create game assets. As you have seen, many of the assets you’ll need for your project must be created in external software applications.

UnrealEd

If you’re making levels for Unreal, you’re going to need UnrealEd sooner or later. UnrealEd is the central application you will use to create your maps, to populate them with meshes, create your materials, construct your sound effects from sound recordings, and much more. The key thing to remember about UnrealEd is that you can’t really use it to create 3D content, such as static and skeletal meshes, or content such as textures and sound files.

3D Applications

Since Unreal is a 3D gaming engine, you’re going to need three-dimensional assets at some point. This could be characters or other skeletal meshes, as well as static meshes. There are a great number of 3D applications on the market today that are fully capable of creating content for Unreal Engine 3. Two key applications that are frequently used in the industry are 3ds Max and Maya, both developed and distributed by Autodesk. You can find out more about these programs, as well as download evaluation copies, at www.autodesk.com.

Texturing Programs

In order to create the textures that you will need for your materials in the game, you will need some sort of application designed specifically for the task. Common programs include Adobe Photoshop and Corel Painter. Both of these have a wide range of tools for working with photographic reference or textures, as well as for painting your own.

Collada

Since there are so many 3D applications available to users these days, it makes sense that there be one single format that can be used to send information into Unreal Engine 3, thereby allowing users of any application to freely make Unreal game assets without having to switch to different software. Unreal Engine 3 supports the use of the Collada file format, which has plug-in versions for virtually every major 3D application on the market. For more information about Collada, as well as to download the latest version of the plug-in, visit collada.org.

Sound Programs

You can’t record audio in UnrealEd, so you’ll need some other software to capture sound effects and turn them into WAV files, which can then be added into UnrealEd’s SoundCue Editor and turned into a SoundCue asset.

However, if you need a more professional environment to create your sound effects or music, you could consider Sony’s Sound Forge or Adobe Audition as possible software applications to create your effects. Whatever program you choose, make sure that it allows you to edit your waveform so that you can remove any slight pauses at the end of the effect, allowing for easy blending between different effects. Keep in mind, though, that you can edit modulation and volume with UnrealEd’s SoundCue Editor.

Training Sources

Many of the applications you need in order to create assets are very technical, and are not the kind of thing that you can just pick up and immediately start using. As such, many aspiring game artists will require some sort of training to help them along the way. There are many places where one can find 3D application and texturing training. One such location is 3DBuzz.com, which has hundreds of hours of training videos available for free download, as well as professional-level training videos available for purchase.

Reference: Mastering Unreal Technology: Level Design.

Interview: Web Developer/Search Engine Engineer at Sakhr Software

Recently I had two interviews with Sakhr Software: one as a Web Developer and the other as a Search Engine Engineer.

Both interviews went great, and I got accepted as a Search Engine Engineer.

In this post I’ll list the questions I was asked for the Web Developer position.

  • Web Questions:
  1. What’s the difference between stateful and stateless?
  2. How do you make an application stateful?
  3. What’s XML?
  4. How do web browsers render styled XML?
  5. What are the differences between server-side and client-side applications?
  • Object Oriented Programming/Design Patterns Questions:
  1. What’s the difference between Interface and Abstract Class?
  2. What are the types of polymorphism?
  3. What’s function overloading?
  • SQL Questions:
  1. What’s a trigger?
  2. What’s the difference between INNER JOIN and OUTER JOIN?
  3. Which is better: calling a SQL procedure from C#, or writing it in the C# layer?

For the Search Engine Engineer position, they discussed my projects, grades, and C++ skills. The main question was as follows:

Assume that you have two lists of names:

محمد على محمد

عبدالرحمن أحمد

ميرفت أمين

and

Abdelrahman Ahmed

Mohammed Ali Muhamad

Merfat Amin

Abd-El-Rahman Ahmad

Mervat Amyn

Mohamed Aly Mohamud

How would you match the names in the Arabic list with those in the English list?

They asked me to develop a C++ application that solves this problem. You can find the project at this link.

NB: Make sure that you are connected to the internet while running the program, because it uses the Google Translation API.
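
For the curious, here is a minimal sketch of one possible matching approach (this is not the Google-Translation-API-based solution the project actually uses; the pre-transliterated names, the normalization, and the nearest-match criterion are all simplifying assumptions): transliterate each Arabic name into Latin script first, then normalize both sides and pick the closest English entry by edit distance.

#include <algorithm>
#include <cctype>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Normalize a Latin-script name: lowercase and drop separators,
// so "Abd-El-Rahman Ahmad" and "Abdelrahman Ahmed" become comparable.
std::string Normalize(const std::string& name) {
    std::string out;
    for (char c : name) {
        if (std::isalpha(static_cast<unsigned char>(c)))
            out += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    }
    return out;
}

// Classic Levenshtein edit distance (two-row dynamic programming).
std::size_t EditDistance(const std::string& a, const std::string& b) {
    std::vector<std::size_t> prev(b.size() + 1), cur(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = j;
    for (std::size_t i = 1; i <= a.size(); ++i) {
        cur[0] = i;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            std::size_t cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            cur[j] = std::min({prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost});
        }
        std::swap(prev, cur);
    }
    return prev[b.size()];
}

int main() {
    // Assume the Arabic names were already transliterated to Latin script
    // (for example by a translation/transliteration service).
    std::vector<std::string> arabicTransliterated = {
        "Mohamed Ali Mohamed", "Abdelrahman Ahmed", "Mervat Amin"};
    std::vector<std::string> english = {
        "Abdelrahman Ahmed", "Mohammed Ali Muhamad", "Merfat Amin",
        "Abd-El-Rahman Ahmad", "Mervat Amyn", "Mohamed Aly Mohamud"};

    for (const std::string& name : arabicTransliterated) {
        std::size_t best = 0;
        std::size_t bestDist = std::string::npos;
        for (std::size_t i = 0; i < english.size(); ++i) {
            std::size_t d = EditDistance(Normalize(name), Normalize(english[i]));
            if (d < bestDist) { bestDist = d; best = i; }
        }
        std::cout << name << " -> " << english[best] << "\n";
    }
    return 0;
}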

typedef vs. #define

In this post I’ll try to summarize several posts I’ve read about the differences between the two.

typedef:

  • Handled by the compiler itself.
  • An actual definition of a new type (some would say it adds a new alias for an existing type).
  • typedef obeys scoping rules just like variables (see the short scoping sketch after this list).
  • The type defined with a typedef is exactly like its counterpart as far as its type-declaring power is concerned, BUT it cannot be modified like its counterpart. For example, let’s say you define a synonym for the int type with:
typedef int MYINT;
//Now you can declare an int variable either with
int a;
//or
MYINT a;
//But you cannot declare an unsigned int (using the unsigned modifier) with
unsigned MYINT a;
//although
unsigned int a;
//would be perfectly acceptable.
  • typedefs can correctly encode pointer types.
  • Some things can be done with typedef that cannot be done with #define. Examples:
typedef int* int_p1;
int_p1 a, b, c;  // a, b, and c are all int pointers.

#define int_p2 int*
int_p2 a, b, c;  // only the first is a pointer!
typedef int a10[10];
a10 a, b, c; // create three 10-int arrays
typedef int (*func_p) (int);
func_p fp;       // fp is a pointer to a function that
                 // takes an int and returns an int
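
And a quick sketch of the scoping point from the list above (typedef respects scope; a #define, covered next, keeps working anywhere after the line that defines it):

void Foo() {
    typedef unsigned long ulong;  // this alias is visible only inside Foo()
    ulong x = 42;                 // fine here
    (void)x;
}

// ulong y = 1;                   // error: 'ulong' is unknown at file scope

#define ULONG unsigned long       // a #define ignores braces and scope...

void Bar() {
    ULONG z = 42;                 // ...so it works anywhere after its definition
    (void)z;
}

int main() { Foo(); Bar(); return 0; }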

#define:

  • Handled by the preprocessor (a program that runs before the actual compiler).
  • Works like a “replace all” in your editor.
  • #defines are just textual replacements done by the preprocessor. For example:
typedef char *String_t;
#define String_d char *
String_t s1, s2;
String_d s3, s4;

s1, s2, and s3 are all declared as char *, but s4 is declared as a char, which is probably not the intention.

  • A #define stays valid until the end of the file (or until a matching #undef).
  • A #define is just a macro, i.e. it will be processed/expanded by the preprocessor.