Beginning Game Development: Part III - DirectX II
Introduction
Welcome to the third article on beginning game development. In this article we are going to cover some of the more advanced DirectX principles such as transforms, matrices, culling and clipping.
Before we start, I need to cover a couple of items that were brought to my attention via feedback from the readers (thank you everyone for taking the time to do this) and changes not directly related to the items covered in this article.
Code Cleanup
These changes have already been integrated into the code for this article.
- I updated to the June 2005 version of the DirectX SDK. I did have an issue with the DirectX assemblies being added to the Global Assembly Cache (GAC), so if you have any build errors, update the references to: C:\Program Files\Microsoft DirectX 9.0 SDK (June 2005)\Developer Runtime\x86\DirectX for Managed Code.
- In the last article you surrounded the IsWindowed setting for the PresentParameters with a #if DEBUG statement. To enable this option you must define the DEBUG constant in the project settings:
- Right-click the BattleTank2005 project and select Properties.
- Select the Build tab.
- In the General section, check the Define DEBUG constant option.
- Change the color in the device.Clear method call from DarkBlue to Black.
- Add the following code at the very end of the GameEngine constructor to make the form a more standard size.
Visual C#
// force the window to a standard size
// this provides the correct aspect ratio of 1.33
this.Size = new Size(800, 600);
Visual Basic
' force the window to a standard size
' this provides the correct aspect ratio of 1.33
Me.Size = New Size(800, 600)
- Change the Startup position of the form to CenterScreen.
- Right-click the GameEngine form and select View Designer.
- In Properties, change the StartPosition property to CenterScreen.
Now we can actually draw something on the screen and start creating the game.
Drawing the Targeting Crosshairs
In BattleTank 2005 we are sitting inside of a tank and looking through the targeting scope at the world around us. We use the targeting crosshairs to make it easier to hit and destroy the enemy. In real life the crosshairs are fixed in the optics of the cannon, so they are always visible and always in the same spot when you look through the targeting scope, much like a heads-up display (HUD).
In our game we have two choices when drawing the crosshairs.
- Draw them directly onto the center of the screen using screen coordinates.
- Draw them using world coordinates and ensure that the view point is located such that they are visible and centered on the screen.
The tradeoff here is between speed and extensibility. If we choose the first option we do not have to transform the coordinates, since they are already in screen coordinates, which is faster. In effect, we are removing the targeting crosshairs from the model space we are creating. The downside is that if we later change the game to allow the player to leave the tank, the crosshairs would still be visible as they walk around. The second option requires the coordinates to be adjusted and transformed, but provides us with the flexibility to change the game later on.
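For reference, option 1 would typically use pre-transformed vertices, which DirectX draws directly in screen coordinates and never passes through the view or projection transforms. A minimal C# sketch, assuming the CustomVertex.TransformedColored type (we do not use it in this article):

// Hedged sketch of option 1: a pre-transformed vertex positioned in screen space.
// The rhw component is set to 1 because no perspective divide is applied.
CustomVertex.TransformedColored center =
    new CustomVertex.TransformedColored(400f, 300f, 0f, 1f, Color.Green.ToArgb());
// 400, 300 are pixel coordinates on an 800x600 form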
I am going to use option 2 because it is the most flexible and I am not going to waste time trying to optimize the game for speed before knowing that it is slow. When you write a game you will be faced with design choices like this and you have to understand the tradeoffs you are making.
Before we move on, let's define a model that will make it easier to describe and (we hope) understand the terms we are going to cover in this article.
A Model
Imagine an empty room. I determine where in the room the X, Y, Z origin is located. In this example let's say that the front left corner of the room is the origin in this world. This room becomes my World Space. (A real world space is infinite, but for now we can make do with the finite space of the room.)
World Space: An infinite three-dimensional Cartesian space. You place your objects anywhere you want in this world to create the environment you want.
Now I place a chair in the room at a predefined location. Each point on the chair can be accurately described using a set of Cartesian coordinates (X, Y, Z) and optional information about each point such as color (Color) and texture (Tu, Tv). If you think back to the last article you will notice that this makes each point a vertex. Since we have lots of separate vertices, we store all of them in a vertex array. When this array is loaded into memory it becomes a vertex buffer.
Vertex buffer
Vertex buffers are ideally suited for the complex transformations that DirectX needs to perform. DirectX provides a number of predefined vertex types representing the most common vertex formats. These types are defined as structures in the CustomVertex class.
We are going to use the PositionColored vertex for the targeting crosshairs. This vertex provides X, Y, Z properties and a Color property, which is exactly what we need. This vertex type also defines its coordinates in world space rather than screen space, which is what we decided to use.
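For completeness, this is roughly what creating a real VertexBuffer object looks like in Managed DirectX; it is only a sketch under the assumption of a valid device and a filled vertex array. In this article we keep things simple and pass the vertex arrays straight to DrawUserPrimitives instead (C# only):

// Rough sketch (not used in BattleTank 2005): copy a vertex array into a VertexBuffer
// and draw from it. Assumes 'device' is a valid Device and 'verts' holds 7 vertices.
VertexBuffer vb = new VertexBuffer(typeof(CustomVertex.PositionColored), verts.Length,
    device, Usage.WriteOnly, CustomVertex.PositionColored.Format, Pool.Default);
vb.SetData(verts, 0, LockFlags.None);             // copy the array into the buffer
device.SetStreamSource(0, vb, 0);                 // bind the buffer to stream 0
device.VertexFormat = CustomVertex.PositionColored.Format;
device.DrawPrimitives(PrimitiveType.LineStrip, 0, verts.Length - 1);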
Add the following methods to the GameEngine class after the OnPaint method.
Visual C#
private CustomVertex.PositionColored[] CreateCrossHairVertexArrayTop()
{
    CustomVertex.PositionColored[] crossHairs = new CustomVertex.PositionColored[7];
    float zval = 0f;

    // top of targeting crosshairs
    crossHairs[0].Position = new Vector3(-1f, 1f, zval);
    crossHairs[1].Position = new Vector3(-1f, 2f, zval);
    crossHairs[2].Position = new Vector3(0f, 2f, zval);
    crossHairs[3].Position = new Vector3(0f, 3f, zval);
    crossHairs[4].Position = new Vector3(0f, 2f, zval);
    crossHairs[5].Position = new Vector3(1f, 2f, zval);
    crossHairs[6].Position = new Vector3(1f, 1f, zval);

    crossHairs[0].Color = Color.Green.ToArgb();
    crossHairs[1].Color = Color.Green.ToArgb();
    crossHairs[2].Color = Color.Green.ToArgb();
    crossHairs[3].Color = Color.Green.ToArgb();
    crossHairs[4].Color = Color.Green.ToArgb();
    crossHairs[5].Color = Color.Green.ToArgb();
    crossHairs[6].Color = Color.Green.ToArgb();

    return crossHairs;
}

private CustomVertex.PositionColored[] CreateCrossHairVertexArrayBottom()
{
    CustomVertex.PositionColored[] crossHairs = new CustomVertex.PositionColored[7];
    float zval = 0f;

    // bottom of targeting crosshairs
    crossHairs[0].Position = new Vector3(1f, -1f, zval);
    crossHairs[1].Position = new Vector3(1f, -2f, zval);
    crossHairs[2].Position = new Vector3(0f, -2f, zval);
    crossHairs[3].Position = new Vector3(0f, -3f, zval);
    crossHairs[4].Position = new Vector3(0f, -2f, zval);
    crossHairs[5].Position = new Vector3(-1f, -2f, zval);
    crossHairs[6].Position = new Vector3(-1f, -1f, zval);

    crossHairs[0].Color = Color.Green.ToArgb();
    crossHairs[1].Color = Color.Green.ToArgb();
    crossHairs[2].Color = Color.Green.ToArgb();
    crossHairs[3].Color = Color.Green.ToArgb();
    crossHairs[4].Color = Color.Green.ToArgb();
    crossHairs[5].Color = Color.Green.ToArgb();
    crossHairs[6].Color = Color.Green.ToArgb();

    return crossHairs;
}
Visual Basic
Private Function CreateCrossHairVertexArrayTop() As CustomVertex.PositionColored()
    Dim crossHairs(6) As CustomVertex.PositionColored
    Dim zval As Single = 0.0F

    ' top of targeting crosshairs
    crossHairs(0).Position = New Vector3(-1.0F, 1.0F, zval)
    crossHairs(1).Position = New Vector3(-1.0F, 2.0F, zval)
    crossHairs(2).Position = New Vector3(0.0F, 2.0F, zval)
    crossHairs(3).Position = New Vector3(0.0F, 3.0F, zval)
    crossHairs(4).Position = New Vector3(0.0F, 2.0F, zval)
    crossHairs(5).Position = New Vector3(1.0F, 2.0F, zval)
    crossHairs(6).Position = New Vector3(1.0F, 1.0F, zval)

    crossHairs(0).Color = Color.Green.ToArgb()
    crossHairs(1).Color = Color.Green.ToArgb()
    crossHairs(2).Color = Color.Green.ToArgb()
    crossHairs(3).Color = Color.Green.ToArgb()
    crossHairs(4).Color = Color.Green.ToArgb()
    crossHairs(5).Color = Color.Green.ToArgb()
    crossHairs(6).Color = Color.Green.ToArgb()

    Return crossHairs
End Function

Private Function CreateCrossHairVertexArrayBottom() As CustomVertex.PositionColored()
    Dim crossHairs(6) As CustomVertex.PositionColored
    Dim zval As Single = 0.0F

    ' bottom of targeting crosshairs
    crossHairs(0).Position = New Vector3(1.0F, -1.0F, zval)
    crossHairs(1).Position = New Vector3(1.0F, -2.0F, zval)
    crossHairs(2).Position = New Vector3(0.0F, -2.0F, zval)
    crossHairs(3).Position = New Vector3(0.0F, -3.0F, zval)
    crossHairs(4).Position = New Vector3(0.0F, -2.0F, zval)
    crossHairs(5).Position = New Vector3(-1.0F, -2.0F, zval)
    crossHairs(6).Position = New Vector3(-1.0F, -1.0F, zval)

    crossHairs(0).Color = Color.Green.ToArgb()
    crossHairs(1).Color = Color.Green.ToArgb()
    crossHairs(2).Color = Color.Green.ToArgb()
    crossHairs(3).Color = Color.Green.ToArgb()
    crossHairs(4).Color = Color.Green.ToArgb()
    crossHairs(5).Color = Color.Green.ToArgb()
    crossHairs(6).Color = Color.Green.ToArgb()

    Return crossHairs
End Function
Remember that the coordinates for the Position property of each vertex are defined in world space coordinates. We will transform them to screen coordinates in a while. Also note that you must call the ToArgb method to convert Color to the 32-bit integer format required by DirectX.
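As a side note, the repeated Color assignments above can be collapsed into a small helper. This is purely an optional tidy-up; the helper below is my own hypothetical addition and is not part of the article's download (C# only):

// Hypothetical helper: assign a single color to every vertex in an array.
private static void SetColor(CustomVertex.PositionColored[] verts, Color color)
{
    int argb = color.ToArgb();   // DirectX wants the 32-bit integer form of the color
    for (int i = 0; i < verts.Length; i++)
    {
        verts[i].Color = argb;
    }
}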
Before we continue we need to let the device know which type of vertex we chose. We accomplish this by setting the VertexFormat property of the Device class to the Format property of the vertex type we used. This property determines the fixed function pipeline the device will use. Don't worry about exactly what that is right now; you just need to know that we are using the position and colored pipeline.
In the OnPaint method, immediately following the device.Clear method add the following code.
Visual C#
device.VertexFormat = CustomVertex.PositionColored.Format;
Visual Basic
device.VertexFormat = CustomVertex.PositionColored.Format
With the crosshairs defined, we must now tell the device to actually render the object described in the vertex buffer to the screen. This is accomplished using the DrawUserPrimitives method of the Device class. So what are primitives?
Drawing Primitives
Drawing primitives are collections of vertices that define a single three-dimensional object. There are six primitive types in DirectX, listed in the PrimitiveType enumeration; the count arithmetic for each is sketched in the code after this list.
- Line List: Mainly used for adding Heads-up Display (HUD) information to a screen. The primitive count is equal to the number of points divided by two. The number of points must be even for this type to work.
- Line Strip: This has the same uses as the Line List but renders a single continuous line. The primitive count is equal to the number of vertices minus one.
- Point List: Mainly used for rendering individual points, such as particle effects (explosions) or stars in the night sky. The primitive count is equal to the number of points in the vertex buffer.
- Triangle Fan: This is most useful when drawing an oval object.
- Triangle List: This is the most commonly used primitive. The primitive count is the number of vertices divided by three.
- Triangle Strip: These are most useful when rendering rectangular objects.
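To make those primitive counts concrete, here is how each type relates the number of vertices to the number of primitives drawn (a C# sketch; n is simply the length of the vertex array):

int n = vertices.Length;          // number of vertices in the buffer
int pointList     = n;            // one primitive per vertex
int lineList      = n / 2;        // every pair of vertices is one line (n must be even)
int lineStrip     = n - 1;        // each extra vertex extends the strip by one line
int triangleList  = n / 3;        // every three vertices form one triangle
int triangleStrip = n - 2;        // each extra vertex adds one triangle
int triangleFan   = n - 2;        // all triangles share the first vertex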
In our case we choose the LineStrip type to draw the crosshairs to the screen since they are a HUD. In the OnPaint method, immediately following the device.VertexFormat line added in the previous step, add the following code.
Visual C#
device.DrawUserPrimitives(PrimitiveType.LineStrip, 6, CreateCrossHairVertexArrayTop());
device.DrawUserPrimitives(PrimitiveType.LineStrip, 6, CreateCrossHairVertexArrayBottom());
Visual Basic
device.DrawUserPrimitives(PrimitiveType.LineStrip, 6, _
    CreateCrossHairVertexArrayTop())
device.DrawUserPrimitives(PrimitiveType.LineStrip, 6, _
    CreateCrossHairVertexArrayBottom())
The DrawUserPrimitives method requires us to pass in the PrimitiveType, the count of primitives to render, and the source of the vertex data for the object. Since we are using the LineStrip primitive, the seven points in each vertex buffer create six lines.
Up to this point we have not actually drawn anything to the screen, only cleared the device, so we need to tell the device that we are about to draw. This is accomplished with the BeginScene method of the Device class.
Begin Scene/End Scene
As I mentioned in the first article, there are a lot of correlations between movies and a DirectX game. The first term we encountered was the frame. Now we are going to add scenes and, a little later, a camera.
We use the BeginScene and EndScene methods of the device to define the starting and ending points of a scene. The BeginScene method prepares the device for the actions that follow by locking the back buffer. The EndScene method tells the device that we are finished drawing and unlocks the back buffer. You must always call EndScene after calling BeginScene, otherwise the back buffer will remain locked. The BeginScene and EndScene methods work closely together with the Present method to manage the back buffer; if one of these method calls fails, the others will fail also.
In the OnPaint method of the GameEngine class, add calls to BeginScene and EndScene. The call to BeginScene needs to occur immediately after the Clear method call.
Visual C#
device.Clear ( ClearFlags.Target, Color.Black, 1.0f, 0 );
// Tell DirectX we are about to draw something
device.BeginScene ( );
Visual Basic
device.Clear(ClearFlags.Target, Color.Black, 1.0F, 0)
' Tell DirectX we are about to draw something
device.BeginScene()
And the EndScene method needs to be called immediately before the Present method call.
Visual C#
// Tell DirectX that we are done drawing
device.EndScene();

// Flip the back buffer to the front
device.Present();
Visual Basic
' Tell DirectX that we are done drawing
device.EndScene()

' Flip the back buffer to the front
device.Present()
Now we are almost done. The last step remaining is to convert the world coordinates of each three-dimensional object into screen coordinates.
In the last article we covered a lot of the DirectX terminology you need to understand. We are almost done with new terms, but before we can successfully render a 3-D world to the screen we need to cover one last set of definitions.
Lights, Camera, Action
In our model we have described each point on the chair using Cartesian coordinates and a color value and stored these points in a vertex buffer. But DirectX still can't render the chair for me. Why? The missing pieces of information are our location in the room and the direction we are looking. We also need to determine how to handle the projection. Only with this information can the 3-D world be converted into the 2-D picture displayed on the screen.
In DirectX our (the viewer's) location is called the camera location and it is defined by the View Matrix. From now on consider the terms view and camera to be synonymous. The projection is the way distance is applied to the objects and can be compared to the lens setting of the camera.
Cameras are very powerful tools in creating cool 3-D games. You can attach the camera to a moving object to get the feeling of actually being inside, or offset it slightly to the rear to get a chase view. You can also set up multiple static cameras in your world and view the action by switching from camera to camera.
All of the computations involved in converting the world coordinates to screen coordinates are performed using a matrix. These computations are called transforms. Almost all of the heavy lifting performed in DirectX is the transformation of coordinates using matrices.
Transforms (http://en.wikipedia.org/wiki/Transformation_%28mathematics%29): Transformations change the coordinates of three-dimensional objects based on the view, projection type and world transform specified. These transformations are accomplished via a set of 4x4 matrices.
Matrix (http://en.wikipedia.org/wiki/Matrix_%28mathematics%29): A rectangular array of numbers that makes these transformations very efficient.
You could spend a lot of time understanding the exact details of transformations with matrices. Understanding how they work is important, but you will not have to perform any manual calculations on your own, so we can skip the details for now. The Matrix class contains a number of the most common methods for manipulating matrices.
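To give a feel for the Matrix class without diving into the math, here is a short C# sketch of the kind of helpers it provides. The values are arbitrary; multiplying matrices combines the transformations so that they apply in sequence (here: scale, then rotate, then translate).

// A few of the common Matrix helpers in Managed DirectX (values are arbitrary).
Matrix move   = Matrix.Translation(5f, 0f, 0f);          // shift 5 units along X
Matrix spin   = Matrix.RotationY((float)Math.PI / 2);    // rotate 90 degrees about Y
Matrix shrink = Matrix.Scaling(0.5f, 0.5f, 0.5f);        // halve the size
Matrix combined = Matrix.Multiply(shrink, Matrix.Multiply(spin, move));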
The first step is placing and orienting the camera in our three-dimensional world.
View Transform
The view matrix defines the location of the camera and the orientation of the camera (by specifying a target). The view matrix also includes a value to determine which direction is considered to be up in our world. This is almost always the Y axis. You can either define your own matrix or use the built-in Matrix.LookAtLH and Matrix.LookAtRH methods. Since we are using the left-handed coordinate system we naturally use the LookAtLH method (LH stands for Left Handed). If you do not explicitly define a view, DirectX uses a default view.
In the OnPaint method of the GameEngine class, add the following code immediately after the device.Clear method call.
Visual C#
device.Transform.View = Matrix.LookAtLH(new Vector3 (0, 0, 5f), new Vector3(0, 0, 0), new Vector3(0, 1, 0));
Visual Basic
device.Transform.View = Matrix.LookAtLH(New Vector3(0, 0, 5.0F), _ New Vector3(0, 0, 0), New Vector3(0, 1, 0))
In BattleTank 2005 we place the camera at X = 0, Y = 0 and move it out to 5 on the Z axis. We then point the camera at the origin and identify the Y axis as up by passing (0, 1, 0) as the up vector.
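If you want to see the effect of the camera position, try pulling it further back and lifting it slightly; the crosshairs will appear smaller and viewed at a slight angle. This is just a throw-away experiment of mine, not part of the game code (C# only):

// Experiment: move the camera up and back, still looking at the origin with Y as up.
device.Transform.View = Matrix.LookAtLH(new Vector3(0f, 2f, 10f),
    new Vector3(0f, 0f, 0f), new Vector3(0f, 1f, 0f));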
After placing and orienting the camera we need to define the projection we are going to use.
Projection Transform
In DirectX there are two types of projections to choose from:
- Perspective (http://en.wikipedia.org/wiki/Perspective)
- Orthogonal (http://en.wikipedia.org/wiki/Orthogonal_projection)
The Perspective projection is the most commonly used projection and is how humans see the world. In this projection objects appear smaller the further away from us they are, and parallel lines appear to converge in the distance (the way a road appears to converge toward the horizon).
Orthogonal projection, on the other hand, ignores distance (the Z value) so items retain their size regardless of their distance from the camera.
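For comparison, an orthogonal projection would be set up with Matrix.OrthoLH, giving it the width and height of the view volume instead of a field of view. A sketch only; we do not use this in BattleTank 2005 (C# only):

// Orthogonal projection sketch: a 10 x 7.5 unit view volume, clipped between 1 and 100.
// Objects keep the same size on screen no matter how far away they are.
device.Transform.Projection = Matrix.OrthoLH(10f, 7.5f, 1.0f, 100.0f);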
We are going to use the Perspective projection in BattleTank 2005. The projection determines how the vertices of the objects in the viewing frustum (http://en.wikipedia.org/wiki/Viewing_frustum) are transformed. The viewing frustum defines the three-dimensional space in which objects are visible to the camera. You can visualize this as a pyramid with the top cut off. The base of the pyramid is the far plane, the top is the near plane, and the field of view is the angle at the apex. To perform this transformation we need to know four pieces of information.
- Field of View (FoV): This is typically 45 degrees or ¼ of π (Math.PI / 4). Reducing the FoV value is like zooming in on the scene, while a larger FoV value is like zooming out. A value greater than 45 degrees is like looking at the scene through a fisheye lens.
- Aspect Ratio: This is identical to the aspect ratio of your TV or monitor and is always calculated as the viewport Width/Height. The viewport is nothing other than the form onto which we are rendering the game. The standard aspect ratio for computer screens is 1.33 (640x480, 800x600, or 1024x768).
- Near Clipping Plane: Objects closer to the camera than this plane are not rendered.
- Far Clipping Plane: Objects beyond this plane are not rendered.

Radians (http://en.wikipedia.org/wiki/Radian): In DirectX most angles are expressed as radians rather than degrees. To convert from degrees to radians, simply multiply the degree value by π/180. The Managed DirectX library also contains helper functions in the Geometry class to perform the conversion for you, as sketched below.
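A quick C# sketch of both conversion routes (Geometry is assumed here to be the Managed DirectX helper class mentioned above):

// Converting 45 degrees to radians, by hand and with the SDK helper.
float byHand   = 45f * (float)Math.PI / 180f;     // manual conversion
float byHelper = Geometry.DegreeToRadian(45f);    // helper from the Managed DirectX library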
Using these values and some fancy math, DirectX transforms the vertices of each object from world coordinates to screen coordinates. To a certain extent, this transformation is what we do in our heads when we draw a picture of a three-dimensional space. We draw distant objects smaller, even though the actual object has not really changed its size, and we distort objects further away.
In the OnPaint method of the GameEngine class, add the following code immediately after the device.Transform.View method call added in the previous step.
Visual C#
device.Transform.Projection = Matrix.PerspectiveFovLH((float)Math.PI / 4,
    (float)this.Width / (float)this.Height, 1.0f, 100.0f);
Visual Basic
device.Transform.Projection = Matrix.PerspectiveFovLH(CSng(Math.PI) / 4, _
    CSng(Me.Width) / CSng(Me.Height), 1.0F, 100.0F)
For BattleTank 2005 we used the traditional FoV setting of 45 degrees. Next we set the aspect ratio based on the form's width and height, and then define the viewing frustum as extending from 1 to 100 in our world coordinate space. The values for the clipping planes define the size of our visible world and can represent anything we want. One unit equals 10 meters in BattleTank 2005, so we have a playing field 1 kilometer deep.
At this point you have to use some common sense when defining the viewing frustum. We could easily declare that each unit equals one kilometer, making the viewable area 100 kilometers deep. However, if we make the tanks regular size, you will never be able to see them beyond a couple of kilometers, so why waste resources rendering them to the screen if the player cannot see them?
While the View and Projection Matrices describe the camera and camera lens, the world transformation matrix converts the model space coordinates into world space coordinates. These world space coordinates are then converted to screen coordinates in the View and Projection transforms.
World Transform
The last transform is a world transform. This transforms the object we are rendering from model space into world space.
In a world transform you can move, rotate and scale each object. This transform applies to all objects drawn after setting the transform, until a new transform is specified.
Model Space: The coordinate space in which an object is defined; each vertex is specified relative to the model itself rather than to the world.
In BattleTank 2005 we are not going to use the World transform at this time, but I have included a transform in the code for this article that uses a test cube for you to experiment with (a minimal sketch follows below). I suggest you play with changing the values for all three transforms and see how the results look on the screen. That is the easiest way to really understand what the various settings do. See the Experimenting section for details on how to do this.
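If you want to try a world transform yourself, a minimal sketch looks like the following. The angle calculation is my own illustration, not the exact code used for the test cube in the download (C# only):

// World transform sketch: spin whatever is drawn next around the Y axis,
// then push it 2 units further into the scene.
float angle = Environment.TickCount / 1000.0f;   // rough elapsed seconds since startup
device.Transform.World = Matrix.Multiply(Matrix.RotationY(angle),
    Matrix.Translation(0f, 0f, 2f));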
Lights
Whenever we draw a primitive using world coordinates, DirectX uses lighting to determine the color of each pixel. Since we have not yet defined a light source, we can simply turn lighting off for now. If we leave lighting on without defining any lights, DirectX assumes no light is shining on the scene and renders every pixel as black.
In the OnPaint method of the GameEngine class, add the following code immediately after the device.Transform.Projection method call added in the previous step.
Visual C#
// turn off the light source
device.RenderState.Lighting = false;
Visual Basic
' turn off the light source
device.RenderState.Lighting = False
In addition to controlling the lighting, the RenderState also allows us to control the culling behavior.
Culling
Culling is an operation that eliminates entire objects that fall outside of the viewing frustum from the scene (unlike clipping, which removes only portions of an object), reducing the total set of objects to render. The overall goal is, of course, speed; by eliminating the nonessential objects the scene can be rendered faster.
Clipping: Clipping discards portions of any single object that fall outside of the viewing frustum. Clipping is automatically managed by DirectX and requires no further intervention.
When drawing a three-dimensional object, DirectX also does not render the primitives (triangles) that make up the faces of an object pointing away from the camera. This is called back face culling.
DirectX determines which side of an object is facing the camera by using the order (winding) of the vertices. Whichever winding you choose (clockwise or counterclockwise), triangles wound the opposite way are treated as back faces and are culled. The default mode is counterclockwise culling, so you need to make sure to define your vertices in clockwise order.
The culling options are set in the RenderState by assigning one of the Cull enumeration settings to the device.RenderState.CullMode property.
In the OnPaint method of the GameEngine class, add the following code immediately after the device.RenderState.Lighting code added in the previous step.
Visual C#
// Turn off backface culling
device.RenderState.CullMode = Cull.None;
Visual Basic
' Turn off backface culling
device.RenderState.CullMode = Cull.None
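Putting all of the pieces from this article together, the body of OnPaint now looks roughly like the sketch below. The exact layout in the accompanying download may differ slightly, but the drawing calls must always sit between BeginScene and EndScene, and the vertex format must be set before the primitives that use it are drawn (C# only):

protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
{
    device.Clear(ClearFlags.Target, Color.Black, 1.0f, 0);

    // camera, lens, and render states
    device.Transform.View = Matrix.LookAtLH(new Vector3(0, 0, 5f),
        new Vector3(0, 0, 0), new Vector3(0, 1, 0));
    device.Transform.Projection = Matrix.PerspectiveFovLH((float)Math.PI / 4,
        (float)this.Width / (float)this.Height, 1.0f, 100.0f);
    device.RenderState.Lighting = false;
    device.RenderState.CullMode = Cull.None;

    // Tell DirectX we are about to draw something
    device.BeginScene();

    device.VertexFormat = CustomVertex.PositionColored.Format;
    device.DrawUserPrimitives(PrimitiveType.LineStrip, 6, CreateCrossHairVertexArrayTop());
    device.DrawUserPrimitives(PrimitiveType.LineStrip, 6, CreateCrossHairVertexArrayBottom());

    // Tell DirectX that we are done drawing
    device.EndScene();

    // Flip the back buffer to the front
    device.Present();

    // request the next frame, keeping the game loop running
    this.Invalidate();
}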
In the code that accompanies this article I have added a couple of extra items to make it easier for you to experiment with the various settings we just covered.
Experimenting
To remove the HUD display, comment out the following lines: 122 and 123 (in VB.NET the lines are 92 and 93).
You can also display a rotating cube by un-commenting lines 98, 101, 120 and commenting line 95 (VB.NET: 68, 71, 87, 90 and 65). You can change any of the values in the models or adjust the View, Projection and World transforms to see the effect of your changes.
Line Numbers: To turn on line numbers, go to Tools | Options | Text Editor | All Languages and check the Line Numbers box in the Display group. If you don't see the full options tree, just check the Show all Settings option in the lower left of the Options dialog.
Summary
In these first three articles we have covered a lot of ground. The steps in the upcoming articles build upon the foundation laid in these first three articles. At this point you should know how to create a device and hook it to a Windows form and create a game loop using the OnPaint and Invalidate methods. You should be able to create and draw your own three-dimensional objects using vertex buffers and the DrawUserPrimitives method, set up a camera, and transform the models into world space.
If something is unclear at this time, you should review the links provided in the articles, the DirectX SDK, or go to one of the resources listed on the web. Once you have fully understood these principles, the rest of the game development process will be much easier.
With this knowledge you are ready to perform all but the most advanced operations in DirectX. The only steps required to add more objects to our world in BattleTank 2005 are to define them in model space, store them in a vertex buffer, and render them to the screen. As you can imagine, complex objects consist of thousands of separate vertices, and defining them all in code is almost impossible, very error prone and extremely boring. So in the next article I will cover how to accomplish this task more efficiently. In the next article we are also going to finish off the last graphics pieces and then focus on controlling the tank using the classes in the DirectInput namespace.