Video Games Part 2™
Jordan Parker
Special thanks to Kevin Egan
Game Genres
• Action
• Adventure
• Educational
• Strategy: real-time (RTS) and turn-based
• RPG: CRPG, MMORPG
• Puzzle
• Shooter: First Person Shooter (FPS)
• Sports
• Platformer
• Racing
• Simulation
• Fighting
• The Action Hybrids: Action/Adventure, Action/RPG
What is a game engine?
• The core software component (car analogy)
• The game engine…
  • Takes input
  • Computes physics (collisions, projectiles, etc.)
  • Plays sounds
  • Simulates AI
  • Draws stuff!
• A good game engine is independent of the game type
• Writing a good game engine takes years, so most developers license engines
  • Current: Quake III, Unreal Tournament
  • Future: Source, Doom III, Unreal Warfare
  • Ever notice that a lot of games look alike?
• Well, what's a game then?
  • Gameplay
  • Levels, music, artwork, story, etc.
Basic Game Loop

    while (!gameover) {
        get input
        process input
        simulate physics and AI
        redraw
    }

• As CS123 graduates, you can all do this (a fleshed-out C++ sketch follows below)
  • Sceneview: level file format and (bad) scene data structure
  • Camtrans: takes care of the camera and the math
  • Modeler: user manipulations
  • Intersect: basics of collision
• You can hack together an FPS without too much effort
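Here is a minimal C++ sketch of that loop with a fixed physics timestep; the Input, World, and Renderer types are hypothetical stand-ins for your engine's subsystems, not any real API.

    #include <chrono>

    // Hypothetical stand-ins for the engine's subsystems.
    struct Input    { void poll() {} bool quitRequested() const { return false; } };
    struct World    { void step(double dt) {} };   // physics + AI
    struct Renderer { void draw(const World&) {} };

    int main() {
        Input input; World world; Renderer renderer;
        using clock = std::chrono::steady_clock;

        const double dt = 1.0 / 60.0;   // fixed physics timestep
        double accumulator = 0.0;
        auto previous = clock::now();

        bool gameover = false;
        while (!gameover) {
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - previous).count();
            previous = now;

            input.poll();                       // get input
            gameover = input.quitRequested();   // process input

            while (accumulator >= dt) {         // simulate physics and AI
                world.step(dt);
                accumulator -= dt;
            }
            renderer.draw(world);               // redraw
        }
    }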
Sound
• Stereo doesn't cut it anymore: you need positional audio
• Positional audio increases immersion
• The Old: vary volume as position changes
• The New: Head-Related Transfer Functions (HRTF) for 3d positional audio with 2-4 speakers
• Games use:
  • Dolby 5.1: requires lots of speakers
  • Creative's EAX: "environmental audio"
  • Aureal's A3D: probably the best positional audio
  • DirectSound3D: Microsoft's answer
  • OpenAL: open, cross-platform API
Physics
• Sick of seeing the same death animation every time? Enter "rag doll physics."
• Real physics is computationally expensive, so lots of approximations (like bounding boxes) are made
  • Realism/performance tradeoff
  • Most explosions and projectiles are scripted, not simulated
• "Good" physics engines are now a requirement for a game engine
  • Unreal and Source use HAVOK
  • Doom III rolled its own and will even have per-polygon hit detection!
• Realistic physics != good game (Trespasser)
• Can use a spatial data structure to speed up collision tests as well
• Particle systems
• Demo: Stair and Truck Dismount
AI
• And you thought graphics was hacky!
• The Old: simple FSMs, Crash 'n' Turn
• The New: glorified FSMs (scripting, waypoints, goals…), A*
• Creating an AI that will kick your butt is child's play. Believable AI is very difficult.
• One of the easiest ways to get into the industry
  • Download an SDK, make a bot, get hired
• AI continues to get better as people "read the literature" and the GPU lightens the load on the CPU
Black & White (2001)
• The most advanced AI seen in a video game to date
• God game where you have to train your avatar to behave
• Unfortunately it's not that great of a game…
Networking
• Packets will get lost. The internet is unpredictable.
  • Use non-blocking I/O and multithreading
• TCP/IP is too slow!
  • Too much error correction
  • Waits for an ACK before sending the next packet
  • Almost all games use UDP instead
• Since we're going to use UDP, it's up to us to do the error correction. No one really had a good solution until…
• John Carmack brought fast-paced action gaming to modems with QuakeWorld (1996) and set up the standard client/server model still used today
The Client/Server Model for Games
• The server simulates the game at X Hz and holds the authoritative state of the game. It sends packets to the clients.
• The multiple clients are all playing the game at different speeds (> X, usually) and sending information to the server
• Problem: the client is updating at a different rate than the server
  • Solution: interpolation! (see the sketch below)
• Problem: what if there is a hiccup and we miss a packet from the server?
  • Solution: extrapolation and reconstruction from delta compression
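A minimal sketch of client-side snapshot interpolation, assuming the server sends timestamped position snapshots; the Snapshot type is illustrative of the QuakeWorld-style model, not code from any engine.

    // Hypothetical server snapshot: an entity position at a server timestamp.
    struct Snapshot { double time; float x, y, z; };

    // Render the entity between the two snapshots that bracket renderTime,
    // so client frames at any rate stay smooth regardless of the server's X Hz.
    Snapshot interpolate(const Snapshot& a, const Snapshot& b, double renderTime) {
        double t = (renderTime - a.time) / (b.time - a.time);  // 0..1 between snapshots
        if (t < 0.0) t = 0.0;
        if (t > 1.0) t = 1.0;   // clamp; beyond b we'd be extrapolating instead
        Snapshot s;
        s.time = renderTime;
        s.x = a.x + static_cast<float>(t) * (b.x - a.x);
        s.y = a.y + static_cast<float>(t) * (b.y - a.y);
        s.z = a.z + static_cast<float>(t) * (b.z - a.z);
        return s;
    }

Clients typically render a little behind the newest snapshot, so a late or lost packet still leaves two snapshots to blend between.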
I thought this was a graphics class?
• Cheating is "avoided" by having the server hold the only real state of the system and make all of the big decisions (i.e. deaths aren't extrapolated)
• If you thought this was hard, what about MMORPGs that must support a few thousand players at once on the same server?
  • Asheron's Call has the most impressive networking system: all servers are the same world. Yikes!
• Fun fact: Turbine was founded by Brown graduates, and Andy was chairman of the board for a time!
Single Instruction Multiple Data (SIMD)
• The real way vector algebra is done
• Exploit data parallelism
• Cram vectors (four 32-bit floats) into 128-bit registers and do things in one instruction instead of four (see the sketch below)
• x86: MMX (ints), SSE (floats), SSE2 (doubles)
• PPC: AltiVec
• The PS2's EE has two VPUs, VU0 and VU1
  • Btw, the PSone and PS2 are MIPS-based. CS31 comes in handy after all!
• Xbox has a PIII => SSE
• GameCube has a PPC but doesn't have full AltiVec
• Hardware T&L does this too
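A minimal SSE sketch using compiler intrinsics (a shipping engine might drop to raw assembly): one _mm_add_ps adds all four float lanes at once.

    #include <xmmintrin.h>   // SSE intrinsics
    #include <cstdio>

    int main() {
        // Two 4-float vectors packed into 128-bit SSE registers.
        alignas(16) float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        alignas(16) float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        alignas(16) float r[4];

        __m128 va = _mm_load_ps(a);      // load 4 floats at once
        __m128 vb = _mm_load_ps(b);
        __m128 vr = _mm_add_ps(va, vb);  // one instruction adds all 4 lanes
        _mm_store_ps(r, vr);

        std::printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);  // 11 22 33 44
    }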
Camera
• When games made the transition from 2d to 3d, a good camera system became essential
• Golden rule: the camera should not interfere with gameplay. Your camera system should be defined by the gameplay you're trying to achieve.
• A good camera is very hard to achieve
  • Some control must be given to the user, but how much?
• Three main techniques:
  • First person
  • Third person: fixed, free, or follow
  • "Isometric"
• Personally, I'd like to see a second person camera system
First Person
• Popularized by id Software
• Dominates PC gaming
• Good for:
  • FPS
  • Immersive storytelling
• Bad for:
  • Platformers* (%^!&ing jumping puzzles!)
  • Strategy games
• *see the next slide
Excellence in First Person
• Metroid Prime is an exception to the platformer rule
• Transitions to third person fixed or follow when in ball mode
Third Person
• Most prevalent choice, especially on consoles
• Fixed: the camera doesn't move. Ever.
  • Resident Evil series, Final Fantasy VII-IX
• Follow: the camera does move to follow the player, but camera movement is very restricted
  • Sports games, Crash Bandicoot, Panzer Dragoon, Devil May Cry
  • Most common of the three types
• Free: the camera doesn't always follow in a linear manner. This flavor gives the user the most control over the camera.
  • Super Mario 64, MMORPGs
  • Hardest mode to get right
• More restriction is good for the developers, but bad for the player
• Selection/targeting is difficult
Excellence in Third Person
• Super Mario 64 had the first legitimate third person camera system in a 3d environment. Games are still copying it today.
• Further refined in The Legend of Zelda: Ocarina of Time
  • Read about it in Game Programming Gems II
• And yet somehow they screwed it up in Super Mario Sunshine…
"Isometric"
• Video games have been using isometric projection for ages
  • It all started in 1982 with Q*Bert and Zaxxon, which were made possible by advances in raster graphics hardware
• Still in use today when you want to see things in the distance as well as things close up (e.g. strategy and simulation games)
• Technically, most games today aren't isometric but axonometric; people still call them isometric to avoid learning a new word
• Other inappropriate terms for axonometric views are "2.5D" and "three-quarter"
Issue: Clipping Planes
• Have you ever played a video game and all of a sudden some object pops up in the background (e.g. a tree in a racing game)? That's the object coming inside the far clip plane.
• The old hack to keep you from noticing the pop-up is to add fog in the distance
  • A classic example of this is Turok: Dinosaur Hunter
  • Now all you notice is fog and how little you can actually see. This practically defeats the purpose of an outdoor environment! And you can still see pop-up from time to time.
• Thanks to fast hardware and level-of-detail algorithms, we can push the far plane back now, and fog is much less prevalent
  • However, this hurts Z precision and can lead to Z-fighting
  • Pushing the near clip plane out a bit can help recover some precision
Issue: Orientation
• Using Euler angles (pitch, yaw, spin) can lead to gimbal lock and poor interpolation
• Quaternions solve the problem
  • 4d complex numbers: a 3d vector plus a scalar component
  • Spherical linear interpolation (slerp!); see the sketch below
  • The axis-angle representation is equivalent
  • I covered this in a section, so I won't go over it all again
• Scripting camera paths is also complicated
  • The path is typically a spline, and the camera orientation is interpolated along the path
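Since slerp came up, a minimal C++ sketch for unit quaternions (w is the scalar part); the Quat type is illustrative.

    #include <cmath>

    struct Quat { float w, x, y, z; };  // unit quaternion: scalar w, vector (x, y, z)

    // Spherical linear interpolation between unit quaternions a and b, t in [0, 1].
    Quat slerp(Quat a, Quat b, float t) {
        float d = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;  // cosine of the angle between them
        if (d < 0.0f) {          // take the shorter arc: q and -q are the same rotation
            d = -d;
            b.w = -b.w; b.x = -b.x; b.y = -b.y; b.z = -b.z;
        }
        float ka, kb;
        if (d > 0.9995f) {       // nearly parallel: fall back to lerp to avoid dividing by ~0
            ka = 1.0f - t; kb = t;
        } else {
            float theta = std::acos(d);
            float s = std::sin(theta);
            ka = std::sin((1.0f - t) * theta) / s;
            kb = std::sin(t * theta) / s;
        }
        return { ka*a.w + kb*b.w, ka*a.x + kb*b.x, ka*a.y + kb*b.y, ka*a.z + kb*b.z };
    }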
Texturing
• Nothing is more important than texture performance and quality. Textures are used for absolutely everything.
  • Fake shading
  • Fake detail
  • Fake effects
  • Fake geometry
• Geometry is expensive: you gotta store it, transform it, light it, clip it… bah!
• Use them in ways they aren't supposed to be used
  • An image is just an array, after all
• If it weren't for textures, we'd be stuck with big Gouraud-shaded polys!
• Quick hardware texture review
  • Interpolation is linear in 1/z (see the sketch below)
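That "linear in 1/z" line deserves a tiny example: interpolating u directly across a span in screen space is wrong; hardware interpolates u/w and 1/w linearly and divides per pixel. A sketch, with illustrative names:

    // Perspective-correct interpolation of a texture coordinate u between two
    // projected vertices with clip-space w values w0 and w1, at screen-space t.
    // u/w and 1/w interpolate linearly in screen space; u itself does not.
    float perspectiveCorrectU(float u0, float w0, float u1, float w1, float t) {
        float uOverW   = (1.0f - t) * (u0 / w0) + t * (u1 / w1);
        float oneOverW = (1.0f - t) * (1.0f / w0) + t * (1.0f / w1);
        return uOverW / oneOverW;   // recover u at this pixel
    }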
Multipass Rendering
• In 123, everything we've done has been in one pass, but in reality you won't get anywhere with that
• Multipass rendering gives you flexibility and better realism
• An early version of Quake 3 did this:
  • (1-4: accumulate bump map)
  • 5: diffuse lighting
  • 6: base texture
  • (7: specular lighting)
  • (8: emissive lighting)
  • (9: volumetric effects)
  • (10: screen flashes)
• Multitexturing is the most important part of multipass rendering (remember all of those texture regs?)
Billboards
• A billboard is a flat object that faces something
• There are lots of different billboarding methods, but we'll stick with the easiest, most used one
  • Take a quad and slap a texture on it. Now we want it to face the camera. How do we do that? (Hint: you just did it in Modeler; see the sketch below)
• Bread and butter of older 3d games and still used extensively today
  • Monsters (think Doom)
  • Items
  • Impostors (LOD)
  • Text
  • HUDs (sometimes)
  • Faked smoke, fire, explosions, particle effects, halos, etc.
  • #*$&ing lens flares
• Bad news: little to no shading
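One common answer, sketched below: build the quad from the camera's right and up vectors (the rotation rows of the view matrix), so it stays parallel to the image plane. The Vec3 type and function names are illustrative.

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
        Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    };

    // Screen-aligned billboard: center and half-size in world space, plus the
    // camera's right and up vectors. The resulting quad always faces the image
    // plane, so the texture never appears edge-on.
    void billboardCorners(Vec3 center, float halfSize,
                          Vec3 camRight, Vec3 camUp, Vec3 out[4]) {
        Vec3 r = camRight * halfSize;
        Vec3 u = camUp * halfSize;
        out[0] = center - r - u;   // bottom-left
        out[1] = center + r - u;   // bottom-right
        out[2] = center + r + u;   // top-right
        out[3] = center - r + u;   // top-left
    }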
Aliasing when scaling up
• Bilinear filtering (a.k.a. bilinear interpolation)
• Interpolate horizontally by the fractional part of u, then vertically interpolate the horizontal results by the fractional part of v:

    x = floor(u), a = u - x
    y = floor(v), b = v - y
    T(u,v) = (1 - a)[(1 - b)T(x, y) + bT(x, y + 1)] + a[(1 - b)T(x + 1, y) + bT(x + 1, y + 1)]
           = (1 - a)(1 - b)T(x, y) + a(1 - b)T(x + 1, y) + (1 - a)bT(x, y + 1) + abT(x + 1, y + 1)

• This is essentially what you did in Filter when scaling up (a code sketch follows below)
• Hardware can do this almost for free, and I can't think of a card that doesn't do it by default
• Not so free in a software engine
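The formula above, transcribed directly into C++ for a single-channel texture; the Texture struct and its clamp-to-edge addressing are illustrative.

    #include <cmath>

    // Illustrative texture: w x h texels, one float channel, clamp-to-edge.
    struct Texture {
        int w, h;
        const float* data;
        float texel(int x, int y) const {
            if (x < 0) x = 0;
            if (x >= w) x = w - 1;
            if (y < 0) y = 0;
            if (y >= h) y = h - 1;
            return data[y * w + x];
        }
    };

    // Bilinear sample at continuous coordinates (u, v), matching the formula above.
    float sampleBilinear(const Texture& t, float u, float v) {
        int x = static_cast<int>(std::floor(u));
        int y = static_cast<int>(std::floor(v));
        float a = u - x;   // fractional parts
        float b = v - y;
        return (1 - a) * (1 - b) * t.texel(x,     y)
             +      a  * (1 - b) * t.texel(x + 1, y)
             + (1 - a) *      b  * t.texel(x,     y + 1)
             +      a  *      b  * t.texel(x + 1, y + 1);
    }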
Mipmapping
• Mip = multum in parvo (many in a small place)
• Solves the aliasing problem when scaling down
• It's possible for more than one texel to cover the area of a pixel (edges of objects, objects in the distance…). We could find all texels that fall under that pixel and blend them, but that's too much work.
• This problem causes temporal aliasing
• Will bilinear filtering help? Will it solve the problem?
• Solution: more samples per pixel, or lower the frequency of the texture
• Mipmapping solves the problem by taking the latter approach
  • Doing this in real time is too much work, so we'll precompute
  • Take the original texture and reduce the area by 0.25 until we reach a 1 x 1 texture (see the sketch below)
  • Use a good filter and gamma correction when scaling
  • If we use a Gaussian filter, this is called a Gaussian pyramid
• "Predict" how bad the aliasing is to determine which mipmap level to use
• How much more memory are we using?
• Can potentially increase texture performance (Lars story)
• Cards do mipmapping and bilinear filtering by default. A good deal of console games don't do mipmapping. Why?
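A minimal sketch of building one mip level from the one above with a 2x2 box average, assuming power-of-two dimensions and one float channel; as the slide says, a real pipeline would use a better filter and gamma correction.

    #include <vector>

    // Downsample a (w x h) single-channel image to (w/2 x h/2) by averaging
    // each 2x2 block. Repeat until 1x1 to build the full mip chain.
    std::vector<float> nextMipLevel(const std::vector<float>& src, int w, int h) {
        int nw = w / 2, nh = h / 2;               // assumes w, h are powers of two
        std::vector<float> dst(nw * nh);
        for (int y = 0; y < nh; ++y)
            for (int x = 0; x < nw; ++x)
                dst[y * nw + x] = 0.25f * (src[(2*y)     * w + 2*x]
                                         + src[(2*y)     * w + 2*x + 1]
                                         + src[(2*y + 1) * w + 2*x]
                                         + src[(2*y + 1) * w + 2*x + 1]);
        return dst;
    }

Each level has a quarter the area of the previous one, so the whole chain (1/4 + 1/16 + …) costs only one third more memory than the base texture.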
Problem Solved…
We're good. A little too good.
• We got rid of aliasing, but now everything is too blurry! Let's take more samples.
• Take bilinear samples from the mipmap levels above and below the ideal level of detail and interpolate between them => trilinear filtering
• Trilinear filtering makes it look a little better, but we're still missing something…
• If we're going to take even more samples, we'd better take them correctly
• Key observation: suppose we take a pixel and backwards-map it onto a texture. Is the pixel always a nice little square* with sides parallel to the texture's edges? NO!
• Bilinear and trilinear filtering are isotropic because they sample the same distance in all directions. Now we're going to sample more where it is actually needed.
• *Of course, a pixel is NOT a tiny little square. But let's suppose it is…
Anisotropic Filtering
• Anisotropic = not isotropic (surprise). Also called aniso or AF for short.
• There are a couple of aniso algorithms that don't use mipmapping, but our cards already do mipmapping really well, so we'll build off of that
• When the pixel is backwards-mapped, the longest side of the quad determines the line of anisotropy, and we take a hojillion samples along that line across mipmaps
• Aniso and FSAA are the two big features of today's modern cards
  • ATI and NVIDIA have different algorithms that they guard secretively and continue to improve/screw up
• We could be taking up to 128 samples per pixel! This takes serious bandwidth, orders of magnitude more than bilinear (4 samples) or trilinear (8 samples) filtering.
• Pictures!
Aniso Rules (1/3) richleader.com
Aniso Rules (2/3) Serious Sam extremetech.com
Aniso Rules (3/3) Serious Sam extremetech.com
Texture Generation
• Who needs artists?
• Procedural texturing
  • Use a random procedure to generate the colors
  • Perlin noise (not just for color)
  • Good for wood, marble, water, fire…
  • Unreal Tournament did it quite a bit
• Texture synthesis
  • No games use this, to my knowledge
  • Efros & Leung and Ashikhmin use neighborhood statistics
  • Cohen (SIGGRAPH 2003) has a much faster tile-based method
Light Mapping
• We'd like soft lighting and soft shadows, but the fixed-function pipeline won't let us have our way. Plus, real lighting is slow once we involve multiple lights. Hmm…
• Most of our world geometry is static
• We can blend multiple textures together in multiple passes (multitexturing)
• Radiosity is good at diffuse, and radiosity is view-independent
• Let's precompute the global illumination (sans specular) using radiosity, store it in a light map, and blend that with the detail texture (see the sketch below)
• That's the gist of it. Implementing it can be tricky. You don't need to use radiosity, either.
• Fun fact: id Software used to have an SGI Origin 2000 (16 x 180 MHz, 1.2 GB RAM) to crunch maps. They sold it on eBay in 2000.
• Note: probably should be called dark mapping…
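The blend itself is just a per-texel multiply of the base texture by the light map, which is what multitexturing's modulate mode gives you in one pass; a minimal sketch with illustrative types:

    // Modulate blend: final = base * lightmap, per channel.
    // Values are in [0, 1]; the light map darkens or tints the base texture.
    struct RGB { float r, g, b; };

    RGB lightmapBlend(RGB base, RGB light) {
        return { base.r * light.r, base.g * light.g, base.b * light.b };
    }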
No Light Mapping Quake 3: Arena nvnews.net
Light Mapping! Quake 3: Arena nvnews.net
(Cubic) Environment Mapping
• Approximate the reflections of arbitrary surfaces by, you guessed it, precomputation
• A cube map is a set of six 2d textures arranged like the faces of a cube and indexed by a 3d direction
• How to create a cubic environment map:
  • Place the camera somewhere (the cube map center)
  • Render the scene, creating one side of the cube map
  • Repeat, rotating the camera 90° to face each remaining direction, to make the other five sides
• Cubic environment map generation is expensive but can be done in real time. If the geometry and shading are static, there is no need for real-time generation.
• Cast your ray from the eye and reflect it about the normal as usual, except use the reflected vector as an index into the cube map to get the reflected color (see the sketch below)
  • The reflected vector need not be normalized
• We're making some assumptions here (the object doesn't reflect itself, for example)
• Gerald Schröcker
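A sketch of the lookup math: reflect the eye ray about the unit normal, then pick the cube face from the reflected vector's dominant axis; only the ratios of its components matter, which is why it need not be normalized. The Vec3 type is illustrative.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Reflect incident direction i about unit normal n: r = i - 2(n . i)n.
    Vec3 reflect(Vec3 i, Vec3 n) {
        float d = 2.0f * dot(n, i);
        return { i.x - d*n.x, i.y - d*n.y, i.z - d*n.z };
    }

    // Pick the cube map face from the dominant axis of r.
    // Faces: 0=+X, 1=-X, 2=+Y, 3=-Y, 4=+Z, 5=-Z.
    int cubeFace(Vec3 r) {
        float ax = std::fabs(r.x), ay = std::fabs(r.y), az = std::fabs(r.z);
        if (ax >= ay && ax >= az) return r.x > 0 ? 0 : 1;
        if (ay >= az)             return r.y > 0 ? 2 : 3;
        return                           r.z > 0 ? 4 : 5;
    }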
Environment Mapping Example developer.nvidia.com Cube maps have other uses too (like normalization)
Dot3 Bump Mapping
• a.k.a. normal mapping
• Gets its name from the three-element dot product used
• A normal map contains normals instead of colors at each texel
• Basic idea: interpolate the vector to the light across the surface and dot it with the appropriate normal in the normal map (see the sketch below)
  • Vertex program: slap the light vector in as an interpolated register
  • Pixel program: normalize the interpolated vector with a cube map, do the dot, and shade
• Really good looking, and you'll be seeing it used a lot
  • Doom III uses it extensively
• Not the only way to do bump mapping, btw
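A CPU-side sketch of the per-texel math (the real thing lives in a pixel program): decode the stored normal from bytes back to [-1, 1], then the three-element dot product with a unit tangent-space light vector gives the diffuse term. All names are illustrative.

    #include <algorithm>

    struct Vec3 { float x, y, z; };

    // Normal maps store each component remapped from [-1, 1] into a byte [0, 255].
    Vec3 decodeNormal(unsigned char r, unsigned char g, unsigned char b) {
        return { r / 127.5f - 1.0f, g / 127.5f - 1.0f, b / 127.5f - 1.0f };
    }

    // The "dot3": Lambertian diffuse from the decoded normal and a unit
    // tangent-space light vector l, clamped so back-facing texels go dark.
    float dot3Diffuse(Vec3 n, Vec3 l) {
        return std::max(0.0f, n.x*l.x + n.y*l.y + n.z*l.z);
    }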
Is this a real sphere?
Nope.
Boo!
Environment Bump Mapping
Geometry Images
• Hoppe, et al., SIGGRAPH 2002 (http://research.microsoft.com/~hoppe/)
• Use a texture to encapsulate a geometric model by slicing the model open and laying it flat
• We're good at compressing images => great geometry compression
• Lossy compression is free mipmapping!
• Will standard image processing algorithms work on a geometry image?
• Not used in anything whatsoever, but it's really neat. I'd like to see it done in hardware.
Culling
• There's way too much stuff to draw
  • The card can only help you so much (i.e. backface culling)
• Solving the visibility problem isn't done by the spatial data structure!
  • Quake 3 just uses the BSP to find where the camera is (and some other non-graphics stuff). Portal culling solves the visibility problem.
  • If you did an octree in Ray, the octree didn't tell you what to draw. Casting rays did; the octree just makes ray casting faster.
• Goal: figure out what we absolutely have to draw, and figure it out fast
• The culling trinity:
  • Backface culling: you know this one
  • View frustum culling: compare objects to the view frustum
  • Occlusion culling: eliminate objects that are blocked by other objects
    • Portal culling
    • Doing generalized occlusion culling fast is really hard
• Culling returns a potentially visible set (PVS)
  • Conservative vs. approximate
  • Ideally we'd like the exact visible set…
View Frustum Culling
• Construct the six planes of the view frustum and do simple in/out tests
  • Could do a frustum-sphere test ahead of time
  • Use bounding spheres/cubes on objects in the scene
• Let's do the culling in clip space because it makes the checking really easy
  • In clip space, if x >= -1, then the point is inside the left plane
  • If tx/tw is our world-space x coordinate transformed into clip space, then tx/tw >= -1, thus tx + tw >= 0, if the point is inside the left plane
  • tx = dot(row 0 of C, p)
  • tw = dot(row 3 of C, p)
  • tx + tw = dot(row 0 + row 3, p)
  • p is the point in world space, C is the entire camera transform (a code sketch follows below)
• www2.ravensoft.com/users/ggribb/plane%20extraction.pdf
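A sketch of that derivation in code, following the Gribb/Hartmann paper linked above: extract the left plane from the combined matrix C and test a bounding sphere against it. The row-major layout and types here are assumptions; the other five planes come from the analogous row combinations (right: row 3 - row 0, bottom: row 3 + row 1, and so on).

    #include <cmath>

    struct Vec4  { float x, y, z, w; };
    struct Plane { float a, b, c, d; };   // ax + by + cz + d >= 0 means "inside"
    struct Mat4  { Vec4 row[4]; };        // rows of the camera transform C, row-major

    // Left plane = row 3 + row 0, per the derivation above, normalized so the
    // plane equation gives a true signed distance.
    Plane leftPlane(const Mat4& C) {
        Plane p = { C.row[3].x + C.row[0].x, C.row[3].y + C.row[0].y,
                    C.row[3].z + C.row[0].z, C.row[3].w + C.row[0].w };
        float len = std::sqrt(p.a*p.a + p.b*p.b + p.c*p.c);
        p.a /= len; p.b /= len; p.c /= len; p.d /= len;
        return p;
    }

    // Sphere test: cull only when the bounding sphere is entirely outside the plane.
    bool sphereOutside(const Plane& p, float cx, float cy, float cz, float radius) {
        return p.a*cx + p.b*cy + p.c*cz + p.d < -radius;
    }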
Portal Culling
• Technically falls under occlusion culling, but it's so important that it's treated separately
• Divide the world into cells (e.g. rooms); cells are connected by portals (e.g. windows, doors). An adjacency graph describes the connections.
• High-level pseudocode (a C++ sketch follows below):
  • Draw the current cell with frustum culling
  • For each portal in the cell, see if the portal is in the frustum (this is done with 2d AABB intersections)
  • If the portal is in the frustum, we know we can see the cell on the other side of the portal. Recurse on that portal.
  • Else, we can't see that cell or any cells connected to it (big win!)
• Quake 3 precalculates each cell's PVS with portals, stores it in the BSP node, and draws with frustum culling
  • Quake 3 also uses cells to cut off sound
• Portals can also be used for neat mirror tricks, "real" portals, etc.
• This doesn't work too well in large, outdoor environments. Why?
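A minimal recursive sketch of the pseudocode above, using screen-space AABBs for the portal test; all types are illustrative, and a real implementation would also guard against portal cycles (e.g. a per-frame visited mark).

    #include <algorithm>
    #include <vector>

    // Axis-aligned screen-space rectangle; empty means "nothing visible".
    struct Rect {
        float x0, y0, x1, y1;
        bool empty() const { return x0 >= x1 || y0 >= y1; }
    };
    struct Cell;
    struct Portal { Rect bounds; Cell* target; };  // portal's screen-space AABB
    struct Cell   { std::vector<Portal> portals; void draw(const Rect& clip) {} };

    Rect intersect(const Rect& a, const Rect& b) {
        return { std::max(a.x0, b.x0), std::max(a.y0, b.y0),
                 std::min(a.x1, b.x1), std::min(a.y1, b.y1) };
    }

    // Draw this cell clipped to 'clip', then recurse into each cell whose
    // portal overlaps the clip rectangle, shrinking the rectangle as we go.
    // Cells behind invisible portals are never visited (the big win).
    void renderCell(Cell* cell, const Rect& clip) {
        cell->draw(clip);
        for (const Portal& p : cell->portals) {
            Rect visible = intersect(clip, p.bounds);
            if (!visible.empty())
                renderCell(p.target, visible);
        }
    }

Kick it off with renderCell(cellContainingCamera, fullScreenRect); the shrinking clip rectangle is what cuts off everything behind an unseen portal.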
Portal Example
Shadows
• The Good: shadow volumes
  • Represent shadows with actual geometry
  • More on the next slide
• The Bad: shadow mapping
  • Render the scene at low resolution from the light's perspective and make a texture out of the Z-buffer
  • Render the scene from the camera and projectively texture the Z-buffer texture from the light
  • Widely used, but fading fast thanks to fast shadow volume algorithms
  • Big, chunky aliasing and lots of crawlies caused by the low resolution and the perspective
  • Only really works for one light and has poor self-shadowing
• The Ugly: shadow hackin'
  • Black circle texture under your dude
  • Squash geometry onto a flat surface
• Very hot area of research right now thanks to hardware advances
  • Including our own Morgan, Spike, and Kevin!
• developer.nvidia.com/object/fast_shadow_volumes.html
Stencil Shadow Volumes (1/2)
• A shadow volume is formed by extruding silhouette edges to infinity
  • Point light shadow volumes fan out. What do directional light volumes do?
• Use the stencil buffer to do a point-in-polyhedron (PiP) test
  • The 3d version of point-in-polygon
  • If a point is in shadow, the ray to it intersects the shadow volume an odd number of times
  • Depth test pass/fail is how we count
• Z-fail testing is also called Carmack's Reverse
  • Z-pass makes sense but has issues (e.g. when the camera is inside a shadow volume); Z-fail is non-intuitive but robust
• Use a special projection matrix so infinite points aren't clipped when we draw the volume (take the limit as the far plane goes to infinity)
Stencil Shadow Volumes (2/2)
• Basic idea:

    draw the scene with only ambient light (gives us our depth values)
    for each light
        reset the count (clear the stencil buffer)
        for each occluder
            "draw" the shadow volume (count w/ stencil)
        turn the light on
        draw the scene, rendering pixels only where the count is zero

• This probably doesn't make any sense (see the sketch below). I really encourage you to look into it because it's really cool and I don't have enough time. Wookiees do not live on Endor.
• This algorithm is fill-rate dependent. Look at how many times we draw the scene! It's a multipass algorithm.
• Shadows are pixel-correct
• Sparingly used today (mostly Xbox games)
  • Doom III uses it everywhere
• Say goodnight to shadow mapping
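For the curious, a hedged OpenGL sketch of the z-pass counting step for one light (not what Doom III does; as the last slide said, robust engines use z-fail); drawShadowVolume() is a hypothetical function that submits the extruded volume geometry.

    #include <GL/gl.h>

    void drawShadowVolume();  // hypothetical: submits the extruded volume

    // Z-pass stencil counting for one light, assuming the ambient pass has
    // already filled the depth buffer.
    void countShadowVolume() {
        glClear(GL_STENCIL_BUFFER_BIT);                       // reset the count
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 0, ~0u);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // "draw": no color...
        glDepthMask(GL_FALSE);                                // ...or depth writes

        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);                      // front faces: +1 where depth test passes
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        drawShadowVolume();

        glCullFace(GL_FRONT);                     // back faces: -1 where depth test passes
        glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
        drawShadowVolume();

        // The lit pass then redraws the scene with
        // glStencilFunc(GL_EQUAL, 0, ~0u), so only pixels whose count ended
        // up zero (outside every volume) receive this light's contribution.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
    }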