
Computer Graphics - Discrete Techniques -




  1. Computer Graphics - Discrete Techniques - Hanyang University, Jong-Il Park

  2. Objectives • Buffers and pixel operations • Mapping methods • Texture mapping • Environmental (reflection) mapping • Variant of texture mapping • Bump mapping • Solves flatness problem of texture mapping • Blending • Anti-aliasing

  3. Buffer • Define a buffer by its spatial resolution (n x m) and its depth (or precision) k, the number of bits/pixel
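A buffer's memory footprint follows directly from this definition; a quick sketch in C (the function name is illustrative, and it assumes k is a multiple of 8):

```c
#include <stdint.h>

/* Total size in bytes of an n x m buffer with depth k bits per pixel:
   spatial resolution times precision, as defined on the slide. */
uint64_t buffer_bytes(uint64_t n, uint64_t m, uint64_t k) {
    return n * m * k / 8;  /* assumes k is a multiple of 8 */
}
```

For example, a 1280 x 1024 color buffer at 24 bits/pixel occupies just under 4 MB.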

  4. OpenGL Frame Buffer

  5. OpenGL Buffers • Color buffers can be displayed • Front • Back • Auxiliary • Overlay • Depth • Accumulation • High resolution buffer • Stencil • Holds masks

  6. Writing in Buffers • Conceptually, we can consider all of memory as a large two-dimensional array of pixels • We read and write rectangular blocks of pixels • Bit block transfer (bitblt) operations • The frame buffer is part of this memory • [Figure: writing a source block from memory into the frame buffer (destination)]
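A bitblt of the kind described above can be sketched as a row-by-row copy; everything here (the function name, 32-bit pixels, strides in pixels) is illustrative, not an OpenGL API:

```c
#include <string.h>

/* Minimal sketch of a bit-block transfer (bitblt): copy a w x h block of
   32-bit pixels from a source raster into a destination raster.
   Strides are row lengths in pixels; (sx,sy) and (dx,dy) are the top-left
   corners of the source and destination blocks. */
void bitblt(const unsigned *src, int src_stride, int sx, int sy,
            unsigned *dst, int dst_stride, int dx, int dy,
            int w, int h) {
    for (int row = 0; row < h; ++row)
        memcpy(dst + (size_t)(dy + row) * dst_stride + dx,
               src + (size_t)(sy + row) * src_stride + sx,
               (size_t)w * sizeof(unsigned));
}
```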

  7. The Limits of Geometric Modeling • Although graphics cards can render over 10 million polygons per second, that number is insufficient for many phenomena • Clouds • Grass • Terrain • Skin

  8. Modeling an Orange • Consider the problem of modeling an orange (the fruit) • Start with an orange-colored sphere • Too simple • Replace sphere with a more complex shape • Does not capture surface characteristics (small dimples) • Takes too many polygons to model all the dimples

  9. Modeling an Orange (2) • Take a picture of a real orange, scan it, and “paste” onto simple geometric model • This process is known as texture mapping • Still might not be sufficient because resulting surface will be smooth • Need to change local shape • Bump mapping

  10. Three Types of Mapping • Texture mapping • Uses images to fill inside of polygons • Environment (reflection) mapping • Uses a picture of the environment for texture maps • Allows simulation of highly specular surfaces • Bump mapping • Emulates altering normal vectors during the rendering process

  11. Texture Mapping • [Figure: geometric model vs. texture-mapped result]

  12. Environment Mapping

  13. Bump Mapping

  14. Where does mapping take place? • Mapping techniques are implemented at the end of the rendering pipeline • Very efficient because few polygons make it past the clipper

  15. Is it simple? • Although the idea is simple---map an image to a surface---there are 3 or 4 coordinate systems involved • [Figure: 2D image mapped onto a 3D surface]

  16. Coordinate Systems • Parametric coordinates • May be used to model curves and surfaces • Texture coordinates • Used to identify points in the image to be mapped • Object or World Coordinates • Conceptually, where the mapping takes place • Window Coordinates • Where the final image is really produced

  17. Texture Mapping • [Figure: parametric coordinates → texture coordinates → world coordinates → window coordinates]

  18. Mapping Functions • Basic problem is how to find the maps • Consider mapping from texture coordinates to a point on a surface • Appear to need three functions: x = x(s,t), y = y(s,t), z = z(s,t) • But we really want to go the other way

  19. Backward Mapping • We really want to go backwards • Given a pixel, we want to know to which point on an object it corresponds • Given a point on an object, we want to know to which point in the texture it corresponds • Need a map of the form s = s(x,y,z) t = t(x,y,z) • Such functions are difficult to find in general

  20. Two-part mapping • One solution to the mapping problem is to first map the texture to a simple intermediate surface • Example: map to cylinder

  21. Box Mapping • Easy to use with simple orthographic projection • Also used in environment maps

  22. Second Mapping • Map from intermediate object to actual object • Normals from intermediate to actual • Normals from actual to intermediate • Vectors from center of intermediate • [Figure: intermediate and actual objects]

  23. Aliasing • Point sampling of the texture can lead to aliasing errors • [Figure: point samples in u,v (or x,y,z) space miss the blue stripes; point samples shown in texture space]

  24. Area Averaging • A better but slower option is to use area averaging • [Figure: pixel and its preimage] • Note that the preimage of a pixel is curved
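A box-filter approximation of area averaging can be sketched as follows; the real preimage is curved, as the slide notes, so the axis-aligned texel rectangle here is a simplifying assumption:

```c
/* Sketch of area averaging: instead of one point sample, average all
   texels covered by an axis-aligned approximation of the pixel's
   preimage, [x0..x1] x [y0..y1], in a w-wide single-channel texture. */
double area_average(const double *tex, int w,
                    int x0, int y0, int x1, int y1) {
    double sum = 0.0;
    int count = 0;
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            sum += tex[y * w + x];
            ++count;
        }
    return sum / count;
}
```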

  25. Basic Strategy • Three steps to applying a texture • specify the texture • read or generate image • assign to texture • enable texturing • assign texture coordinates to vertices • Proper mapping function is left to application • specify texture parameters • wrapping, filtering

  26. Texture Mapping • [Figure: image in (s, t) → geometry in (x, y, z) → display]

  27. Mapping a Texture • Based on parametric texture coordinates • glTexCoord*() specified at each vertex • [Figure: triangle a-b-c in texture space, with (s, t) = (0.2, 0.8), (0.4, 0.2), (0.8, 0.4) at a, b, c, mapped to triangle A-B-C in object space; s and t run from 0 to 1]

  28. Typical Code
  glBegin(GL_POLYGON);
    glColor3f(r0, g0, b0);   // if no shading used
    glNormal3f(u0, v0, w0);  // if shading used
    glTexCoord2f(s0, t0);
    glVertex3f(x0, y0, z0);
    glColor3f(r1, g1, b1);
    glNormal3f(u1, v1, w1);
    glTexCoord2f(s1, t1);
    glVertex3f(x1, y1, z1);
    // ...
  glEnd();

  29. Magnification and Minification • More than one texel can cover a pixel (minification) or more than one pixel can cover a texel (magnification) • Can use point sampling (nearest texel) or linear filtering (2 x 2 filter) to obtain texture values • [Figure: texture vs. polygon for magnification and minification]

  30. Environment Mapping • Environment mapping is a way to create the appearance of highly reflective surfaces without ray tracing, which requires global calculations • Examples: The Abyss, Terminator 2 • Is a form of texture mapping • Supported by OpenGL and Cg

  31. Example

  32. Reflecting the Environment • [Figure: normal N, view vector V, reflection R]

  33. Mapping to a Sphere • [Figure: normal N, view vector V, reflection R intersecting the intermediate sphere]
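Both figures rely on reflecting the view vector about the surface normal, r = 2(n · v)n − v; a minimal sketch (assumes unit-length inputs, matching the N, V, R labels above):

```c
/* Reflect the view vector v about the unit normal n:
   r = 2 (n . v) n - v.  Both n and v are assumed unit length. */
void reflect(const double n[3], const double v[3], double r[3]) {
    double d = n[0]*v[0] + n[1]*v[1] + n[2]*v[2];  /* n . v */
    for (int i = 0; i < 3; ++i)
        r[i] = 2.0 * d * n[i] - v[i];
}
```

The reflected vector R is then used to index the environment map (sphere or cube) instead of tracing a real reflected ray.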

  34. Cube Map

  35. Issues • Must assume environment is very far from object (equivalent to the difference between near and distant lights) • Object cannot be concave (no self reflections possible) • No reflections between objects • Need a reflection map for each object • Need a new map if viewer moves

  36. Bump mapping

  37. Modeling an Orange • Consider modeling an orange • Texture map a photo of an orange onto a surface • Captures dimples • Will not be correct if we move viewer or light • We have shades of dimples rather than their correct orientation • Ideally we need to perturb normal across surface of object and compute a new color at each interior point

  38. Bump Mapping (Blinn) • Consider a smooth surface • [Figure: surface with normal n at point p]

  39. Rougher Version • [Figure: displaced point p′ with perturbed normal n′]

  40. Displacement Function • p′ = p + d(u,v) n • d(u,v) is the bump or displacement function • |d(u,v)| << 1

  41. Approximating the Normal • n′ = p′u × p′v ≈ n + (∂d/∂u) n × pv + (∂d/∂v) n × pu • The vectors n × pv and n × pu lie in the tangent plane • Hence the normal is displaced in the tangent plane • Must precompute the arrays ∂d/∂u and ∂d/∂v • Finally, we perturb the normal during shading

  42. Image Processing • Suppose that we start with a function d(u,v) • We can sample it to form an array D = [dij] • Then ∂d/∂u ≈ dij − di-1,j and ∂d/∂v ≈ dij − di,j-1 • Embossing: multipass approach using the accumulation buffer
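The backward differences above can be evaluated per texel of the sampled displacement map; a sketch (assumes i ≥ 1, j ≥ 1, and row-major storage; the function name is illustrative):

```c
/* Approximate the partial derivatives of a sampled displacement map D
   (row-major, w texels wide) by backward differences:
     du ~ d[i][j] - d[i-1][j],  dv ~ d[i][j] - d[i][j-1].
   Here i indexes the u direction (columns) and j the v direction (rows). */
void bump_gradient(const double *D, int w, int i, int j,
                   double *du, double *dv) {
    *du = D[j * w + i] - D[j * w + (i - 1)];   /* requires i >= 1 */
    *dv = D[j * w + i] - D[(j - 1) * w + i];   /* requires j >= 1 */
}
```

These two arrays are what gets precomputed before the normal is perturbed during shading.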

  43. Example: Bump Mapping

  44. Opacity and Transparency • Opaque surfaces permit no light to pass through • Transparent surfaces permit all light to pass • Translucent surfaces pass some light • translucency = 1 − opacity (a) • [Figure: opaque surface, a = 1]

  45. Physical Models • Dealing with translucency in a physically correct manner is difficult due to • the complexity of the internal interactions of light and matter • the use of a pipeline renderer

  46. Writing Model • Use the A component of RGBA (or RGBα) color to store opacity • During rendering we can expand our writing model to use RGBA values • [Figure: source component scaled by the source blending factor, combined with the destination component scaled by the destination blending factor, written into the color buffer]

  47. Blending Equation • We can define source and destination blending factors for each RGBA component: s = [sr, sg, sb, sa], d = [dr, dg, db, da] • Suppose that the source and destination colors are b = [br, bg, bb, ba], c = [cr, cg, cb, ca] • Blend as c′ = [br sr + cr dr, bg sg + cg dg, bb sb + cb db, ba sa + ca da]
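Written out per component, the blending equation is just c′ = b s + c d; a direct sketch:

```c
/* Per-component blending: out = b * s + c * d, where b is the source
   color, c the destination color, and s, d the source and destination
   blending factors (all RGBA). */
void blend(const double b[4], const double s[4],
           const double c[4], const double d[4], double out[4]) {
    for (int i = 0; i < 4; ++i)
        out[i] = b[i] * s[i] + c[i] * d[i];
}
```

With s set to the source alpha and d to one minus the source alpha, this reproduces the familiar "over" compositing operator.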

  48. Fog • We can composite with a fixed color and have the blending factors depend on depth • Simulates a fog effect • Blend source color Cs and fog color Cf by Cs′ = f Cs + (1 − f) Cf • f is the fog factor • Exponential • Gaussian • Linear (depth cueing)

  49. Fog Functions

  50. OpenGL Fog Functions
  GLfloat fcolor[4] = {……};
  glEnable(GL_FOG);
  glFogi(GL_FOG_MODE, GL_EXP);
  glFogf(GL_FOG_DENSITY, 0.5);
  glFogfv(GL_FOG_COLOR, fcolor);
