
CS361




  1. Week 12 - Monday CS361

  2. Last time • What did we talk about last time? • Image processing • Filtering kernels • Color correction • Lens flare and bloom • Depth of field • Motion blur • Fog

  3. Questions?

  4. Project 4

  5. Review

  6. Shading

  7. Lambertian shading • Diffuse exitance Mdiff = cdiff EL cos θi • Lambertian (diffuse) shading assumes that outgoing radiance is (linearly) proportional to irradiance • Because diffuse radiance is assumed to be the same in all directions, we divide by π • Final Lambertian radiance Ldiff = (cdiff / π) EL cos θi (see the sketch below)
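
  A minimal numerical sketch of the bullet above, assuming cdiff is an RGB triple in [0, 1], EL is the light's irradiance, and n and l are unit vectors; the function name is illustrative, not from the slides:

    import math

    def lambertian_radiance(c_diff, E_L, n, l):
        """L_diff = (c_diff / pi) * E_L * cos(theta_i), clamped so back-facing light adds nothing."""
        cos_theta = max(0.0, sum(ni * li for ni, li in zip(n, l)))   # n and l assumed normalized
        return tuple(c * E_L * cos_theta / math.pi for c in c_diff)

    # Example: reddish surface, light 45 degrees off the normal
    print(lambertian_radiance((0.8, 0.2, 0.2), 5.0, (0.0, 0.0, 1.0), (0.0, 0.7071, 0.7071)))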

  8. Specular shading • Specular shading depends on the angle between the surface normal and the light vector and on the angle between the surface normal and the view vector • For the calculation, we compute h, the half vector, halfway between v and l: h = (l + v) / ||l + v||

  9. Specular shading equation • The total specular exitance is almost exactly the same as the total diffuse exitance: Mspec = cspec EL cos θi • What is seen by the viewer is a fraction of Mspec dependent on the half vector h • Final specular radiance: Lspec = ((m + 8) / 8π) cos^m θh cspec EL cos θi • Where does m come from? • It's the smoothness parameter: larger m means a smoother surface and a tighter highlight

  10. Implementing the shading equation • Final lighting combines the diffuse and specular terms: Lo(v) = (cdiff / π + ((m + 8) / 8π) cos^m θh cspec) EL cos θi (a code sketch follows below)
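
  A sketch of one way to evaluate the combined equation on the CPU, assuming the normalized Blinn-Phong form above; the helper names (normalize, dot, shade) are illustrative, not from the slides:

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def shade(c_diff, c_spec, m, E_L, n, l, v):
        """L_o = (c_diff/pi + (m+8)/(8 pi) * cos^m(theta_h) * c_spec) * E_L * cos(theta_i)."""
        cos_i = max(0.0, dot(n, l))
        if cos_i == 0.0:
            return (0.0, 0.0, 0.0)                                   # light is behind the surface
        h = normalize(tuple(li + vi for li, vi in zip(l, v)))        # half vector between l and v
        cos_h = max(0.0, dot(n, h))
        spec = (m + 8.0) / (8.0 * math.pi) * cos_h ** m
        return tuple((cd / math.pi + spec * cs) * E_L * cos_i for cd, cs in zip(c_diff, c_spec))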

  11. Aliasing

  12. Screen based antialiasing • Jaggies are caused by insufficient sampling • A simple method to increase sampling is full-scene antialiasing, which essentially renders to a higher resolution and then averages neighboring pixels together • The accumulation buffer method is similar, except that the rendering is done with tiny offsets and the pixel values summed together
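
  A toy sketch of the averaging step described above: render at k times the resolution (here, hi_res is just a 2D list of sample values whose dimensions are assumed to be multiples of k), then box-filter each k × k block down to one final pixel. The names and layout are assumptions for illustration:

    def downsample(hi_res, k):
        """Average k*k blocks of supersampled values into one output pixel each (box filter)."""
        h, w = len(hi_res), len(hi_res[0])
        out = []
        for y in range(0, h, k):
            row = []
            for x in range(0, w, k):
                block = [hi_res[y + dy][x + dx] for dy in range(k) for dx in range(k)]
                row.append(sum(block) / (k * k))
            out.append(row)
        return out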

  13. FSAA schemes • A variety of FSAA schemes exist with different tradeoffs between quality and computational cost

  14. Multisample antialiasing • Supersampling techniques (like FSAA) are very expensive because the full shader has to run multiple times • Multisample antialiasing (MSAA) attempts to sample the same pixel multiple times but only run the shader once • Expensive angle calculations can be done once while different texture colors can be averaged • Color samples are not averaged if they are off the edge of a pixel

  15. Transparency

  16. Sorting • Drawing transparent things correctly is order dependent • One approach is to do the following: • Render all the opaque objects • Sort the centroids of the transparent objects by distance from the viewer • Render the transparent objects in back to front order • To make sure that you don't draw on top of an opaque object, you test against the Z-buffer but don't update it (see the sketch below)
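
  A sketch of that ordering under stated assumptions: renderer.draw, obj.centroid, and the depth flags are hypothetical placeholders for whatever rendering API is in use, not a real interface:

    import math

    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def draw_with_transparency(opaque, transparent, eye, renderer):
        """Opaque first; then transparent back to front, depth-tested but not depth-written."""
        for obj in opaque:
            renderer.draw(obj, depth_test=True, depth_write=True)
        ordered = sorted(transparent, key=lambda o: dist(o.centroid, eye), reverse=True)
        for obj in ordered:
            renderer.draw(obj, depth_test=True, depth_write=False)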

  17. Problems with sorting • It is not always possible to sort polygons • They can interpenetrate • Hacks: • At the very least, use a Z-buffer test but not replacement • Turning off culling can help • Or render transparent polygons twice: once for each face

  18. Depth peeling • It is possible to use two depth buffers to render transparency correctly • First render all the opaque objects updating the depth buffer • On the second (and future) rendering passes, render those fragments that are closer than the z values in the first depth buffer but further than the value in the second depth buffer • Repeat the process until no pixels are updated
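
  A CPU-side sketch of the peeling loop for a single pixel, assuming we already have that pixel's list of (depth, color) fragments; the "second depth buffer" collapses here to the depth of the last peeled layer:

    def peel_layers(fragments):
        """Yield a pixel's transparent layers front to back, one per peeling pass."""
        last_peeled = float("-inf")
        while True:
            # Fragments strictly behind everything peeled so far.
            remaining = [f for f in fragments if f[0] > last_peeled]
            if not remaining:
                break                                           # nothing updated: stop peeling
            depth, color = min(remaining, key=lambda f: f[0])   # closest of what's left
            last_peeled = depth
            yield depth, color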

  19. Texturing

  20. Texturing • We've got polygons, but they are all one color • At most, we could have different colors at each vertex • We want to "paint" a picture on the polygon • Because the surface is supposed to be colorful • To appear as if there is greater complexity than there is (a texture of bricks rather than a complex geometry of bricks) • To apply other effects to the surface such as changes in material or normal • Textures are usually images, but they could be procedurally generated too

  21. Texture pipeline • We never get tired of pipelines • Go from object space to parameter space • Go from parameter space to texture space • Get the texture value • Transform the texture value • The u, v values are usually in the range [0,1]

  22. Magnification • Magnification is often done by filtering the source texture in one of several ways: • Nearest neighbor (the worst) takes the closest texel to the one needed • Bilinear interpolation linearly interpolates between the four neighbors (see the sketch below) • Bicubic interpolation probably gives the best visual quality at greater computational expense (and is generally not directly supported) • Detail textures are another approach
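
  A sketch of bilinear sampling, assuming scalar texel values in a 2D list, (u, v) in [0, 1], and clamp-to-edge addressing; real hardware also handles wrapping modes and multiple color channels:

    def bilinear_sample(texture, u, v):
        """Interpolate between the four texels surrounding (u, v)."""
        h, w = len(texture), len(texture[0])
        x, y = u * (w - 1), v * (h - 1)
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
        bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
        return top * (1 - fy) + bottom * fy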

  23. Minification • Minification is just as big of a problem (if not bigger) • Bilinear interpolation can work • But an onscreen pixel might be influenced by many more than just its four neighbors • We want to, if possible, have only a single texel per pixel • Main techniques: • Mipmapping • Summed-area tables • Anisotropic filtering

  24. Mipmapping in action • Typically a chain of mipmaps is created, each half the size of the previous • That's why cards like square power-of-2 textures • Often the filtered version is made with a box filter, but better filters exist • The trick is figuring out which mipmap level to use • The level d can be computed based on the change in u relative to a change in x (see the sketch below)
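
  One common heuristic for picking d, shown as a sketch: measure how many texels one screen pixel covers from the screen-space derivatives of (u, v), then take the log base 2. The parameter names are assumptions, not an official API:

    import math

    def mip_level(du_dx, dv_dx, du_dy, dv_dy, tex_width, tex_height):
        """d = log2 of the pixel's footprint in texels (0 = full-resolution level)."""
        len_x = math.hypot(du_dx * tex_width, dv_dx * tex_height)   # footprint along screen x
        len_y = math.hypot(du_dy * tex_width, dv_dy * tex_height)   # footprint along screen y
        rho = max(len_x, len_y)
        return 0.0 if rho <= 1.0 else math.log2(rho)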

  25. Trilinear filtering • One way to improve quality is to interpolate between the bilinearly filtered u, v samples from the nearest two d levels • Picking d can be affected by a level-of-detail bias term which may vary with the kind of texture being used

  26. Summed-area table • Sometimes we are magnifying in one axis of the texture and minifying in the other • Summed-area tables are another method to reduce the resulting overblurring • It sums up the relevant pixel values in the texture • It works by precomputing all possible rectangles (see the sketch below)
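
  A sketch of the precomputation and a constant-time rectangle average, assuming a 2D list of scalar texel values; entry (x, y) of the table stores the sum of everything above and to the left of that texel, inclusive:

    def build_sat(texture):
        """Build the summed-area table with one pass over the texture."""
        h, w = len(texture), len(texture[0])
        sat = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                sat[y][x] = (texture[y][x]
                             + (sat[y - 1][x] if y else 0.0)
                             + (sat[y][x - 1] if x else 0.0)
                             - (sat[y - 1][x - 1] if x and y else 0.0))
        return sat

    def rect_average(sat, x0, y0, x1, y1):
        """Average over the inclusive rectangle (x0, y0)-(x1, y1) using four table lookups."""
        total = (sat[y1][x1]
                 - (sat[y0 - 1][x1] if y0 else 0.0)
                 - (sat[y1][x0 - 1] if x0 else 0.0)
                 + (sat[y0 - 1][x0 - 1] if x0 and y0 else 0.0))
        return total / ((x1 - x0 + 1) * (y1 - y0 + 1))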

  27. Anisotropic filtering • Summed area tables work poorly for non-rectangular projections into texture space • Modern hardware uses unconstrained anisotropic filtering • The shorter side of the projected area determines d, the mipmap index • The longer side of the projected area is a line of anisotropy • Multiple samples are taken along this line • Memory requirements are no greater than regular mipmapping

  28. Alpha Mapping • Alpha values allow for interesting effects • Decaling is when you apply a texture that is mostly transparent to a (usually already textured) surface • Cutouts can be used to give the impression of a much more complex underlying polygon • 1-bit alpha doesn't require sorting • Cutouts are not always convincing from every angle

  29. Bump Mapping

  30. Bump mapping • Bump mapping refers to a wide range of techniques designed to increase small scale detail • Most bump mapping is implemented per-pixel in the pixel shader • 3D effects of bump mapping are greater than textures alone, but less than full geometry

  31. Normal maps • The results are the same, but these kinds of deformations are usually stored in normal maps • Normal maps give the full 3-component normal change • Normal maps can be in world space (uncommon) • Only usable if the object never moves • Or object space • Requires the object only to undergo rigid body transforms • Or tangent space • Relative to the surface, can assume positive z • Lighting and the surface have to be in the same space to do shading • Filtering normal maps is tricky
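
  A sketch of the tangent-space case: unpack a normal-map texel from [0, 1] storage back to [−1, 1] and rotate it into the surface's frame. The orthonormal (t, b, n) basis and the vectors-as-tuples convention are assumptions for illustration:

    def decode_tangent_normal(rgb, t, b, n):
        """Turn a tangent-space normal map texel into a unit normal in the lighting space."""
        nx, ny, nz = (2.0 * c - 1.0 for c in rgb)      # [0,1] channels -> [-1,1] components
        # Resulting normal = nx*T + ny*B + nz*N (blue channel points away from the surface).
        out = tuple(nx * ti + ny * bi + nz * ni for ti, bi, ni in zip(t, b, n))
        length = sum(c * c for c in out) ** 0.5
        return tuple(c / length for c in out)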

  32. Parallax mapping • Bump mapping doesn't change what can be seen, just the normal • High enough bumps should block each other • Parallax mapping approximates the part of the image you should see by moving from the height back along the view vector and taking the value at that point • The final point used is padj = p + (h · vxy) / vz (see the sketch below)
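
  A sketch of that offset, assuming the view vector is already in tangent space and the stored height is remapped by an illustrative scale and bias (not values from the slides); dividing by vz exaggerates the shift at grazing angles, which is why some variants drop that division:

    def parallax_offset(p_uv, height, view_ts, scale=0.05, bias=0.0):
        """p_adj = p + h * v_xy / v_z, with h derived from the height-map value."""
        h = height * scale + bias                      # remap stored height to a displacement
        vx, vy, vz = view_ts                           # normalized tangent-space view vector
        return (p_uv[0] + h * vx / vz, p_uv[1] + h * vy / vz)

    # Example: viewing the surface from 45 degrees
    print(parallax_offset((0.25, 0.75), height=0.5, view_ts=(0.7071, 0.0, 0.7071)))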

  33. Relief mapping • The weakness of parallax mapping is that it can't tell where it first intersects the heightfield • Samples are made along the view vector into the heightfield • Three different research groups proposed the idea at the same time, all with slightly different techniques for doing the sampling • There is much active research here • Polygon boundaries are still flat in most models

  34. Heightfield texturing • Yet another possibility is to change vertex position based on texture values • Called displacement mapping • With the geometry shader, new vertices can be created on the fly • Occlusion, self-shadowing, and realistic outlines are possible and fast • Unfortunately, collision detection becomes more difficult

  35. Types of lights • Real light behaves consistently (but in a complex way) • For rendering purposes, we often divide light into categories that are easy to model • Directional lights (like the sun) • Omni lights (located at a point, but evenly illuminate in all directions) • Spotlights (located at a point and have intensity that varies with direction) • Textured lights (give light projections variety in shape or color)

  36. BRDFs

  37. BRDF theory • The bidirectional reflectance distribution function is a function that describes the ratio of outgoing radiance to incoming irradiance • This function changes based on: • Wavelength • Angle of light to surface • Angle of viewer from surface • For point or directional lights, we do not need differentials and can write the BRDF as f(l, v) = Lo(v) / (EL cos θi)

  38. Revenge of the BRDF • The BRDF is supposed to account for all the light interactions we discussed in Chapter 5 (reflection and refraction) • We can see the similarity to the lighting equation from Chapter 5, now with a BRDF: Lo(v) = f(l, v) EL cos θi

  39. Fresnel reflectance • Fresnel reflectance is an ideal mathematical description of how perfectly smooth materials reflect light • The angle of reflection is the same as the angle of incidence, and the reflection vector can be computed as ri = 2(n · l)n − l • The transmitted (visible) radiance Lt is based on the Fresnel reflectance and the angle of refraction of light into the material, scaling with the fraction 1 − RF(θi) that is not reflected

  40. External reflection • Reflectance is obviously dependent on angle • Perpendicular (0°) gives essentially the specular color of the material • Higher angles will become more reflective • The function RF(θi) is also dependent on material (and the light color)
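
  The slides don't give a formula for RF(θi), but its angular behavior is very commonly approximated with Schlick's formula, RF(θi) ≈ RF(0°) + (1 − RF(0°))(1 − cos θi)^5; treating that choice as an assumption, a sketch:

    def fresnel_schlick(cos_theta_i, rf0):
        """Schlick's approximation per color channel; rf0 is the reflectance at 0 degrees."""
        factor = (1.0 - cos_theta_i) ** 5
        return tuple(r + (1.0 - r) * factor for r in rf0)

    # Example: water-like rf0, light hitting at 60 degrees (cos = 0.5)
    print(fresnel_schlick(0.5, (0.02, 0.02, 0.02)))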

  41. Snell's Law • The angle of refraction into the material is related to the angle of incidence and the refractive indexes of the materials below the interface and above the interface: n1 sin θi = n2 sin θt • We can combine this identity with the previous equation to express the result in terms of the angle of incidence and the two refractive indexes

  42. Area Lighting

  43. Area light sources • Area lights are complex • The book describes the 3D integration over a hemisphere of angles needed to properly quantify radiance • No lights in reality are point lights • All lights have an area that has some effect

  44. Ambient light • The simplest model of indirect light is ambient light • This is light that has a constant value • It doesn't change with direction • It doesn't change with distance • Without modeling occlusion (which usually ends up looking like shadows), ambient lighting can look very bad • We can add ambient lighting to our existing BRDF formulation with a constant term: a constant ambient radiance LA, multiplied by the surface's ambient color and added to the outgoing radiance

  45. Environment Mapping

  46. Environment mapping • A more complicated tool for area lighting is environment mapping (EM) • The key assumption of EM is that only direction matters • Light sources must be far away • The object does not reflect itself • In EM, we make a 2D table of the incoming radiance based on direction • Because the table is 2D, we can store it in an image

  47. EM algorithm • Steps: • Generate or load a 2D image representing the environment • For each pixel that contains a reflective object, compute the normal at the corresponding location on the surface • Compute the reflected view vector from the view vector and the normal • Use the reflected view vector to compute an index into the environment map • Use the texel for incoming radiance
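
  A sketch of the middle steps: reflect the view vector about the normal, then turn the resulting direction into 2D map coordinates. The latitude-longitude parameterization below is just one possible mapping (an assumption, not the slides' choice); v and n are assumed normalized, with v pointing from the surface toward the eye:

    import math

    def reflect(v, n):
        """Reflected view vector r = 2 (n . v) n - v."""
        d = 2.0 * sum(vi * ni for vi, ni in zip(v, n))
        return tuple(d * ni - vi for ni, vi in zip(n, v))

    def latlong_uv(r):
        """Index a 2D environment map by direction using a latitude-longitude layout (r normalized)."""
        x, y, z = r
        u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
        v = math.acos(max(-1.0, min(1.0, y))) / math.pi
        return u, v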

  48. Sphere mapping • Imagine the environment is viewed through a perfectly reflective sphere • The resulting sphere map (also called a light probe) is what you'd see if you photographed such a sphere (like a Christmas ornament) • The sphere map has a basis giving its own coordinate system (h,u,f) • The image was generated by looking along the f axis, with h to the right and u up (all normalized)

  49. Cubic environmental mapping • Cubic environmental mapping is the most popular current method • Fast • Flexible • Take a camera, render a scene facing in all six directions • Generate six textures • For each point on the surface of the object you're rendering, map to the appropriate texel in the cube
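
  A sketch of the last step, mapping a direction to one of the six faces and a (u, v) inside it. The sign conventions below follow one common (OpenGL-style) layout and differ between APIs, so treat them as an assumption:

    def cube_face_uv(d):
        """Pick the cube face a direction hits and the (u, v) within that face."""
        x, y, z = d
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:                      # dominant X axis
            face, sc, tc, ma = ("+x" if x > 0 else "-x"), (-z if x > 0 else z), -y, ax
        elif ay >= az:                                 # dominant Y axis
            face, sc, tc, ma = ("+y" if y > 0 else "-y"), x, (z if y > 0 else -z), ay
        else:                                          # dominant Z axis
            face, sc, tc, ma = ("+z" if z > 0 else "-z"), (x if z > 0 else -x), -y, az
        return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)

    # Example: looking straight down -z lands in the middle of the -z face
    print(cube_face_uv((0.0, 0.0, -1.0)))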

  50. Pros and cons of cubic mapping • Pros • Fast, supported by hardware • View independent • Shader Model 4.0 can generate a cube map in a single pass with the geometry shader • Cons • Sampling uniformity is better than with sphere maps, but still not perfect (isocubes improve this) • Still requires high dynamic range textures (lots of memory) • Still only works for distant objects
