• This method no longer uses polygons, but relies directly on the geometrical model.
• The rays of light that make up the pixels of the picture are traced backwards and followed through reflection, refraction, and ambient light.
• This method generates a horrendous number of calculations.
• The figure on the next page shows how ray tracing is done for the first bounce; many more bounces may be performed for greater realism.
- Realistic textures and appearances are easy to apply
- This is the only display method suited to rendering curved surfaces
- The images approach photographic quality
- Incredibly slow due to the large number of calculations
- Requires a very powerful computer
• Support technology is being produced at a tremendous rate.
• Currently used for producing pictures and videos for sales, marketing, etc. Many TV commercials are produced with this method, and it is used in movies such as Terminator 2 and The Abyss.
• The theory of ray tracing has two basic concerns: first, the path the light follows, and second, the light transmitted along that path. Both involve interactions with real objects, so light-object interaction must also be covered in this section. Note that the full geometric derivations are not given for most of the models discussed; almost all of the relations are stated directly, drawn from other sources, or easily obtained with dot products.
• Ambient light is assumed to be evenly distributed over the entire volume of space. This naturally occurring light will provide a basic level of illumination for all objects.
• Point lighting is somewhat more sophisticated than ambient lighting. In this case direct light illuminates the surface. The effect is approximated using the angle to the light source and some constants (based on material properties) to estimate the ‘brightness’. Note that this calculation is not done if the collision point lies in a shadow.
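• A minimal sketch of this brightness estimate, assuming a simple Lambertian diffuse term; the names k_ambient and k_diffuse are illustrative constants, not the original code:

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Estimate surface brightness from ambient light plus one point source.
     * n         - unit surface normal at the collision point
     * to_light  - unit vector from the collision point toward the light
     * in_shadow - nonzero if the point is shadowed (point-source term skipped)
     * k_ambient, k_diffuse - assumed material constants                        */
    double brightness(Vec3 n, Vec3 to_light, int in_shadow,
                      double k_ambient, double k_diffuse)
    {
        double b = k_ambient;                  /* ambient level for all objects */
        if (!in_shadow) {
            double cosang = dot(n, to_light);  /* cosine of angle to the light  */
            if (cosang > 0.0)
                b += k_diffuse * cosang;       /* diffuse (angle-based) term    */
        }
        return b;
    }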
• The method for finding the intersection between a ray and a sphere has been covered in “An Introduction to Ray Tracing” by Glassner [1989]. Therefore, the method will only be reviewed briefly. The method is quite simple because the relationship may be expressed explicitly. To state the description from the book:
i) Find distance squared between ray origin and centre,
ii) Calculate ray distance which is closest to centre,
iii) Test if ray is outside and points away from sphere,
iv) Find square of half chord intersection distance,
v) Test if square is negative,
vi) Calculate intersection distance,
vii) Calculate normal at point.
• This algorithm gives at worst 16 additions, 13 multiplies, 1 root, and 3 compares.
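• A minimal sketch of this ray-sphere test, following the steps above from Glassner's geometric method (variable names are illustrative):

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* o, d - ray origin and unit direction; c, r - sphere centre and radius.
     * Returns 1 and fills *t with the intersection distance, or 0 for a miss. */
    int ray_sphere(Vec3 o, Vec3 d, Vec3 c, double r, double *t)
    {
        Vec3 oc = { c.x - o.x, c.y - o.y, c.z - o.z };
        double l2  = dot(oc, oc);          /* i)   distance squared, origin to centre */
        double tca = dot(oc, d);           /* ii)  ray distance closest to centre     */
        if (l2 > r*r && tca < 0.0)         /* iii) outside and pointing away          */
            return 0;
        double thc2 = r*r - l2 + tca*tca;  /* iv)  square of half-chord distance      */
        if (thc2 < 0.0)                    /* v)   negative square => no intersection */
            return 0;
        *t = (l2 > r*r) ? tca - sqrt(thc2) /* vi)  intersection distance              */
                        : tca + sqrt(thc2);
        return 1;                          /* vii) normal = (hit point - c)/r, caller */
    }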
• A method for ray-plane intersection was used to find collisions with the checkerboard. This method was devised from scratch. The proof will not be given, but it involves a parametric representation of the line, which is substituted into the plane equation. The parameter value is then found, and the collision point calculated. The algorithm for line-plane intersection is given below:
i) The dot product between I and N is found,
ii) If the dot product is greater than, or equal to, zero then the line does not intersect the plane,
iii) The line-plane intersection parameter is calculated (the dot product may be used here),
iv) The intersection point is calculated.
• This algorithm is also based on an explicit relationship, so it is quite fast. In the best case there are 3 multiplies, 2 adds, and 1 compare. In the worst case there are 12 multiplies, 1 divide, and 10 adds/subtracts.
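• A minimal sketch of this line-plane test, assuming the plane is stored as the equation N·X + d = 0 (not necessarily the original representation):

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* o, i - ray origin and direction I; n, d - plane normal N and offset d.
     * Returns 1 and fills *hit, or 0 if the ray is parallel to or points
     * away from the front face of the plane.                                */
    int ray_plane(Vec3 o, Vec3 i, Vec3 n, double d, Vec3 *hit)
    {
        double denom = dot(i, n);             /* i)   dot product of I and N       */
        if (denom >= 0.0)                     /* ii)  no intersection              */
            return 0;
        double t = -(dot(n, o) + d) / denom;  /* iii) line-plane parameter         */
        hit->x = o.x + t * i.x;               /* iv)  intersection point           */
        hit->y = o.y + t * i.y;
        hit->z = o.z + t * i.z;
        return 1;
    }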
• When rendering images we often want to add a texture or pattern to a surface. This can be done by keeping a separate spline model of the object; this parametric model is interrogated for each surface point as it is rendered.
• A simple example is the mapping of a checkerboard pattern on a plane using a function.
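• A minimal sketch of such a function (the square_size parameter and the 0/1 color encoding are assumptions):

    #include <math.h>

    /* Pick the color of a floor point (x, z) from the square it falls in. */
    int checker_color(double x, double z, double square_size)
    {
        int ix = (int)floor(x / square_size);
        int iz = (int)floor(z / square_size);
        return (ix + iz) & 1;   /* 1 = one color, 0 = the other */
    }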
• This ray tracer backtracks rays and follows their paths through space. This involves a basic set of functions. First, the general algorithm for ray tracing is outlined:
Basic Order of Higher Level Ray Tracer Features
• This structure involves performing as many calculations as possible before execution begins. Although not all calculations can be done in advance, some tricks that greatly improve the speed of ray tracing are described below.
• There are several approaches to filling in the image. If a simple image is needed, a simple scan of all the pixels can be made; an anti-aliasing technique requires oversampling of the pixels. When a pixel is chosen, it must be mapped to a ray in space. This ray may be considered on a purely geometrical basis.
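• A minimal sketch of mapping a pixel to a ray direction for a simple perspective camera; the field of view and the camera looking down -z are assumptions, not the program's actual perspective parameters:

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    /* Map pixel (px, py) in a width x height image to a unit ray direction. */
    Vec3 pixel_to_ray(int px, int py, int width, int height, double fov_deg)
    {
        const double pi = 3.14159265358979323846;
        double aspect = (double)width / (double)height;
        double scale  = tan(fov_deg * 0.5 * pi / 180.0);    /* half-angle tangent   */
        Vec3 d;
        d.x = (2.0 * (px + 0.5) / width  - 1.0) * aspect * scale;
        d.y = (1.0 - 2.0 * (py + 0.5) / height) * scale;
        d.z = -1.0;                                          /* camera looks down -z */
        double len = sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        d.x /= len;  d.y /= len;  d.z /= len;                /* normalize            */
        return d;
    }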
• When rays are backtracked through space, they will either hit an object or take on the background color. If a ray hits an object, it can reflect or refract, and it takes on the optical properties of the surface. This leads to a tree data structure for the collisions.
The Basic Concept of Back Tracing Rays
• As seen in the figure above, the light rays may be followed, but the only colors (the color calculation is explained later) attached to any nodes in this tree are the diffuse lighting from ambient and point sources, plus the specular reflection. After the tree has been fully expanded, the light may be traced forward to the eye. The figure shows the light being propagated back up the tree to the eye (screen pixel).
The Basic Concept of Back Tracing Light Values in a Ray Tree
• This structure gives rise to a tree based on a linked list. Each node in the linked list has:
• Surface normals, and optical properties for refraction (used for backtracking),
• The lighting from diffuse and specular sources,
• Vectors for before and after reflection and refraction,
• Positions for ray origins and collision points,
• The depth in the search tree,
• The identification of which object the ray has come from and has hit.
• All of these properties are used in a dynamic fashion, and the list may be grown until all branches have been terminated (i.e. hit the background) or are at the maximum search depth.
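• As a sketch, a node holding these items might be declared as follows (the field names are illustrative, not the original declaration):

    /* One node in the ray tree, grown dynamically as a linked structure. */
    typedef struct RayNode {
        double normal[3];           /* surface normal at the collision point      */
        double refract_index;       /* optical property used for refraction       */
        double diffuse, specular;   /* lighting from diffuse and specular sources */
        double dir_in[3];           /* ray direction before the collision         */
        double dir_reflect[3];      /* direction after reflection                 */
        double dir_refract[3];      /* direction after refraction                 */
        double origin[3];           /* ray origin                                 */
        double hit_point[3];        /* collision point                            */
        int    depth;               /* depth in the search tree                   */
        int    from_object;         /* object the ray has come from               */
        int    hit_object;          /* object the ray has hit                     */
        struct RayNode *reflected;  /* child ray produced by reflection           */
        struct RayNode *refracted;  /* child ray produced by refraction           */
    } RayNode;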
• This ray tree structure is constructed from data stored in a second structure. This second structure holds the scene data and contains the geometric information:
• Objects for each ball, including centres, radii, and an ordered list of the closest balls,
• An object for the floor, eye, and light, including positions, orientations, and a list of balls, ordered from closest to farthest,
• The same perspective parameters used in the user interface,
• A list of picture areas filled only with background light,
• Optical properties for the balls and floor: index of refraction, shininess, transparency, and the diffuse, Phong, and specular constants,
• Precalculated values for use in the program.
• Both of these groups of data items have been ‘lumped’ together in separate data structures. As mentioned before, the rays are organized as a linked list, and the geometric information is stored in a second structure. This approach makes it very easy to pass the various classes of information around the system without pushing large numbers of variables on the stack.
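• As a sketch, the scene structure might look like the following (the field names and the fixed ball count are assumptions for illustration only):

    #define N_BALLS 3   /* assumed number of balls in the scene */

    /* Scene geometry and optical data lumped into one structure so it can be
     * passed around without pushing many separate variables on the stack.   */
    typedef struct {
        double ball_centre[N_BALLS][3];
        double ball_radius[N_BALLS];
        int    ball_order[N_BALLS];         /* balls ordered closest to farthest  */
        double floor_normal[3], floor_offset;
        double eye[3], light[3];
        double fov, screen_dist;            /* perspective parameters             */
        double refract_index[N_BALLS + 1];  /* optical properties; last entry     */
        double shininess[N_BALLS + 1];      /* is the floor                       */
        double transparency[N_BALLS + 1];
        double k_diffuse[N_BALLS + 1], k_phong[N_BALLS + 1], k_specular[N_BALLS + 1];
    } Scene;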
• Bounding test volumes were set up, but these have proven to be ineffective in most cases where the scene is filled; their only benefit is in a nearly empty scene, where they save a ray calculation for every pixel. There are two main bounding volumes. The first is constructed around the set of balls. The second represents the half of the screen in which the floor will be displayed (this could be all or none of it).
• Within the image certain objects will be in shadow. In these cases there is a reduction in the amount of light. To detect this, a ray is traced between the light and the point of interest (the only objects that can be hit here are spheres). If the ray hits any sphere, the shadow is confirmed. When a shadow is confirmed, the specular reflection and the diffuse lighting from the point source are ignored.
• To speed the checking process, the balls have been ordered by their distance from the light. The balls are checked in order until one is hit, or until a ball's distance from the light exceeds the distance between the point and the light.
• In the figure above there is a set of balls ordered by depth from the light source. If we are looking for a shadow on any point of Ball 2, we first find a vector from the point to the light. The search then begins backward from the light. If the ray strikes Ball 1, then Ball 2 is in its shadow; if it misses, then Ball 2 is not in shadow. Say we are finding a shadow on the floor, and the floor point is known to lie at some distance from the light between Ball 1 (d1) and Ball 2 (d2). The algorithm would check for collisions with Balls 1 and 2. The check would reveal that d2 is greater than the distance to the floor point and, since no balls will have been hit, it may be concluded that the floor is not in shadow, and the search is discontinued. This can save a great deal of search time when the light is close to the floor.
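• A minimal sketch of this ordered shadow check; it reuses the ray-sphere test sketched earlier, and all names are illustrative:

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    /* Ray-sphere test as sketched earlier (declaration only). */
    int ray_sphere(Vec3 o, Vec3 d, Vec3 c, double r, double *t);

    /* Return 1 if 'point' is shadowed.  The balls are assumed pre-sorted by
     * distance from the light; the loop stops when a ball is hit or when a
     * ball lies farther from the light than the point itself.              */
    int in_shadow(Vec3 point, Vec3 light, const Vec3 *centre,
                  const double *radius, const double *dist_from_light, int n_balls)
    {
        Vec3 d = { point.x - light.x, point.y - light.y, point.z - light.z };
        double point_dist = sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        d.x /= point_dist;  d.y /= point_dist;  d.z /= point_dist;

        for (int i = 0; i < n_balls; i++) {
            double t;
            if (dist_from_light[i] > point_dist)
                return 0;               /* remaining balls lie beyond the point */
            if (ray_sphere(light, d, centre[i], radius[i], &t) && t < point_dist)
                return 1;               /* a ball blocks the light              */
        }
        return 0;
    }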
• A problem with ray tracing is that features smaller than a pixel can be missed. Another problem is that features not aligned to the pixel grid appear choppy. A method for solving these problems is supersampling anti-aliasing, in which the pixel map is sampled at the normal pixel positions as well as between pixels, and the sample values are combined.
Sample Points for Anti-aliasing
The black points in Figure 18 are sampled, and the four points around each one are used to adjust its value. This tends to give a local averaging effect, which overcomes sudden discontinuities in image space.
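• A minimal sketch of combining the samples; the equal weighting between the centre sample and the average of its four neighbours is an assumed choice:

    /* Blend a pixel's own sample with the four samples around it. */
    double antialias(double centre, const double corner[4])
    {
        double sum = 0.0;
        for (int i = 0; i < 4; i++)
            sum += corner[i];
        return 0.5 * centre + 0.5 * (sum / 4.0);   /* local averaging */
    }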