Jack, H., Farmer-Haywood, M., "CS 488/533 Final Report on the Rubik's Cube Visualizer", University of Western Ontario, 1992
This report discusses a user interface and ray tracing package for the display of a Rubik's Cube. The package was produced using only low-level graphics calls; all high-level functions, from transformations to ray tracing, were written as customized algorithms. The final package comes in four modules: the User Interface, the Ray Tracer, the Quantizer, and the Display Utilities.
The user interface performs real-time animation of the Rubik's Cube in three dimensions, and prepares the data format required by the Ray Tracer. It is constructed around a control loop which recognizes flags and toggles. This loop executes constantly, so the user may provide input without stopping the animation graphics.
The Ray Tracer has been constructed to run in batch mode: the scene is read from a file, the image is processed, and the result is written to another file. This allows the user interface to be used while the graphics are being processed. The ray tracer incorporates both reflection and refraction, and a checkerboard floor is included to increase the visual impact of the image. All of the ray tracing properties may be set from the User Interface program.
The Ray Tracer outputs RGB intensities for each pixel. These must be converted to a limited color map for most machines, so a program to quantize the colors in the file has been written. The results of this process are also written to a file, and a program has been devised to load the image and display it on a Sunview-based display.
The user interface pictured in Figure 1 incorporates all of the functions described under the specifications. (See Appendix A for a copy of the first report). The screen is divided into two major sections, Control at the top and Display at the bottom. The controls are slightly different from the first report, but the operation is almost identical.
ii) The cube control options (also on the right-hand side) are almost the same as before, except the rotation may be set to coarser increments for faster rotation. Also, erasing the cube before redrawing can be turned off, but this seems to have little effect on the speed.
iii) In the centre of the control window is the side selector. This will allow different sides to be selected. If the user points to select a face of the cube (described later) this menu will update to the last choice.
iv) The left-hand side has basic buttons for the control of the cube. Most are identical to previous options, but in review, a quick summary follows. The reset button resets all viewing, cube, and color parameters; it does not reset the ray tracing parameters. The scramble cube button begins scrambling the cube and changes to read "stop scramble"; hitting this button again stops the scrambling. The two buttons for rotating the side are the same as before. The change colors button brings up a color edit window. The edit view button brings up a menu for directly altering viewing parameters. The ray tracing button brings up the ray tracing settings window. There is the necessary quit button, and finally a freeze button. When the freeze button is pressed, it suspends all animation on the screen and changes to read "continue"; pressing the button again resumes the action.
The color window appears as in Figure 2. This window is essentially identical to the one presented in Appendix A. The user selects the color of interest at the top of the window; the three sliders below then show the color value for that item and may be used to alter the color. The bottommost window displays the colors themselves. Two other buttons are available for quitting and resetting the colors. All color changes are made immediately.
View parameters may be adjusted with the sliders shown in Figure 3 (this figure is incomplete) to alter any of the drawing parameters. The use of the sliders should result in immediate update of the display screen. The window may be dismissed with the view window off button.
Finally, the ray tracer window (in Figure 4) is used for generating the ray tracing parameter file. The use of the different variables will be explained later, but most variables are controlled via sliders. The only things not slider controlled are the revolving menus for image size and maximum number of rays. At the very top of the window is a button for saving the image to a file. The file name is specified in the line below. The extension ‘.scene’ will be added to the file name. There is also a quit button at the top of the screen.
The data structures described in the first report (Appendix A) are still used, and the discussion in that report is valid for almost all aspects of the graphics. There are two major revisions which supersede the previous report: the addition of an animation control structure, and the introduction of buffer swapping.
A two-tiered approach has been developed to permit the control loop to perform stop/start animation. The first tier examines inputs and user requests (from mouse and keyboard), which are used to set and clear internal control flags. The second tier uses these control flags to decide how to redraw the object. During execution the program loops through these operations, providing a separation between simultaneous control and display motion. As a result the user may start scrambling the cube, look at it from different angles while it is rotating, and freeze the motion at any time. With this architecture it is also easy to add more 'real time' control functions.
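The two-tier structure can be sketched as below. This is a minimal sketch: the flag and event names are assumptions, since the report does not list the actual identifiers.

```c
#include <stddef.h>

/* Hypothetical control flags -- the report does not name the real ones. */
struct ctrl_flags {
    int scrambling;   /* set/cleared by the "scramble cube" button  */
    int frozen;       /* toggled by the "freeze"/"continue" button  */
    int redraw;       /* set whenever the display must be refreshed */
};

enum event { EV_NONE, EV_SCRAMBLE, EV_FREEZE };

/* Tier 1: translate a raw input event into flag changes. */
void handle_event(struct ctrl_flags *f, enum event ev)
{
    if (ev == EV_SCRAMBLE) f->scrambling = !f->scrambling;
    if (ev == EV_FREEZE)   f->frozen     = !f->frozen;
    f->redraw = 1;
}

/* Tier 2: decide from the flags whether to advance the animation. */
int should_animate(const struct ctrl_flags *f)
{
    return !f->frozen && (f->scrambling || f->redraw);
}
```

Because the two tiers communicate only through the flags, new 'real time' controls can be added by extending the event handler without touching the drawing code.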
The use of hierarchical data definition and storage begins with the abstraction of the sub-cubes from the cube data structure, the cube data structure from the real-world coordinates, and finally the real-world coordinates from the transformed screen display. This hierarchy is also demonstrated by the ability to use the cube display to derive a data set to drive the Ray Tracer.
The Ray Tracer has a number of features worthy of discussion. This section attempts to cover each issue in enough depth to clarify the structure of the software, beginning with a review of the models used, then progressing to the advanced issues handled within the software, and finally the algorithms used.
The theory of ray tracing has two basic concerns: the light path, and the light transmitted along it. Both involve interactions with real objects, so light-object interaction must also be covered in this section. Note that in most of the models discussed the geometric relations are not given here; almost all of them are standard, derived from other sources, or easily obtained with dot products.
Point lighting is somewhat more sophisticated than ambient lighting. In this case the direct light illuminates the surface. The effect is approximated using the angle to the light source and some constants (based on material properties) to estimate the 'brightness'. Note that this calculation is not done if the collision point lies in a shadow.
The highlights which often appear on an illuminated object can be estimated also. In this case the Phong model is used. This model includes an estimate of how ‘fuzzy’ the light patch appears. This is not done if the collision point lies in a shadow.
When light strikes a surface it is often reflected. The reflection model is quite simple. In this case a simple fraction of the incoming light will be used. More sophisticated models may be constructed using information about particular materials.
When light enters or exits a material, its path changes. There are also optical adjustments based on the transparency of the specific material. The path change is determined by Snell's law, and the change in optical properties is considered by a simplified formula.
After the collision vector has been calculated, the object’s transparency must be taken into account. This is done by using a transparency coefficient ‘t’. When ‘t’ has a value of 1 the object is opaque, and all light originates from the surface. If the object has a ‘t’ value of 0, then all light is from refraction.
The method for finding the intersection between a ray and a sphere has been covered in "An Introduction to Ray Tracing" by Glassner. Therefore, the method will only be reviewed briefly. The method is quite simple because the relationship may be stated explicitly. To state the description in the book:
A method for ray-plane intersection was used to find collisions with the checkerboard. This method was devised from scratch. The proof will not be given, but it involves a parametric representation of the line, which is substituted into the plane equation. The parameter value is then found, and the collision point calculated. The algorithm for line/plane intersection is given below:
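A hedged C sketch of the substitution just described (the variable names are assumptions, not the report's original listing): the plane is n.p = k, the line is p = o + t*d, and solving for the parameter gives t = (k - n.o) / (n.d).

```c
/* Line/plane intersection.  Returns 1 and fills 'hit' on a forward
   intersection, 0 if the ray is parallel to the plane or the
   intersection lies behind the ray origin. */
int hit_plane(const double o[3], const double d[3],
              const double n[3], double k, double hit[3])
{
    double nd = n[0]*d[0] + n[1]*d[1] + n[2]*d[2];
    if (nd == 0.0) return 0;                       /* parallel to the plane */
    double t = (k - (n[0]*o[0] + n[1]*o[1] + n[2]*o[2])) / nd;
    if (t <= 0.0) return 0;                        /* behind the origin */
    for (int i = 0; i < 3; i++)
        hit[i] = o[i] + t*d[i];                    /* collision point */
    return 1;
}
```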
This algorithm is also based on an explicit relationship, therefore it is quite fast. In the best case there are 3 multiplies, 2 adds and 1 compare. In the worst case there are 12 multiplies, 1 divide, 10 adds/subtracts.
This function was simplified by aligning the checkerboard along the x-z plane. This avoided the problem of having to transform the point on the plane (thus speeding calculations). The only problem to be considered was how to map alternating squares onto the floor. This involved a calculation of a remainder value when dividing the x-z location by the grid spacing.
This structure involves performing as many calculations as possible before execution begins. Although not all calculations can be done beforehand, some tricks which greatly improve the speed of ray tracing are described below.
The process of filling the image may be approached in several ways. If a simple image is needed, a simple scan of all the pixels can be made; an anti-aliasing technique requires oversampling of the pixels. When a pixel is chosen, it must be mapped to a ray in space. This ray may be considered on a purely geometrical basis.
Rays traced backward through space either hit an object or take on the background color. If an object has been hit, the ray can either reflect or refract, and it takes on the optical properties of the surface. This leads to a tree data structure for the collisions.
As seen in Figure 15, the light rays may be followed, but the only colors (color explanation follows later) attached to any nodes in this tree are the diffuse lighting from ambient and point sources, and also the specular reflection. After the tree has been fully expanded, the light may be traced forward to the eye. Figure 16 shows the light being propagated back up the tree to the eye (screen pixel).
Both of these groups of data items have been 'lumped' together in separate data structures. As mentioned before, the rays are organized as a linked list; the geometric information is stored in a second structure. This approach makes it very easy to pass the various classes of information around the system without pushing large numbers of variables onto the stack.
There are a number of implementation concerns. Most of these can be described as separate problems, which makes the implementation much simpler. Some interesting ideas arose in the development of efficiency improvement techniques: not only the precalculation of values, but also the pre-ordering and pre-screening of processes.
Colors are all represented on a normalized scale in the program. This allows continuous color values to grow and shrink, and to be scaled at the end of the image processing. One drawback is the cost of increased accuracy in calculations, which leads to an increase in calculation time (i.e. doubles instead of ints). Before the unquantized image is saved to an ASCII file, the colors are converted to a 0 to 255 scale, and all color values outside that range are clipped. If the maximum color was then found to be less than 95% of 255, the colors were scaled up to make the most use of the display capabilities. Note: the problem of color quantization is covered in the next section.
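The conversion, clipping, and rescaling steps can be sketched as below; the function names are assumptions.

```c
/* Convert a normalized channel value to the 0..255 scale, clipping
   anything that falls outside the range. */
int to_byte(double c)
{
    int v = (int)(c * 255.0 + 0.5);
    if (v < 0)   v = 0;
    if (v > 255) v = 255;
    return v;
}

/* If the brightest value is below 95% of 255, rescale the whole image
   so the brightest value becomes 255, making the most of the display. */
void rescale(int *pix, int n)
{
    int max = 0;
    for (int i = 0; i < n; i++)
        if (pix[i] > max) max = pix[i];
    if (max == 0 || max >= (int)(0.95 * 255.0))
        return;                            /* already bright enough */
    for (int i = 0; i < n; i++)
        pix[i] = pix[i] * 255 / max;
}
```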
In an effort to speed computation, some tricks were devised based on the geometry of the problem. In particular, all of the balls were sorted by distance from a reference object, which avoids checking all balls by ensuring that the closest balls are checked first. There are three cases of interest, based on the object the light ray has come from:
Eye: The cube is assumed to be on the eye side of the floor, so the balls should all be checked first, starting from the closest and working towards the farthest. If no collisions have occurred, check for a collision with the floor; if none is found, the background has been hit.
The reader should note that the cases are based on a few geometrical properties. Firstly, if the balls are not touching the floor (and are visible), any ray which hits them will not hit the floor. Also, because the balls do not overlap, they may be checked for collisions in order of depth: if one ball has a closer centre, it will never be overlapped by a ball farther away. Thus, for each of the three cases listed above, all balls are sorted in order of depth from the various targets, and when checking collisions the cases above are applied (this provides a great time savings over checking collisions in a random order).
Bounding test volumes were set up, but these have proven ineffective in most cases where the scene is filled; their only benefit is in a nearly empty scene, where they save a ray calculation for every pixel. There are two main bounding volumes: the first is constructed around the set of balls, and the second represents the half of the screen in which the floor will be displayed (this could be all or none of it).
Within the image certain objects will be in shadow, reducing the amount of light they receive. To detect this case a ray is traced between the light and the point of interest (the only objects which can be hit here are spheres). If the ray hits any sphere, the shadow is confirmed, and the specular reflection and diffuse lighting from the point source are ignored.
To speed the checking process, the balls have been ordered by their distance from the light. The balls are checked in order until one is hit, or until the distance to the ball is greater than the distance between the point and the light.
In Figure 17 there is a set of balls ordered by depth from the light source. If we are looking for a shadow on any point of Ball 2, we first find a vector from the point to the light, and the search begins from the light. If the ray strikes Ball 1, then Ball 2 is in its shadow; if it misses, Ball 2 is not in a shadow. Say we are finding a shadow on the floor, and the floor is known to be some distance from the light between that of Ball 1 (d1) and Ball 2 (d2). The algorithm would check for collisions with Ball 1, find that d2 is greater than the distance to the floor, and, since no balls have been hit, conclude that the floor is not in a shadow and discontinue the search. This can save a great deal of search time when the light is close to the floor.
A problem with ray tracing is that features smaller than a pixel can be missed. Another problem is that features not aligned to the pixel grid become choppy. A method for solving this problem is Supersampling Anti-aliasing. This uses an approach where the pixel map is sampled normally, as well as between pixels, and the pixel values are combined.
The black points in Figure 18 are sampled, the four points around it are used to adjust the value. This will tend to give a local averaging effect which overcomes sudden discontinuities in image space.
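The combination step amounts to a local average over the five samples; a minimal sketch:

```c
/* Average the primary sample with the four extra samples taken around
   it, giving the local smoothing that hides sudden discontinuities. */
double antialias(double centre, const double around[4])
{
    double sum = centre;
    for (int i = 0; i < 4; i++)
        sum += around[i];
    return sum / 5.0;
}
```

In practice this is applied per color channel, so the cost of the extra samples is the dominant expense, not the averaging itself.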
The display window shown below allows the user to enter the file name of an image to be displayed. If the file contains the raw image from the Ray Tracer, the user must first process it by selecting Quantize Image File. The color quantization program produces an ASCII file with raster format headings. If the image is stored as (an ASCII) raster file, the user may select Display Image File and display the image directly to the screen.
The input file to the quantizer is an ASCII file with information on the image produced by the ray tracer rendering module. The first two numbers represent the width and the height of the image; the remaining width*height entries represent the RGB intensities for each pixel. An RGB vector representing each of these intensities is stored in the array pict. The color vectors are used to set the colormap and to obtain the indices into the colormap for the image colors. The ASCII file used in processing the RGB values is portable.
The output raster file from the quantizer is also an ASCII file, for portability. It contains raster file headers, which are needed for Sunview to read the file and display the image. The header information is as follows:
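For reference, the standard Sun rasterfile header consists of eight 32-bit fields, sketched below as a C struct. The comments reflect the usual values for an 8-bit image with an RGB colormap; the report's actual header listing may differ.

```c
/* The standard Sun rasterfile header (eight 32-bit fields). */
struct rasterfile {
    int ras_magic;      /* 0x59a66a95 identifies a rasterfile        */
    int ras_width;      /* image width in pixels                     */
    int ras_height;     /* image height in pixels                    */
    int ras_depth;      /* bits per pixel (8 for a colormapped image)*/
    int ras_length;     /* length of the image data in bytes         */
    int ras_type;       /* file type, e.g. RT_STANDARD               */
    int ras_maptype;    /* RMT_EQUAL_RGB for an RGB colormap         */
    int ras_maplength;  /* colormap size: 3 * number of map entries  */
};
```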
The output file from the quantizer is used as the input file for the ray display unit. As the raster header information is read, it is checked to verify that the input file is indeed a raster file and to determine its type, and the type of colormap used: in this case, RGB.
After the header information has been read and checked, the colormap is set up. Values for the red intensities are read in first, then the green, and the blue. The remaining information in the file holds the indices into the colormap for each pixel in the image.
Rather than using the binary file read and write commands, which are extremely fast, we have used the text file commands, which are slower. The binary file is compact but unreadable, whereas the ASCII file is readable and portable.
The algorithm used for color quantization is the octree quantization method given in Graphics Gems, edited by Andrew Glassner. In this method the RGB color information is read sequentially from an input data file. The first k different colors are used as the initial entries of the color table. When another color is added, so that there are now k+1 colors, some very closely related colors are merged into one and their mean is used as the entry in the colormap. This procedure is repeated for each additional color.
The RGB cube can be represented by an octree of depth eight through recursive subdivision. The RGB values (0 to 255) are coordinates into the RGB cube. At each level, the bit at the corresponding position in each of the R, G, and B values forms a 3-bit pattern, which is used as the index for the branch into the octree.
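The bit-pattern indexing can be sketched as below: at level 0 the most significant bit of each channel is used, at level 7 the least significant. The function name is an assumption.

```c
/* Branch index (0..7) at a given octree level, taken from the RGB
   coordinates: bit (7 - level) of each channel is packed into a
   3-bit pattern, with R as the high bit and B as the low bit. */
int octree_branch(int r, int g, int b, int level)
{
    int shift = 7 - level;            /* level 0 uses the top bit */
    return (((r >> shift) & 1) << 2)
         | (((g >> shift) & 1) << 1)
         |  ((b >> shift) & 1);
}
```

Following these indices down the tree for all eight levels places each color in its own leaf; the reduction step then merges sibling leaves when the colormap limit is exceeded.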
The next step is to combine successors of an intermediate node to one leaf node if there are more than the number of colors permitted in the colormap. Closest neighbors are found by putting the position of each leaf node in the octree into a linear list in sorted order and then finding leaves belonging to the same parent node. This is done by masking out the bits below a given parent node level and then finding a parent node with two or more children.
A list is used to hold octree positions for the representative colors in the colormap. The list is sorted in ascending order to facilitate finding the most closely related colors when it becomes necessary to reduce the number of colors to the maximum size allowed in the colormap.
This assignment was an excellent conclusion to the course. It allowed an opportunity to work with almost all of the major topics in the last half of the course, from shading to pattern mapping. As can be seen, the ray tracer has been nicely blended into the software package, using external routines to make the entire package much more friendly.
The entire package was produced in 'C'. This was advantageous because of the tools available on the workstations for graphics development. Sunview provided a number of tools for constructing high quality user interfaces. One drawback to Sunview is that under Unix it cannot be allocated a time slice, so the animation may become choppy. Also, the slow nature of the processors hinders the speed of the ray tracer, and this is no small problem.
This work was done in a versatile manner that should allow easy extension to a more flexible ray tracer. As for the project itself, it is very good, but we suggest including polyhedra, which would add another interesting dimension to the image. The possibility of doing simple ray traced images with the same package is also exciting. Some other possible additions to the ray tracer could be: