Intro to Computer Graphics Projects

I am always trying to learn more about graphics programming. Previously, I completed Ray Tracing in One Weekend and experimented with OpenGL. The University of Utah recently posted all the lectures and assignments from its Introduction to Computer Graphics course, and over about three to four months I watched all the lectures and completed the assignments. It was nice that the projects provided a skeleton to work from and achievable objectives, even if I didn't get the feedback you would typically get from taking the class with the professor. Unfortunately, it's been a few months since I completed this work, so my memory of the exact implementation details is thin. All projects were built with JavaScript and WebGL. You can check out all the completed projects for this course on GitHub Pages at https://zackthomas1.github.io/CS4600_IntroCompGraphics/

Project 1: Compositing Images

The first project was to create a basic web-based image compositor. The program needed to include functionality for positioning image layers, basic blend modes, adjusting alpha channel values, and importing images from the user's desktop.

Adjusting alpha values and importing images was handled using standard HTML/CSS input elements. Much like Photoshop, changing the order of image layers in the side panel changes the order of blend operations, affecting which layer appears on top in the final composite.

I implemented four layer blend modes: normal, additive, difference, and multiply. The math for the blend modes was actually simple: a basic arithmetic operation, such as addition, subtraction, or multiplication, combined with a linear interpolation on the alpha value to adjust the opacity of each layer. In more "mathy" notation, a blend operation looks like ((alpha) * foregroundPixel) + ((1 - alpha) * backgroundPixel).
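
Here is a minimal sketch of what that per-pixel math can look like in JavaScript. The function and variable names are illustrative, not the project's actual code; fg and bg are [r, g, b] values in the 0-255 range.

// Hypothetical per-pixel blend: combine foreground and background channels
// with the selected arithmetic operation, then lerp by the layer's alpha.
function blendPixel(fg, bg, alpha, mode) {
	return fg.map((f, i) => {
		const b = bg[i];
		let blended;
		switch (mode) {
			case "additive":   blended = Math.min(255, f + b); break;
			case "difference": blended = Math.abs(f - b);      break;
			case "multiply":   blended = (f * b) / 255;        break;
			default:           blended = f;                    break; // normal
		}
		// (alpha * foregroundPixel) + ((1 - alpha) * backgroundPixel)
		return alpha * blended + (1 - alpha) * b;
	});
}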


By far the trickiest bit of functionality to implement was positioning image layers. To accomplish this, the underlying array that stores the color value of each pixel needed to be indexed based on position data from mouse input. I ran into lots of interesting buggy results, such as image layers stretching or partially disappearing. Issues were greatest in the literal edge cases, when a layer was positioned partially outside the bounds of the canvas. The key insight for solving the positioning issues was that a pixel at position (x, y) in a canvas of width canvasWidth and height canvasHeight can be indexed into the flat array with index = (canvasWidth * y) + x.
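
Here is a rough sketch of that indexing at work when copying a positioned layer onto the canvas, assuming canvas-style RGBA data (four bytes per pixel); the names are illustrative, not the project's actual code.

// Hypothetical layer copy: clip pixels that land outside the canvas bounds,
// then map (x, y) positions to flat array indices with (width * y) + x.
function copyLayer(layer, canvas, offsetX, offsetY) {
	for (let y = 0; y < layer.height; y++) {
		for (let x = 0; x < layer.width; x++) {
			const cx = x + offsetX;
			const cy = y + offsetY;
			// Skip pixels positioned outside the canvas (the edge cases)
			if (cx < 0 || cx >= canvas.width || cy < 0 || cy >= canvas.height) continue;
			const src = 4 * (layer.width * y + x);
			const dst = 4 * (canvas.width * cy + cx);
			for (let c = 0; c < 4; c++) canvas.data[dst + c] = layer.data[src + c];
		}
	}
}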

Screenshot of the web-based compositor

Project 2: Transformations

The goal of this project was to create a drone "flight simulator" using WASD controls to pilot the drone. While the project was silly on the surface, the underlying goal was to better understand 4×4 transformation matrices and homogeneous coordinates. Instead of using the standard HTML/CSS methods for setting image transformations, matrices were used to calculate translation, rotation, and scale. Transformation matrices are a bit unnecessary for 2D transformations like the ones performed in this project, but they are crucial to understanding coordinate system transformations and are fundamental to graphics and rendering.
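
As a rough illustration, here is one way to build such a matrix in JavaScript, in the column-major layout WebGL expects. The function name and parameters are mine, not the project skeleton's.

// Hypothetical model matrix: uniform scale, then rotation about Z, then
// translation (T * R * S), stored column-major for WebGL.
function modelMatrix(tx, ty, angleRad, s) {
	const c = Math.cos(angleRad), n = Math.sin(angleRad);
	return [
		 s * c,  s * n, 0, 0,  // column 0
		-s * n,  s * c, 0, 0,  // column 1
		     0,      0, 1, 0,  // column 2
		    tx,     ty, 0, 1   // column 3
	];
}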

Project 3: Curves

The main purpose of this project was to learn about cubic Bezier curves. This project also served as a gentle introduction to WebGL. The red curve is rendered through WebGL rather than CSS, and as such is interactive. Here is how the Bezier curve is evaluated in the vertex shader.

// Vertex Shader
var curvesVS = `
	attribute float t;
	uniform mat4 mvp;
	uniform vec2 p0;
	uniform vec2 p1;
	uniform vec2 p2;
	uniform vec2 p3;
	void main()
	{
		// Evaluate the cubic Bezier curve at parameter t using the Bernstein
		// form: (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3
		vec2 pos = pow(1.0 - t, 3.0) * p0 +
		           3.0 * pow(1.0 - t, 2.0) * t * p1 +
		           3.0 * (1.0 - t) * pow(t, 2.0) * p2 +
		           pow(t, 3.0) * p3;
		gl_Position = mvp * vec4(pos, 0.0, 1.0);
	}
`;
Cubic Bezier curve
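
On the JavaScript side, the only per-vertex attribute this shader needs is t, sampled uniformly on [0, 1]; the control points go in as uniforms. Below is a sketch of the buffer setup and draw call, assuming the program and uniforms are already set and using made-up names like tBuffer and tAttribLocation.

// Fill the attribute buffer with parameter values from 0 to 1
const steps = 100;
const tValues = new Float32Array(steps + 1);
for (let i = 0; i <= steps; i++) tValues[i] = i / steps;

gl.bindBuffer(gl.ARRAY_BUFFER, tBuffer);
gl.bufferData(gl.ARRAY_BUFFER, tValues, gl.STATIC_DRAW);
gl.vertexAttribPointer(tAttribLocation, 1, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(tAttribLocation);

// Each vertex evaluates the curve at its own t; connecting them draws the curve
gl.drawArrays(gl.LINE_STRIP, 0, steps + 1);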

Project 4: Triangle Meshes

This was the first project that really required a deep dive into WebGL. The goal was to create a web-based 3D model viewer. I got the basic UI from the project skeleton. My main tasks were to implement a system for rotating and panning around the model using 4×4 transformation matrices, plus the basic setup required for any WebGL/OpenGL project: setting the vertex buffer, vertex attributes, and uniforms. Most of the work involving WebGL calls in JavaScript is about setting state and filling buffers with data that the shader programs will then operate on. The vertex and fragment shaders for this program are basic, since everything is flat shaded.
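
A minimal sketch of that state-setting pattern, with assumed names (gl, prog, positions, mvpMatrix) rather than the skeleton's actual ones:

// Create a vertex buffer and fill it with position data
const posBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, posBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);

// Wire the buffer to the shader's position attribute
const posLoc = gl.getAttribLocation(prog, "pos");
gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(posLoc);

// Set the model-view-projection uniform and draw
gl.useProgram(prog);
const mvpLoc = gl.getUniformLocation(prog, "mvp");
gl.uniformMatrix4fv(mvpLoc, false, mvpMatrix);
gl.drawArrays(gl.TRIANGLES, 0, positions.length / 3);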

Flat-shaded model viewer

Project 5: Shading

The next project extends the previous one with an improved light and material model. Previously, models were flat shaded. By implementing a Phong shader, changes to the scene's light direction affect the look and brightness of the model. The Phong material model allows for both diffuse and glossy reflections. I have discussed Phong shaders in a previous post, so I won't go into detail again. It's amazing how simple the basic concept is, and that it still produces fairly reasonable-looking results. A major shortfall of Phong shading is that it is not physically based, meaning one can adjust the parameters to produce physically impossible results, such as a material that reflects more energy than it receives.

Model with Phong shading.

For anyone interested, below is my Phong mesh fragment shader.

const meshFS = `
	precision mediump float;
	// input uniforms 
	uniform bool showTex;
	uniform vec3 lightDir;
	uniform vec3 lightColor;
	uniform vec3 specColor;
	uniform float lightIntensity;
	uniform float phongExpo;
	uniform sampler2D tex;
	// inputs from vertex shader
	varying vec2 v_texCoord; 
	varying vec3 v_viewNormal;
	varying vec4 v_viewFragPos; 
	void main(){
		vec4 diffuseColor = vec4(1.0); // Cr
		if(showTex){
			diffuseColor = texture2D(tex, v_texCoord);
		}else{
			diffuseColor =  vec4(1.0, 0.1, 0.1, 1.0);
		}
		// dot product between normalized normal and lightDir vectors results 
		// in cos(theta) where theta is the angle between the two vectors 
		float geometryTerm = max(0.0,dot(normalize(v_viewNormal), normalize(lightDir))); 
		vec4 lightingColor = lightIntensity * vec4(lightColor, 1.0); // Cl
		vec4 ambientColor =  lightIntensity * vec4(0.1,0.1,0.1,1.0); // Ca
		vec4 diffuseLighting = diffuseColor * (ambientColor + (lightingColor * geometryTerm));
		// View direction points from the fragment toward the camera,
		// which sits at the origin in view space
		vec3 viewDir = normalize(-vec3(v_viewFragPos));
		vec3 halfAngle = normalize(normalize(lightDir) + viewDir);
		float cosOmega = max(0.0, dot(halfAngle, normalize(v_viewNormal)));
		vec4 specularLighting = lightIntensity * vec4(specColor,1.0) *  vec4(lightColor, 1.0) * pow(cosOmega, phongExpo);
		gl_FragColor = diffuseLighting + specularLighting;
	}
`;

Project 6: Ray Tracing

This was really the project I was looking forward to when I started. The main objective is to implement real-time ray tracing. An important distinction to make is that this is not full path tracing like what one would expect from a physically based renderer, such as Arnold. Instead, this project implements what is typically referred to as Whitted ray tracing, which can calculate perfect mirror reflections but not rough reflections. I have previously talked about light reflection models in my post about The Ray Tracer Challenge, so I won't go into detail again. While I have done similar work before, I had never implemented it in real time. A challenge is that all the models have to be passed to the shader program. This challenge was sidestepped by only using spheres, which have a straightforward quadratic representation: an array containing the radius and center of each sphere is passed to the shader and used to build the quadratic equation for intersection testing. A fun consequence of this structure is that any quadric surface can be represented. Taking what I learned in my multivariable calculus class, I was able to render an ellipsoid, a paraboloid, and other surfaces.
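
For reference, the per-sphere intersection test boils down to the quadratic formula. Here is a GLSL sketch with illustrative names, not the project's exact code.

// Substituting the ray p(t) = origin + t * dir into |p - center|^2 = r^2
// yields a quadratic a*t^2 + b*t + c = 0 in the ray parameter t.
float intersectSphere(vec3 origin, vec3 dir, vec3 center, float radius) {
	vec3 oc = origin - center;
	float a = dot(dir, dir);
	float b = 2.0 * dot(oc, dir);
	float c = dot(oc, oc) - radius * radius;
	float disc = b * b - 4.0 * a * c;
	if (disc < 0.0) return -1.0;          // ray misses the sphere
	return (-b - sqrt(disc)) / (2.0 * a); // nearest hit; caller checks t > 0
}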

Raster render with no ray-traced reflections
Ray-traced (Whitted-style) perfect mirror reflections

Project 7: Animation

The final project was to implement a physics-based animation system. The project uses the same UI and skeleton as the previous model viewer projects. The formal name of the system implemented here is a spring-mass-damper system. Basically, each vertex is an object with a defined mass, and the edges between vertices are springs, with a defined stiffness, that resist the compressive and tensile forces applied to the object. Using concepts from Newtonian physics, numerous real-world phenomena can be simulated. In full disclosure, the physics system in this program breaks really easily: if the time step is too large, the applied forces will be too large and the object will be blown apart into infinity. With a more serious approach, such issues can be accounted for and corrected. It's hard to show the effects of a physics system in an image, so I encourage you to check it out on GitHub Pages.
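
To give a flavor of the update loop, here is a minimal sketch of one spring-mass-damper time step using semi-implicit Euler integration. The data layout and names are illustrative, not the project's actual code; each particle has { pos, vel, mass } and each spring { i, j, restLen, stiffness }.

function step(particles, springs, damping, gravity, dt) {
	// Start each particle with gravity, then accumulate spring forces
	const forces = particles.map(p => ({ x: 0, y: p.mass * gravity }));
	for (const s of springs) {
		const a = particles[s.i], b = particles[s.j];
		const dx = b.pos.x - a.pos.x, dy = b.pos.y - a.pos.y;
		const len = Math.hypot(dx, dy);
		if (len === 0) continue; // degenerate spring
		// Hooke's law: force proportional to stretch past the rest length
		const f = s.stiffness * (len - s.restLen) / len;
		forces[s.i].x += f * dx; forces[s.i].y += f * dy;
		forces[s.j].x -= f * dx; forces[s.j].y -= f * dy;
	}
	for (let k = 0; k < particles.length; k++) {
		const p = particles[k];
		// Semi-implicit Euler: update velocity first, then position with the
		// new velocity; too large a dt still blows the simulation apart
		p.vel.x += (forces[k].x / p.mass - damping * p.vel.x) * dt;
		p.vel.y += (forces[k].y / p.mass - damping * p.vel.y) * dt;
		p.pos.x += p.vel.x * dt;
		p.pos.y += p.vel.y * dt;
	}
}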