John Brosz's 453 (F2008) Web Site

Tutorial: T02 (TR 1600) in MS 239.
Lectures: L01 (MWF 1000) in EDC 280.
Contact: brosz at cpsc.ucalgary.ca
Office: None, email me.

What's up . . .
  • Dec 16 - I've finished marking the raytracers and have posted all the assignment marks. Please check them to ensure everything is correct. Also, if you have any questions about your A4 mark send me an email and I'll give you the details.
  • Dec 6 - Don't forget the Ray Tracing Contest! Find out more here.
    Also, please be very thorough in your submission of assignment 4. Your readme file should indicate what aspects of the ray tracer are demonstrated in each submitted image as well as how the scene has been set up. To ensure you get full marks you should demonstrate problematic scenarios. Don't be afraid of submitting a lot of images; it wouldn't be a bad idea to keep the images you generate as you test your added features and then submit those. Here are some example problematic scenarios that you might find useful:
    • Place a light source between two spheres. This demonstrates that you indeed have a point light source and confirms that shadows are working properly (i.e., that your shadow rays don't report shadows past the light source; see the sketch after these examples).
    • Place two spheres in a line to one side of the light source. The further of the two spheres should be shadowed.
    • Demonstrate reflections between two objects in a manner that shows multiple levels (preferably 3 or 4) of reflection back and forth.
    • Make sure you show at least one specular highlight as well as a scene with multiple light sources (and good lighting). Don't put the light source in the same position in every image.
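    To illustrate the shadow-ray point from the first scenario, here is a minimal standalone sketch of the distance test. The names (Vec3, blocksLight) are my own placeholders rather than anything from the assignment skeleton; the idea is simply that a hit beyond the light source must not count as an occluder.
    #include <cmath>
    // Hypothetical minimal vector type; your own classes will differ.
    struct Vec3 { double x, y, z; };
    static double length(const Vec3 &v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
    // Given the distance t of the nearest hit along the (normalized) shadow ray and the
    // distance from the shaded point to the light, decide whether the point is shadowed.
    // A negative t means the shadow ray hit nothing.
    static bool blocksLight(double t, double distToLight)
    {
        const double eps = 1e-4;                      // keeps a surface from shadowing itself
        return t > eps && t < distToLight - eps;      // hits past the light do not count
    }
    int main()
    {
        Vec3 point = {0, 0, 0}, light = {0, 5, 0};
        Vec3 d = {light.x - point.x, light.y - point.y, light.z - point.z};
        double distToLight = length(d);
        bool nearHit = blocksLight(2.0, distToLight); // occluder between point and light: true
        bool farHit  = blocksLight(8.0, distToLight); // occluder beyond the light: false
        return (nearHit && !farHit) ? 0 : 1;
    }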
  • Dec 4 - Two notes. The first is that the final review is at 4pm on Dec 8 in ST 126. The second is that a couple of people have had trouble with barycentric coordinates as described in the course notes. The problem seems to result from calculating the third coordinate by subtracting the other two from one (i.e., s3 = 1 - s2 - s1). The easiest solution is to calculate s3 directly (s3 = .5 * |(a-p)X(b-p)| / s) and then test s1 > 0, s2 > 0, s3 > 0, s1 + s2 + s3 < 1.001 (the .001 fends off numerical instability problems).
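    If it helps, here is a rough standalone sketch of that area-based test; the names are my own placeholders rather than the course skeleton's. Each coordinate is computed directly as a ratio of sub-triangle area to total area, and the sum gets the .001 slack.
    #include <cmath>
    // Minimal placeholder vector type and helpers.
    struct Vec3 { double x, y, z; };
    static Vec3 sub(const Vec3 &a, const Vec3 &b) { Vec3 r = {a.x - b.x, a.y - b.y, a.z - b.z}; return r; }
    static Vec3 cross(const Vec3 &a, const Vec3 &b)
    {
        Vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
        return r;
    }
    static double length(const Vec3 &v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
    // Area of the triangle spanned by three points.
    static double area(const Vec3 &a, const Vec3 &b, const Vec3 &c)
    {
        return 0.5 * length(cross(sub(b, a), sub(c, a)));
    }
    // True if p (already known to lie in the triangle's plane) is inside triangle abc.
    // Each barycentric coordinate is a ratio of areas; none is derived from the others.
    static bool insideTriangle(const Vec3 &p, const Vec3 &a, const Vec3 &b, const Vec3 &c)
    {
        double s  = area(a, b, c);
        double s1 = area(p, b, c) / s;
        double s2 = area(p, a, c) / s;
        double s3 = area(p, a, b) / s;
        // The .001 slack fends off numerical instability on edges.
        return s1 > 0 && s2 > 0 && s3 > 0 && (s1 + s2 + s3) < 1.001;
    }
    int main()
    {
        Vec3 a = {0, 0, 0}, b = {1, 0, 0}, c = {0, 1, 0};
        Vec3 in  = {0.25, 0.25, 0};   // inside the triangle
        Vec3 out = {1.0, 1.0, 0};     // outside the triangle
        return (insideTriangle(in, a, b, c) && !insideTriangle(out, a, b, c)) ? 0 : 1;
    }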
  • Nov 25 - Assignment 3 handed back. Discussion of scene properties and initializing your rays.
  • Nov 18 - Demos for assignment 3.
  • Nov 13 - Last lab before assignment 3 is due. I have run out of things to cover so it will be a come and get help with your assignment sort of lab. Feel free to bring in questions about the bonus.
  • Nov 6 - I'll be covering some miscellaneous bits and pieces for assignment 3. Namely more details on smooth shading (i.e., calculating vertex normals), placing the model, translating with the mouse, and using QTimer (and its glut equivalent) for animation.
  • Nov 4 - Today we'll cover mesh data structures and OpenGL Lighting. Sample code: lab14_lighting.cpp. Assignment 2 was handed back.
  • Nov 3 - Midterm Review at 7pm in ENC 123. Expect it to last about an hour.
  • Oct 30 - Overview of A3 and discussion of md2 header file.
  • Oct 28 - Assignment 2 Demos.
  • Oct 23 - Today's lab is just 'ask for help' time. If you've got A2 done you're welcome to stay at home. If A2 is giving you some troubles please come to lab and pick my brain.
  • Oct 16 - Today in lab I'm going to go through a lot of stuff that will be useful for assignment 2. Highlights include QT's file dialogs, and how to render wireframe parametric surfaces. Also I've coded up a little program for you that will generate control points for your tensor product surfaces: surfaceGen.cpp.
  • Oct 15 - Two quick notes. First of all, beware the code listed in my post from Oct 10. I've fixed it now (changes in bold) but I had referred to GL_PROJECTION and GL_MODELVIEW instead of GL_PROJECTION_MATRIX and GL_MODELVIEW_MATRIX. This caused glGet to fetch junk and gluUnProject then produced wonky results. Secondly, sorry to those who left first on Tuesday; I had assignments to give back but forgot I had them until reminded. Please remind me on Thursday if I don't bring it up at the start of lab.
  • Oct 14 - I'll be handing back assignment 1. I'll also be discussing QT, selecting control points, and using multiple GL windows.
  • Oct 10 - Okay, yesterday I ran out of time to talk about adding, moving, and deleting control points. Just like in the trackball example, the key to doing this is converting screen coordinates (i.e., where the mouse is clicked) to another coordinate system (i.e., the coordinates you are drawing in).

    So, let's start off with the lame way of doing this. The lame way is to set your orthographic projection to something very close to screen coordinates and then do a minimal amount of converting. Here's an example:
    (Somewhere in your reshape function)
    glOrtho(0,screenWidth,0,screenHeight,-1,1);
    (Somewhere in your mouse event handler)
    points.push_back(Point(x,screenHeight - y));
    Okay, with glOrtho we set the projection so that in our OpenGL window the coordinates we see will vary from 0 to screenWidth in the x direction and 0 to screenHeight in the y direction. The second line adds a new Point to some sort of global point list. Our conversion is easy, x is the same, and y we just have to adjust for 0 being at the bottom instead of the top.
    So why is this lame? Well, consider what happens if you draw a bunch of points starting at the extreme left and ending at the extreme right. If you then resize the OpenGL window to half its width, you only see the points that were on the left half of the screen; the rest remain hidden. The other, more complicated, reason is that you cannot be sure how large the coordinates of these points might be. For example, if I assume that everyone is using a 640x480 window then bad things may happen if someone is using a 1600x1200 window. It is better if we can scale the x and y coordinates to ensure all the points are within (0,1).
    We just did this sort of thing in lab for the trackball example when we mapped screen coordinates to the hemisphere. So we can do something like:
    float pointX = mouseX / (float)screenWidth;
    float pointY = (screenHeight - mouseY) / (float)screenHeight;
    Note that we probably will want to re-rig our projection so that we show this range of our coordinates.
    glOrtho(0,1,0,1,-1,1);
    Okay, this is better but we still have a problem. What if we've allowed users to rotate, scale, and translate as some of you did with your fractals? Well, then after this step we need to find the inverse of whatever transformations you have put on the modelview matrix. We could work through this, but it's tedious and there is a really nice way of doing this all auto-magically. That is, through a function called gluUnProject. This is contained in the glu library so be aware that you may need to #include <GL/glu.h> and add -lGLU to your compilation command line. So, gluUnProject's purpose is to undo the viewport, projection, and modelview matrices. It transforms screen coordinates (i.e., mouse clicks) into world coordinates (the place you are drawing to . . . i.e., the coordinates you use for your glVertex calls). So here is an example of how to use this:
    // assume you have mouse coordinates mx & my
    GLint viewport[4];
    GLdouble projMatrix[16], mvMatrix[16];
    glGetIntegerv(GL_VIEWPORT,viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX,mvMatrix);
    glGetDoublev(GL_PROJECTION_MATRIX,projMatrix);
    GLdouble wx, wy, wz;
    gluUnProject(mx, screenHeight-my, 0, mvMatrix, projMatrix, viewport, &wx, &wy, &wz);
    // now wx & wy contain the x,y coordinates in worldspace of your point.
    Clear?
    My advice is to spend an hour making yourself a screen in which you can add 2D points to a global list of points using this sort of code. If you're ambitious draw lines between the points and add functionality so you can move and delete points (probably you want add, move, and delete modes in your program). Good luck and please email me any questions you have about this.
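    For those who want a concrete starting point, here is a rough, self-contained glut sketch of that idea. The names (Point2D, points, and so on) are my own placeholders and this is not assignment code; a left click unprojects the mouse position and appends the result to a global point list.
    #include <GL/glut.h>
    #include <GL/glu.h>
    #include <vector>
    #include <cstddef>
    struct Point2D { double x, y; };          // placeholder point type
    std::vector<Point2D> points;              // global list of picked points
    int screenWidth = 640, screenHeight = 480;
    void display()
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glPointSize(4.0f);
        glBegin(GL_POINTS);
        for (std::size_t i = 0; i < points.size(); ++i)
            glVertex2d(points[i].x, points[i].y);
        glEnd();
        glFlush();
    }
    void reshape(int w, int h)
    {
        screenWidth = w;  screenHeight = h;
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 1, 0, 1, -1, 1);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }
    void mouse(int button, int state, int mx, int my)
    {
        if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        {
            GLint viewport[4];
            GLdouble projMatrix[16], mvMatrix[16];
            glGetIntegerv(GL_VIEWPORT, viewport);
            glGetDoublev(GL_MODELVIEW_MATRIX, mvMatrix);
            glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);
            GLdouble wx, wy, wz;
            // Flip y because glut reports the mouse y from the top of the window.
            gluUnProject(mx, screenHeight - my, 0, mvMatrix, projMatrix, viewport, &wx, &wy, &wz);
            Point2D p = { wx, wy };
            points.push_back(p);
            glutPostRedisplay();              // repaint so the new point shows up
        }
    }
    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
        glutInitWindowSize(screenWidth, screenHeight);
        glutCreateWindow("point picking sketch");
        glutDisplayFunc(display);
        glutReshapeFunc(reshape);
        glutMouseFunc(mouse);
        glutMainLoop();
        return 0;
    }
    Compile with something like g++ points.cpp -lglut -lGLU -lGL. Adding move and delete modes is then mostly a matter of finding the point in the list closest to the unprojected click.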
  • Oct 9 - Trackball and Point Picking. Example code: lab8_trackball.cpp.
  • Oct 7 - Demos for assignment 1; attendance is mandatory. If you cannot make it to lab for any reason, email me ASAP. All the demo involves is me having a computer ready with all of your submitted code. When your turn comes I'll get you to compile and demonstrate your program and all of its features to me. When is your turn? I'll start a list on the white board and leave you to fill in your names as you come in. Essentially it will be first come, first demo.
  • Oct 2 - OpenGL transformations and display lists. Example code: lab6_transform.cpp.
  • Sept 30 - OpenGL double buffering, glut fonts, and makefiles. Example code: lab4_double.cpp and Example Makefile.
  • Sept 25 - 10 days until your assignment is due. I'm going to go into some details for your assignment so you can make some progress this weekend. I'm also going to cover some basic OpenGL material. Example code for today: lab3_square.c and lab3_fractal.cpp.
  • Sept 23 - Finished up the discussion of the glut callback functions. Discussed the general layout of an OpenGL program (main, init, paint, resize), and went over how to use glut popup menus.
  • Sept 18 - A run through of C++, then a look at basic GLUT functionality. Here is today's code showing off some GLUT functionality (don't worry about the OpenGL yet, we'll get to that next lab).
  • Sept 16 - The first lab. General description of the course and its assignments. I'll be checking to see how much you know about C/C++, etc. Then I'll start introducing you to OpenGL and GLUT. Here is the first piece of code I'll give you.
Lab Notes
  • I've posted the notes I teach from here. I'll post them as I have opportunity (and time) to scan them in. Be aware that these are my notes to myself: they feature poor handwriting and cryptic reminders, and they skip things that I improvise.
Assignments
  • Current Assignment Marks can be found here
  • Assignment 1 - Fractals
    • Due Oct 5 @ 11:59 pm
    • Here are instructions for submitting your assignment. Please compress and tar your files before submitting. If your assignment was in a directory called a1 you'd tar the files by typing tar -cf a1.tar a1 and then compress the produced tar file with gzip a1.tar. Then you submit the resulting file (a1.tar.gz).
    • Note that you are requested to produce a readme file in pdf format. To do this Luke (the other TA) recommends using PDFCreator, a program that installs a virtual printer so you can create a PDF from any program with print functionality. On Linux machines you can do this using OpenOffice. Simply use the print to file option to print your document; this produces a ps file. Then you can convert this to a pdf on the command line by typing ps2pdf <filename.ps>.
    • Demos will be in lab Oct 7. You must attend and demo your program to me. If you cannot attend this lab for some reason, send me an email before Oct 7 and we'll work something out.

  • Assignment 2 - Modeling Curves & Surfaces
    • Due Oct 26 @ 11:59 pm
    • Please be sure to include instructions on how to compile your programs in your readme documents.
    • Demos will be in lab Oct 28.

  • Assignment 3 - MD2 Viewer
    • Due Nov 16 @ 11:59 pm
    • Demos will be in lab Nov 18.

  • Assignment 4 - Ray Tracing
    • Due Dec 7 @ 11:59 pm
    • If you are having troubles with ppm files, it may be due to a lack of a space between the magic number 'P6' and the image width. You can fix this yourself by adding this space in pixelmap.cpp (near the end of the file, search for P6) or you can get my fixed copy of this file here. A small sketch of a well-formed header appears after this list.
    • You can view ppm files in gimp on Linux machines. You used to be able to convert ppm to jpg with a little program called xv (e.g., xv test.ppm test.jpg) but I'm not sure this program is currently installed.
    • We're having a contest for the best ray-traced image. Check out the images from my 2003 lab here. Dr Brian Wyvill used to run a similar competition; see some of his submissions here and here.
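    Regarding the ppm note above, here is a tiny standalone sketch (my own example, not the assignment's pixelmap code) of writing a well-formed P6 file; note the whitespace separating the magic number, the dimensions, and the maximum value.
    #include <cstdio>
    int main()
    {
        const int width = 64, height = 64;
        std::FILE *f = std::fopen("test.ppm", "wb");
        if (!f) return 1;
        // Header: magic number, width, height, and max value, each separated by
        // whitespace. A missing space after "P6" is exactly the bug described above.
        std::fprintf(f, "P6\n%d %d\n255\n", width, height);
        for (int i = 0; i < width * height; ++i)
        {
            unsigned char rgb[3] = { 255, 128, 0 };   // a solid orange image
            std::fwrite(rgb, 1, 3, f);
        }
        std::fclose(f);
        return 0;
    }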