/home/projects/CS600.460/bin
The default command (which is broken) is:
ac3d write_POV /tmp/ac3dpovfile.pov ; ac3d execute {povray +D +I/tmp/ac3dpovfile.pov &}
You should change it to use a path under your home directory rather than /tmp.
Also, if you do not have /home/projects/CS600.460/bin in your path, you will need to use that full path in the above "povray" command (i.e. /home/projects/CS600.460/bin/povray).
If you don't have your own .ac3dprefs yet from running the program previously, you can copy mine at ~cohen/.ac3dprefs.
AC3D modeler home page
AC3D author (Andy Coulbourne)
cc -I/usr/local/include -L/home/projects/CS600.460/lib/sgi_irix.n32 -o cube cube.c -lglut -lGLU -lGL -lXmu -lX11 -lm
OpenGL Specification (Version 1.1)
OpenGL Reference Manual (the blue book)
SGI's OpenGL Site: pointers to tutorials and all sorts of interesting stuff
OpenGL.org Site: more tutorials, coding techniques, etc.
Paula Womack's Performance Techniques lecture
Dan Aliaga's SGI Infinite Reality performance tips and Performance Table (doesn't show well in our ghostview, but prints okay)
The video input to the HMD comes from the SGI machine nameless. In its current configuration, nameless has its display set to 1280x1024 resolution. The top 1280x480 block is sent to the HMD as well as to the main screen. Each eye gets a 640x480 block. You can set your application to bring up its graphics window in this exact location. You can even set up your window manager to bring up windows with a particular name string without any "decorations" or "borders" so that all you see is the image pixels. We may eventually rearrange this display configuration. If this happens, I'll post the new configuration.
The knobs on the top and back of the HMD allow you to tighten the HMD after you put it on and loosen it to remove it. You probably want it to fit fairly snugly on your head. You can adjust the field-of-view overlap with the two little knobs on the sides. Also, it's okay to wear the HMD while wearing glasses; they won't be scratched (but do not test this by making the HMD as absolutely tight as possible).
Again, turn off the power button when the HMD is not in use.
Sensor #0 is currently screwed onto the top of the HMD, and it will probably stay that way. Some time soon we will be attaching sensor #1 to the joystick device. It will probably be easily removable (using velcro or some such attachment) so you can possibly attach it to the custom device of your choice. (Getting buttons attached to a random device may be another matter entirely, but it simply depends on how resourceful you are.)
At present, the magnetic field appears to be extremely distorted, so
things look reasonable when you are standing directly under the tracker,
but things quickly begin to tip and scale in an odd way as you step away
from it. This may be due to the metal brackets the tracking source is
mounted to. Things work well enough for you to do your homeworks, and
they will hopefully be improved before you get too far into your term
projects.
Please peruse the VRPN documentation. Some of the high-level description is quite useful, though much of the documentation is just key excerpts from the header files.
/home/projects/CS600.460/bin/vrpn_server -f /home/projects/CS600.460/config/vrpn.cfg
Only one person may be connected to the server at a time (and I believe only one server may be running at a time). As soon as one client program dies, the next may connect to the same server process. (I think I modified the server last semester to always exit after the client dies because we were getting hung server processes that no one could connect to or kill.)
The joystick device plugs into the PC, which is running Windows NT. To read the button values of the joystick, you must run the button server on the PC. Log in to the PC using our class account. Click on the button server icon on the desktop to run the server.
Your client program can then read the button values by instantiating a vrpn_Button_Remote object with the string "Button0@cube.cs.jhu.edu". There are four readable buttons. You probably want to make sure the "turbo" switch on the joystick is turned off. When it is on, holding a button makes it toggle on and off repeatedly as long as you are holding. With it turned off (probably the way you want to use it), pushing a button triggers an event, as does releasing a button. Your registered callback function handles these events.
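The press/release event flow can be sketched with a tiny self-contained example. The ButtonEvent struct below is a hypothetical stand-in for the record VRPN hands your registered callback (the real one also carries a timestamp), just to show the dispatch pattern:

```cpp
#include <cassert>

// Hypothetical stand-in for the record VRPN passes to a registered
// button callback (the real one also carries a timestamp).
struct ButtonEvent {
    int button;  // which button (0-3 on our joystick)
    int state;   // 1 = pressed, 0 = released
};

// With the turbo switch off, each physical press yields exactly one
// press event and one release event; the handler dispatches on state.
struct ButtonCounts {
    int presses[4];
    ButtonCounts() { for (int i = 0; i < 4; ++i) presses[i] = 0; }
    void handle(const ButtonEvent &e) {
        if (e.state == 1 && e.button >= 0 && e.button < 4)
            ++presses[e.button];
    }
};
```

With turbo on, the same handler would see a whole stream of alternating press/release events for one held button, which is usually not what you want.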
My sample application is available at:
http://www.cs.jhu.edu/~cohen/VW2000/Misc/hw2sample.tar.gz
First, create the tracking data file by running the vrpn_server program (on nameless) as usual, followed by the vrpn_recorder program. vrpn_recorder is a vrpn client application which reads the tracking sensor and writes the results to a file called tracker.dat. You can quit recording by pressing <Control>-C. After quitting, you should probably rename the data file so you don't record over it later.
This file can now act as your tracker. Your application simply opens a different named tracker device to access this recorded data instead of the real tracker.
First, create your own config file with an entry as follows:
vrpn_Tracker_Canned Tracker0@machinename datafilename
Now, modify your application to instantiate the tracker using vrpn_Tracker_Remote("Tracker0@machinename").
(I haven't tried this yet, but you may be able to set machinename to "localhost" so you can use it on various machines without changing your code or the config file.)
When you are ready to run, run the vrpn_server program on your local machine (the one specified as "machinename"), using your new config file rather than the installed config file. Then run your client.
The client and server are now both running on your local machine, and the tracking data is read from the pre-recorded data file.
Because the vrpn_recorder program is not part of your application,
you can't see your model or any sort of hand icon while you are
recording. If you want to see what you are doing while you record, you
will have to build the data recording into your own application instead
of using the vrpn_recorder program. This is actually pretty easy. Check
out vrpn_recorder.C to see how this
is done.
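If you do fold recording into your own application, the heart of it is just a tracker callback that appends each report to a file. Here is a minimal self-contained sketch; the field layout and names are mine for illustration, not vrpn_recorder's actual format:

```cpp
#include <cassert>
#include <cstdio>

// Illustrative pose record: position plus orientation quaternion.
// (A vrpn tracker callback delivers essentially these fields.)
struct PoseSample {
    double pos[3];
    double quat[4];  // x, y, z, w
};

// Append one report to the data file; call this from your tracker
// callback while rendering the scene as usual.
bool record_sample(const char *path, const PoseSample &s) {
    std::FILE *f = std::fopen(path, "a");
    if (!f) return false;
    std::fprintf(f, "%g %g %g  %g %g %g %g\n",
                 s.pos[0], s.pos[1], s.pos[2],
                 s.quat[0], s.quat[1], s.quat[2], s.quat[3]);
    std::fclose(f);
    return true;
}
```

Opening and closing the file on every sample is slow but robust; keeping the FILE* open for the whole session is the obvious refinement.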
In my sample application, I used primarily the q_xyz_quat_type. This structure is just a quaternion plus a translation vector. I compose a bunch of these, and finally convert to an OpenGL-style matrix (qogl_matrix_type) before putting it onto the matrix stack. Note that the xyz_quat type limits you to rotations and translations. If you need to store other sorts of transformations, you'll probably have to go with a matrix representation for that portion of the transformation hierarchy.
/home/projects/CS600.460/src/quat
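Conceptually, composing two of these rotation-plus-translation transforms works as follows. This is a self-contained sketch with my own names, purely to show the math that quatlib's composition routine performs, not the library itself:

```cpp
#include <cassert>
#include <cmath>

// Sketch of a rotation-plus-translation transform, in the spirit of
// quatlib's q_xyz_quat_type (names here are mine).
struct Quat { double x, y, z, w; };          // rotation
struct XyzQuat { double xyz[3]; Quat rot; }; // rotation + translation

Quat quat_mult(const Quat &a, const Quat &b) {
    return {
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z
    };
}

// Rotate vector v by unit quaternion q:  v' = q v q^-1
void quat_rotate(const Quat &q, const double v[3], double out[3]) {
    Quat p  = {v[0], v[1], v[2], 0.0};
    Quat qi = {-q.x, -q.y, -q.z, q.w};
    Quat r  = quat_mult(quat_mult(q, p), qi);
    out[0] = r.x; out[1] = r.y; out[2] = r.z;
}

// C_from_A = C_from_B composed with B_from_A: rotate B_from_A's
// translation into C's frame, then add C_from_B's translation.
XyzQuat compose(const XyzQuat &C_from_B, const XyzQuat &B_from_A) {
    XyzQuat C_from_A;
    C_from_A.rot = quat_mult(C_from_B.rot, B_from_A.rot);
    quat_rotate(C_from_B.rot, B_from_A.xyz, C_from_A.xyz);
    for (int i = 0; i < 3; ++i) C_from_A.xyz[i] += C_from_B.xyz[i];
    return C_from_A;
}
```

The key point is that the translation part is not just added; it first gets rotated through the outer transform, which is why ordering matters so much in the hierarchy.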
Recall that if we construct a transformation from coordinate system B to coordinate system A, the transformation, A_from_B, is actually designed such that it would move the coordinate axes of system A to align with those of system B, not vice versa. This can be a bit confusing, so be careful. (Have you read the Robinett and Holloway paper yet?)
Also, the quaternion library thinks of the OpenGL matrix type as a matrix for multiplying column vectors, because that is the order in which it is stored in memory. So if you plan to compose these matrices together with the q_ogl_matrix_mult() command, you have to multiply in the reverse order:
A_to_C = A_to_B * B_to_C, rather than C_from_A = C_from_B * B_from_A. You could probably maintain the C_from_A notation by adding a new macro function as follows:
#define qogl_matrix_compose( A_from_C, A_from_B, B_from_C ) \
    qogl_matrix_mult( (A_from_C), (B_from_C), (A_from_B) )
Are you confused yet?
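To see the argument-swapping trick concretely, here is a self-contained sketch (my own code, not quatlib): mult_reversed() mimics a multiply routine whose arguments compose in reverse order, and the mat_compose macro wraps it so the from-notation reads naturally again. Matrices are stored OpenGL-style, column-major, with element (row, col) at m[col*4 + row]:

```cpp
#include <cassert>
#include <cmath>

// 4x4 matrices stored OpenGL-style: column-major, m[col*4 + row].
typedef double Mat4[16];

// Standard column-vector composition: result = a * b (tmp allows aliasing).
void mat_mult(Mat4 result, const Mat4 a, const Mat4 b) {
    Mat4 tmp;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            double s = 0.0;
            for (int k = 0; k < 4; ++k)
                s += a[k*4 + row] * b[col*4 + k];
            tmp[col*4 + row] = s;
        }
    for (int i = 0; i < 16; ++i) result[i] = tmp[i];
}

// Mimics a routine whose arguments compose in reverse:
// mult_reversed(r, x, y) computes r = y * x.
void mult_reversed(Mat4 r, const Mat4 x, const Mat4 y) {
    mat_mult(r, y, x);
}

// Wrapper restoring from-notation:
// mat_compose(C_from_A, C_from_B, B_from_A) gives C_from_A = C_from_B * B_from_A.
#define mat_compose(C_from_A, C_from_B, B_from_A) \
    mult_reversed((C_from_A), (B_from_A), (C_from_B))

void mat_translation(Mat4 m, double tx, double ty, double tz) {
    for (int i = 0; i < 16; ++i) m[i] = (i % 5 == 0) ? 1.0 : 0.0;
    m[12] = tx; m[13] = ty; m[14] = tz;  // last column holds translation
}

void mat_scale(Mat4 m, double s) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0;
    m[0] = m[5] = m[10] = s; m[15] = 1.0;
}

// Apply m to a homogeneous point p (column-vector convention).
void mat_apply(const Mat4 m, const double p[4], double out[4]) {
    for (int row = 0; row < 4; ++row) {
        out[row] = 0.0;
        for (int col = 0; col < 4; ++col)
            out[row] += m[col*4 + row] * p[col];
    }
}
```

The translation-times-scale case below does not commute, so it catches a mistaken argument order immediately.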
I tended to use the q_make() function to construct quaternions in my application. You specify an axis and a rotation angle about that axis. It's possible to express any change in orientation this way. I actually ended up composing two quaternions about the coordinate axes so I was sure to get it right, but if I knew the correct axis of rotation, I could have constructed the correct quaternion directly. Of course, by looking at the quaternion resulting from my composition, I could determine the axis pretty easily. Quatlib also allows you to specify a quaternion by providing two vectors. The resulting rotation will be the one which aligns the first vector with the second and rotates about an axis perpendicular to both. This occasionally comes in handy.
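For reference, the axis-angle construction that a routine like q_make() performs is simple enough to sketch in full (my own code and names, for illustration only):

```cpp
#include <cassert>
#include <cmath>

// Sketch of axis-angle quaternion construction: a unit quaternion for
// a rotation of `angle` radians about the axis (x, y, z).
struct Quat { double x, y, z, w; };

Quat quat_from_axis_angle(double x, double y, double z, double angle) {
    double len = std::sqrt(x*x + y*y + z*z);   // normalize the axis
    double s = std::sin(angle / 2.0) / len;    // half-angle convention
    return { x*s, y*s, z*s, std::cos(angle / 2.0) };
}
```

Note the half-angle: a 90-degree rotation about z gives components z = sin(45) and w = cos(45), which trips up almost everyone once.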
Given two orientations specified as quaternions, quatlib allows you to perform a spherical linear interpolation between them. That is to say, it allows you to animate the rotation (interpolate between the two extremes) by rotating along the shortest path between the two quaternions. You provide the two quaternions and a parameter between 0 and 1, and the routine returns the resulting quaternion. It probably allows you to do extrapolation as well (by giving parameters outside the [0-1] range).
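Spherical linear interpolation itself is compact enough to sketch in full. This self-contained version (my own code, not quatlib's routine) shows the one subtlety: q and -q represent the same rotation, so one quaternion is negated when the dot product is negative to take the shorter arc:

```cpp
#include <cassert>
#include <cmath>

struct Quat { double x, y, z, w; };

// Spherical linear interpolation between unit quaternions:
// returns qa at t = 0, qb at t = 1, shortest great-circle arc between.
Quat slerp(Quat qa, const Quat &qb, double t) {
    double dot = qa.x*qb.x + qa.y*qb.y + qa.z*qb.z + qa.w*qb.w;
    // q and -q are the same rotation; flip one to take the short way.
    if (dot < 0.0) {
        qa.x = -qa.x; qa.y = -qa.y; qa.z = -qa.z; qa.w = -qa.w;
        dot = -dot;
    }
    if (dot > 0.9995) {  // nearly parallel: plain lerp avoids dividing by ~0
        return { qa.x + t*(qb.x - qa.x), qa.y + t*(qb.y - qa.y),
                 qa.z + t*(qb.z - qa.z), qa.w + t*(qb.w - qa.w) };
    }
    double theta = std::acos(dot);
    double sa = std::sin((1.0 - t) * theta) / std::sin(theta);
    double sb = std::sin(t * theta) / std::sin(theta);
    return { sa*qa.x + sb*qb.x, sa*qa.y + sb*qb.y,
             sa*qa.z + sb*qb.z, sa*qa.w + sb*qb.w };
}
```

Values of t outside [0, 1] extrapolate along the same arc, which is the basis of simple predictive tracking.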
The standard quaternion reference is:
Shoemake, Ken. "Animating Rotation with Quaternion Curves". Proceedings of SIGGRAPH 85. pp. 245-254.
Using Quaternions to Represent Rotation