Theo Verelst Graphics Page

by M.T. Verelst

Some results from my graphics experiments and developments. I worked in the area of graphics at Delft University some time ago, and I graduated as an EE (network theory) with a graphics subject. Obviously I like graphics, and modern computers are getting more and more capable of rendering serious computer graphics scenes which were only a dream in the time of Foley and van Dam, but actually getting those graphics to appear on your screen (or beamer, printer, or web page) is still a challenge. Using existing software is not easy for the average person, and even with commercial and classical packages (Maya, Radiance, SM packages) it is not trivial to arrive at photorealistic renderings. So that is a nice challenge! Another major challenge is to get the powerful new hardware that is available to the general public to actually show some stuff, which of course can be done by buying a hefty PC with a strong PCI Express graphics card and DOOM 3 or so, but that is not all there is to graphics. Looking at early and more recent classical graphics material tells more than what gamers are used to.

Don't forget to check out my SIGGRAPH page.

Hardware

I've started some work which I recently sent in to SIGGRAPH; see this page.

Cygwin

To use a recent NVidia card (a 6700, with a Pentium D 3 GHz and a 945 chipset) under Windows, where it seems most demo sources are targeted, I recently tried and after a short while abandoned the (free) Microsoft Visual C environment. What a mess and an incredible file set to get something going... I wiped the whole VC and SDK from disk and started afresh with the for me well-known Cygwin, a Unix (Linux)-like environment with the GNU compiler for Windows (in this case XP). With a little work I got some essential examples to compile and run at a speed competitive with the downloadable executables; by using MinGW (-mno-cygwin) the cygwin.dll compatibility problem can be avoided, since the examples don't require Unix calls.

I made a working makefile for the simple_soft_shadows example from the Nvidia SDK with Cg example code; see the example browser.


# Makefile for nvidia opengl sdk example to compile under Cygwin
# c program files nvidia or so, assuming sdk and cg thereunder
nvidiadir=../../../../../
#ccopts:= -mno-cygwin -O3
ccopts:= -mno-cygwin
libdirs:= -L $(nvidiadir)/SDK\ 9.5/LIBS/lib/Release \
-L $(nvidiadir)/Cg/lib \
/usr/lib/mingw/libcoldname.a
incs1:= -I $(nvidiadir)/SDK\ 9.5/inc/ -I $(nvidiadir)/SDK\ 9.5/DEMOS/OpenGL/inc \
-I $(nvidiadir)/SDK\ 9.5/LIBS/inc/ -I $(nvidiadir)/SDK\ 9.5/Cg/include
incs2:= -I $(nvidiadir)/SDK\ 9.5/DEMOS/OpenGL/inc -I $(nvidiadir)/SDK\ 9.5/inc/ \
-I $(nvidiadir)/SDK\ 9.5/LIBS/inc/

all: simple_soft_shadows.exe

simple_soft_shadows: simple_soft_shadows.exe

simple_soft_shadows.exe: nv_png.o read_text_file.o pbuffer.o data_path.o \
array_texture.o simple_soft_shadows.o ppm.o
	g++ $(ccopts) -o simple_soft_shadows.exe \
	nv_png.o read_text_file.o pbuffer.o data_path.o \
	array_texture.o simple_soft_shadows.o ppm.o \
	$(libdirs) \
	-lpngMTDLL -lzlib -lglut32 -lglu32 -lopengl32 -lgdi32 -lcg -lm

simple_soft_shadows.o: simple_soft_shadows.cpp
	g++ $(ccopts) $(incs2) -c simple_soft_shadows.cpp

nv_png.o: ../shared/nv_png.cpp
	g++ $(ccopts) $(incs1) -c ../shared/nv_png.cpp

array_texture.o: ../shared/array_texture.cpp
	g++ $(ccopts) $(incs1) -c ../shared/array_texture.cpp

data_path.o: ../shared/data_path.cpp
	g++ $(ccopts) $(incs1) -c ../shared/data_path.cpp

pbuffer.o: ../shared/pbuffer.cpp
	g++ $(ccopts) $(incs1) -c ../shared/pbuffer.cpp

read_text_file.o: ../shared/read_text_file.cpp
	g++ $(ccopts) $(incs1) -c ../shared/read_text_file.cpp

ppm.o: ppm.c
	g++ $(ccopts) -c ppm.c

clean:
	rm -f *.o core

realclean:
	rm -f *.o simple_soft_shadows.exe core

I wrote and added the ppm image code, which is not part of the original example, and I added (to the main C++ source simple_soft_shadows.cpp) saving of the graphics images by pressing the space bar. See below for the source, and further below for the use of the same type of addition to make an MPEG movie of an OpenGL/shader graphics animation.

 

In this case the image appeared upside down because the line order had to be reversed, which is also done in the source code, available in this zip file. I don't guarantee it compiles right out of the box, but for me it compiled with only a list of warnings from the linker about an old float-to-long conversion, which is Windows specific and for which I had to hack a few lines of conversion code into the source (check Google for ftol2). I've also added a command line option to specify another texture (in PNG format).
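
As a rough sketch of the kind of addition described above (the actual code is in the zip file; the function names, the output file name and the GLUT key handler here are only illustrative), a space-bar handler can grab the frame buffer with glReadPixels and write a binary ppm with the line order reversed:

#include <stdio.h>
#include <stdlib.h>
#include <GL/glut.h>

/* Write the current frame buffer as a binary ppm (P6) file.
   glReadPixels returns rows bottom-up, so the lines are written in
   reverse order to get a right-side-up image. */
static void save_ppm(const char *name, int w, int h)
{
    unsigned char *pix = (unsigned char *) malloc(w * h * 3);
    FILE *f = fopen(name, "wb");
    int y;

    if (!pix || !f) { free(pix); if (f) fclose(f); return; }
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pix);
    fprintf(f, "P6\n%d %d\n255\n", w, h);
    for (y = h - 1; y >= 0; y--)              /* reversed line order */
        fwrite(pix + y * w * 3, 1, w * 3, f);
    fclose(f);
    free(pix);
}

/* Hooked into the example's GLUT keyboard callback. */
static void keyboard(unsigned char key, int x, int y)
{
    if (key == ' ')
        save_ppm("screenshot.ppm",
                 glutGet(GLUT_WINDOW_WIDTH), glutGet(GLUT_WINDOW_HEIGHT));
}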

You need the examples from Nvidia and also the Cg compiler and toolkit (or whatever they call it), also to run the program: it reads shaders and images from their source tree. But then you can also compile recent shaders and serious, up-to-date OpenGL graphics programs yourself, and with the optimizer on even at competitive speed.

 

The examples below (under Linux) can also be compiled with Cygwin! The use of cjpeg is a lot slower in practice when a version without a DOS window is compiled (I didn't check the timing for a version started from a bash command line) because each system() call then takes about half a second to start up. On Linux I can do almost live (30 fps or so, using a ramdisk) rendering with per-frame conversion instead...

The examples below can be compiled under Cygwin by:
   gcc -mno-cygwin -I ../../include/ -o cgGL_vertex_example2.exe cgGL_vertex_example.c \
-L ../../lib/ -lglut32 -lglu32 -lopengl32 -lgdi32 -lm -lcg -lcgGL -Wl,--subsystem,windows
assuming the source is in the SDK file tree, with Cg installed under the same main directory.

Linux

It is nice to be able to run a professional enough UNIX variant like Fedora Linux, because the development times and tool efficiency are almost incomparable with the Windows environment, though I'm sure one's mileage may vary, and of course taste is hard to discuss. Long ago I gave some Unix courses and had become accustomed to working on professional Unix workstations, which I liked at university, and which of course is worth it: almost everything (say: everything) from Windows and DOS comes from that world, and usually not in much improved form (say: worse).

So I was quite interested to run the recent free (no quotes) Linux variant FC4, which is serving you this page, and wanted to also run graphics programs on it, and better yet: compile my own. Luckily, thanks to the widespread use of OpenGL to drive graphics cards, and the work of people to make an open source version available (in fact in the past I compiled (not wrote..) my own on Cygwin, too, when that was an issue), pretty good use of OpenGL is possible on Linux, and I found that shaders can also be used, like Cg, which enables programming of the powerful shader units in cards like the Nvidia 6000/7000 series.

Because of the availability of the cjpeg and ffmpeg tools I had the idea of enabling myself to do my own computer animations by storing images from self-compiled (card-accelerated) graphics programs in JPEG and MPEG movie formats, for fluent playback and web viewing, or storage and presentation.

So I compiled one of the Nvidia examples (maybe taken from an OpenGL example) and added ppm image saving, and jpeg and ffmpeg encoding (as system calls), and even added reading a different texture per frame from ppm files (converted on the fly from a sequence of jpegs), which can be generated automatically from a movie file using ffmpeg. I didn't change much about the OpenGL code; it's just a simple sphere (with more polygons than in the original) with a polar (u,v) texture on it, except I didn't use a generated checkered texture but an image-based one, from an (uncompressed) ppm image file. Also, the program runs for a specified number of frames, at each frame making a new ppm texture from a numbered sequence of jpegs, and generating increasingly numbered output jpeg images.
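
A minimal sketch of that per-frame structure (the real code is in the source linked below; the file names, frame numbering, helper functions and the use of djpeg/cjpeg here are assumptions of this sketch):

#include <stdio.h>
#include <stdlib.h>

#define NFRAMES 360   /* run for a specified number of frames */

/* helpers assumed to exist elsewhere in the program */
void load_ppm_texture(const char *name);        /* bind a ppm file as GL texture */
void render_frame(int frame);                   /* draw the shaded, textured sphere */
void save_ppm(const char *name, int w, int h);  /* as in the earlier sketch */

void animate(int width, int height)
{
    char cmd[512];
    int frame;

    for (frame = 0; frame < NFRAMES; frame++) {
        /* convert this frame's input jpeg to an uncompressed ppm texture */
        sprintf(cmd, "djpeg -pnm %08d.jpg > frametex.ppm", frame + 1);
        system(cmd);
        load_ppm_texture("frametex.ppm");

        render_frame(frame);

        /* read back the rendered image and compress it to a numbered jpeg */
        save_ppm("frameout.ppm", width, height);
        sprintf(cmd, "cjpeg -quality 90 frameout.ppm > imageout%d.jpg", frame);
        system(cmd);
    }
    /* afterwards ffmpeg turns the imageout%d.jpg sequence into an mp4,
       see the command lines further down */
}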

The result, except for the web server scripts discussed below, is for the moment available as Open Source software. To compile and run it you need to have the Cg SDK installed, which cannot be yum-ed, but it is available as an rpm from Nvidia for 64-bit systems, which worked for me; also mind the path variables to the library.

The first simple example, showing a vertex-shaded cube, looks like this

 
and outputs its compiled shader code:
~/Opengl/runtime_ogl [1085] $ ./cgGL_vertex_example
LAST LISTING----(null)----
---- PROGRAM BEGIN ----
!!VP1.1
# NV_vertex_program generated by NVIDIA Cg compiler
# cgc version 1.4.0000, build date Sep 26 2005 22:19:13
# command line args: -q -profile vp20 -entry main
# source file: cgGL_vertex_example.cg
# nv30vp backend compiling 'main' program
#vendor NVIDIA Corporation
#version 1.0.02
#profile vp20
#program main
#semantic main.Kd
#semantic main.ModelViewProj
#var float4 Kd :  : c[4] : 1 : 1
#var float4x4 ModelViewProj :  : c[0], 4 : 2 : 1
#var float4 IN.position : $vin.POSITION : ATTR0 : 0 : 1
#var float3 IN.normal : $vin.NORMAL : ATTR2 : 0 : 0
#var float3 IN.color : $vin.DIFFUSE : ATTR3 : 0 : 0
#var float3 IN.TestColor : $vin.SPECULAR : ATTR4 : 0 : 1
#var float4 main.HPOS : $vout.POSITION : HPOS : -1 : 1
#var float4 main.COL0 : $vout.COLOR0 : COL0 : -1 : 1
#const c[5] = 1 0 0 0
        DP4 o[HPOS].x, c[0], v[0];
        DP4 o[HPOS].y, c[1], v[0];
        DP4 o[HPOS].z, c[2], v[0];
        DP4 o[HPOS].w, c[3], v[0];
        MUL o[COL0].xyz, c[4].xyzx, v[4].xyzx;
        MOV o[COL0].w, c[5].x;
END
# 6 instructions
# 0 temp registers
---- PROGRAM END ----
At least it looks like decent assembly code; I didn't spend much time analysing it yet, but it is quite a little supercomputer in the heavier cards.
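
For reference, a compiled listing like the one above can be obtained from the Cg runtime roughly as follows; this is a minimal sketch, not the actual example source, although the source file name, entry point and vp20 profile match the listing:

#include <stdio.h>
#include <Cg/cg.h>

/* Compile a Cg vertex shader for the vp20 profile and print the
   assembly produced by the compiler (similar to the listing above). */
int main(void)
{
    CGcontext ctx = cgCreateContext();
    CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE,
                         "cgGL_vertex_example.cg",
                         CG_PROFILE_VP20, "main", NULL);

    if (!prog) {
        fprintf(stderr, "%s\n", cgGetErrorString(cgGetError()));
        return 1;
    }
    printf("%s\n", cgGetProgramString(prog, CG_COMPILED_PROGRAM));

    cgDestroyProgram(prog);
    cgDestroyContext(ctx);
    return 0;
}

Linking this only needs -lCg; the full example also links -lCgGL and -lglut, as in the compile line below.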

The other example is the animation example with the added code; see below for an example output. It is this code which is used for the advanced (still under test) web server example. The example from the SDK is called runtime_ogl_vertex_fragment, which on Linux can be compiled by:
    g++ -o demo demo.cpp -L /usr/lib64  -lglut -lCg -lCgGL -lpthread -lm
assuming Cg is installed. The source of the adapted example is here. After compiling, make sure it can find a sufficiently large set of input images, which it will read in as a movie texture (see here in the code), and make a movie of the resulting jpeg images:

   ffmpeg -i test.avi -me full -cropleft 128 -cropright 128 -croptop 32 -cropbottom 32 \
-deinterlace -s 512x512 -aspect 1.0 -b 15000 -t 15 -y test.mp4
   mplayer -nosound -ss 00:05.0 -frames 361 -vo jpeg:quality=90:maxfiles=1000 test.mp4
   ./demowebani
   exec ffmpeg -i imageout%d.jpg -b 5000  -y animation.mp4

You need to have cjpeg, mplayer and ffmpeg installed. On Cygwin, cjpeg is part of the distribution; I've compiled ffmpeg myself, though maybe it can be found pre-compiled. Fedora Core 4-64 has mplayer and ffmpeg as yum-installable packages. The resulting movie in animation.mp4 contains an animation of the input movie mapped on the rotating sphere, with shader effect.

Webserver Rendering

This is what I'm working on right now, among other things: making a web server (in this case an Apache 2 server running on an Athlon 3G-64, Red Hat Fedora Core 4 64-bit) which allows surfers to make their own 3D images and even movies, generated automatically at interaction time from web pages. This movie is an example of a result from feeding a 10-second or so MPEG movie into a textured graphics example, rendered with a Cg shader using NVidia 6200 graphics card acceleration.


  click here to download mpeg4 movie

This is a still from the short movie (made in France) which served as the moving texture mapped on the sphere above:

 

Rendering the whole result movie (the input movie projected as a texture, with the shader glare) into mp4-compressed (live-downloadable) format took about 3 times real time in this case (about half a minute for 10 seconds). The movie compression doesn't run in parallel with the rendering, and there are intermediate stages of ppm and jpeg images, so there is room for speeding the rendering up. The target is clear: a real-time downloadable movie created in (close to) real time by the server!

When the (experimental) server is in a state where it can render examples (which should currently be the case most of the time), this example should work 'live', though at the moment there are no parameters to adjust, so it makes no more sense than a statically served image, nor does it necessarily prove much:

 

This link starts the program and feeds the graphics card output back to your browser: http://www.theover.org/cgi-bin/gr.tcl .

Note that such a result may well (in this case even over the web!) be much faster than a similar rendering with, let's say, POVRay or so, because of the use of the graphics card. I now need to make parameters (maybe shaders) programmable, and possibly provide an uplink possibility for models and textures.

The graphics software and build files will be Open Source / Free software. The server scripts are not. For inquiries, see the email address below; it is actively used.


home page       email: theover@tiscali.nl   Refer to this page by: http://www.theover.org/Graphics