The RealSoft3D ray-tracing process - Part 1: Introduction
By David Coombes
d.g.coombes@btopenworld.com
RealSoft3D provides perhaps the most powerful ray-tracing engine available in any 3D rendering package. Its open-ended design gives the user access to the various stages of the ray-tracing process, so a render's appearance can be controlled in a remarkably versatile manner. This allows convincing photo-realism, cartoon-style shading, textured line-art, or any number of new styles without the need for expensive plugins or custom renderers.

To make the most of this system you really need to understand how RealSoft3D generates images from the 3D data provided through scene modelling, and how this fits in with RealSoft3D's primary control mechanism, the Visual Shader Language (VSL).
This document is designed to introduce
and explain RealSoft3D's processes. To help
with the explanation I'll use as an example a simple
scene consisting of a red sphere on a blue
and white chequered ground, with a white
wall behind. There is one light source from
the front and left of the scene. The camera
is placed to view the scene straight on
with a perspective view and square aspect
ratio.
[Image: The scene, showing the camera's view as dotted lines]
[Image: The scene as rendered]
This document does not assume prior knowledge
of 3D computer graphics although it will
assume knowledge of basic VSL as covered
in the RealSoft3D manual. It will also make
use of standard computer-graphic terminology
such as pixels. Finally, some of the explanations are not strictly accurate with respect to the mathematical mechanics of image generation or the exact order of events, but are deliberately simplified to aid understanding and help with visualizing the process. I hope this document explains the process well enough that you can really tap the power provided by RealSoft3D!
Casting Rays
First things first, RealSoft3D
is a ray-tracer. This means that it
creates images by tracing rays. A
ray can be thought of as a straight
line emanating from a source and proceeding
until it hits an object. When tracing
an image, a ray is cast for each pixel
of the image. For example, let's render
our scene as a 10 by 10 pixel image.
For the first pixel in the top left
corner, a ray is cast from the camera
(see the image on the right; the ray is the green line). The angle at which this ray is sent is determined by the camera's
properties (wide-angle lenses send
rays out in large angles, and telephoto
lenses send rays in a generally forward
direction). This ray intersects with
the white wall. The value of this
part of the wall's 'illumination'
is mid-grey, and so that is the colour
of the first pixel. The second
pixel in the first row is then traced.
Again a ray is cast, which hits the
white wall and returns a grey value
for the second pixel. The rest of
the top row of pixels are evaluated
in the same way, all returning the
same grey value.
After evaluating the top row, the
next rows are evaluated by the same
ray-tracing method, left-to-right,
top-to-bottom. On the first pixel
of the fourth row, instead of the ray
intersecting with the wall, the ray
does not intersect with any object.
This returns nothing as a value, and
so the pixel is rendered black. The
image on the left shows the result of this process when completed. You can see that where a ray has hit the sphere the pixel is red, and the various greys and blues of the chequered floor appear where the traced rays intersect it.
[Image: The ray-tracing procedure subdivides the scene into individual pixels]
Okay. So to produce an image we send rays
to "scan" a scene and report back
on the illumination: the amount of light
at that point in the scene. This "report"
uses RealSoft3D's incredible (both in versatility
and complexity!) channel system.
An Explanation of Channels
Every object in your scene, whether geometry,
lights or NURBS curves, has a collection
of properties. The most obvious is
colour, and there are others like
transparency and reflectivity.
This information is stored as numerical
data in channels. 'Channels' may be a misleading name; you can think of them simply as fields in a database record.
If you open up the properties for an object
and look under the "Col" tab,
you will see at the top the surface properties
as defined for that object. The default
attribute is Color. For the red sphere
the value in this field was set to (1,0,0).
We can therefore say that the sphere's Color
channel is (1,0,0). If you click the attribute
selector you can see a long list of different
properties, all of them referring to different
channels. The Transparency channel
for the sphere has a value of (0,0,0) -
there is no transparency. The Illumination
channel also has a value of (0,0,0), meaning that no light originates from the sphere itself.
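As an illustration of the 'database fields' idea, the short Python sketch below pictures an object's surface channels as a simple record. The channel names follow those discussed here, but the structure itself is an assumption made for explanation, not RealSoft3D's internal format.

    from dataclasses import dataclass

    # Illustrative only: a surface's channels pictured as fields in a record.
    @dataclass
    class SurfaceChannels:
        color: tuple = (1.0, 1.0, 1.0)
        transparency: tuple = (0.0, 0.0, 0.0)
        illumination: tuple = (0.0, 0.0, 0.0)

    # The red sphere's default channel values from the example scene.
    red_sphere_defaults = SurfaceChannels(color=(1.0, 0.0, 0.0))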
Evaluating Surfaces
When we raytrace the image, we use the
information in the channels to build up
the picture. Most importantly, we use
the value of the Illumination channel
as this tells us how much light is coming
from the surface (whether that light is reflected from a light source or radiated by the surface itself). The 10 by 10 pixel image shows
the surface's Illumination channel
at that point. But if the Illumination
channel for the sphere is (0,0,0), how come
it's rendered red? That brings us to the next important point!
Changing Channel Values
A fundamental aspect of channels is that
their information is not static. The
channel values are initialised at
the beginning of rendering but may change
either as a result of RealSoft3D's default
rendering calculations or as a result of
a VSL material that modifies this data.
The sphere's Illumination channel
is initialised to (0,0,0). But when
the camera ray is cast and terminates at
the sphere, another step is entered into. This
step evaluates the effect of light sources. It calculates the angle at which rays of light from the light source strike the
surface of the sphere. It uses this
angle to calculate how much light would
be reflected back towards the camera. And
finally, it multiplies this amount of light
by the Color of the surface being
evaluated. Here's the obligatory example.
In this diagram three light rays are marked,
A, B and C. The single light source
is of brightness 1.0 and colour (1,1,1).
This information, like all RealSoft3D data,
is stored in channels but I'll come back
to this later. Suffice it to say for now that the value of the light reaching the sphere is white, (1,1,1).
At point A, 100% of the light is reflected
back to the camera. That means 100%
of (1,1,1) which of course is (1,1,1). This
is the value of the light's Illumination.
Multiplying this by the surface's Color
(1,0,0) gives: (1,0,0). This is the
value of the surface illumination at this
point and this is added to the surface's
Illumination channel. For point
B, about 50% of the light is reflected from
the surface towards the camera. 50%
of (1,1,1) is (0.5,0.5,0.5) which is multiplied
by the surface's Color channel. The result, (0.5,0,0), is added to the surface's Illumination channel.
Finally, for point C, none of the
light is reflected back to the camera. So
the surface's Color channel is multiplied by (0,0,0) to give (0,0,0), which is added to the surface's Illumination channel.
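Expressed as a calculation, this per-point step could look like the following Python sketch. The reflected fractions for points A, B and C (100%, 50% and 0%) come from the example above; the function itself, and the use of a single reflection fraction in place of RealSoft3D's actual shading mathematics, are simplifying assumptions.

    # Illustrative sketch: light colour, reflected fraction and surface
    # colour combine into the surface's Illumination channel.
    def add_illumination(illumination, light_colour, fraction, surface_colour):
        # Scale the light by how much of it is reflected towards the camera...
        reflected = tuple(c * fraction for c in light_colour)
        # ...multiply by the surface's Color channel...
        contribution = tuple(r * s for r, s in zip(reflected, surface_colour))
        # ...and add the result to the Illumination channel.
        return tuple(i + c for i, c in zip(illumination, contribution))

    light_colour = (1.0, 1.0, 1.0)      # white light, brightness 1.0
    sphere_colour = (1.0, 0.0, 0.0)     # the red sphere's Color channel

    point_a = add_illumination((0, 0, 0), light_colour, 1.0, sphere_colour)  # (1.0, 0.0, 0.0)
    point_b = add_illumination((0, 0, 0), light_colour, 0.5, sphere_colour)  # (0.5, 0.0, 0.0)
    point_c = add_illumination((0, 0, 0), light_colour, 0.0, sphere_colour)  # (0.0, 0.0, 0.0)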
An important point to realise here is that there exists a record of channel values for each and every ray cast. It is easy to fall into thinking that the Color channel of an object is shared by the whole of that object's surface, i.e. that there is only one instance of the Color channel. In fact, the properties of the object specify default values that are copied into the relevant channels each time a ray is traced.
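The sketch below pictures this per-ray copying, reusing the SurfaceChannels record from the earlier sketch; again, this is a simplified illustration rather than RealSoft3D's actual mechanism.

    from dataclasses import replace as copy_record

    # Illustrative only: every ray that hits the sphere gets its own fresh
    # copy of the object's default channel values for lighting to modify.
    def channels_for_hit(object_defaults):
        return copy_record(object_defaults)    # independent per-ray record

    ray_1_channels = channels_for_hit(red_sphere_defaults)
    ray_2_channels = channels_for_hit(red_sphere_defaults)
    # Changing ray_1_channels.illumination does not affect ray_2_channels.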
Post Processing
This
evaluation of the scene by tracing rays
goes through one last step to produce the
final image - the Post Processing stage.
Although post-processing suggests manipulation of the data, the first action of post-processing is to turn the channel data into image data. Before rendering a scene in RealSoft3D you need to have a Post Image effect in place; there is always one active, usually the default. The effects of the post-action are applied at the end of a traced ray, where its first action is to copy the value from the surface's Illumination channel to the image's Color channel. After this, further items within the Post Image effect adjust the value in the image's Color channel: adjusting the brightness, adding or changing values due to the presence of particle effects, blurring the Color channel, and so on. The value of this channel gives the colour of the pixel in the final image.
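As a rough illustration of this stage for a single traced ray (again in Python rather than VSL, with a simple brightness scale standing in for whatever further post-image items are active), consider:

    # Illustrative sketch of the post-image stage for one traced ray.
    def post_image(surface_illumination, brightness=1.0):
        # First action: copy the surface's Illumination channel
        # into the image's Color channel.
        image_color = surface_illumination
        # Further post-image items may then adjust that value; a simple
        # brightness scale stands in for them here.
        image_color = tuple(min(c * brightness, 1.0) for c in image_color)
        return image_color

    pixel_color = post_image((0.871, 0.0, 0.0))   # -> (0.871, 0.0, 0.0)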
Overview
Before going a little more in-depth into the process, let's
end this part with an overview of what's
been said so far. Consider the rendering
of the fourth row down in the 10 by 10 pixel image ray-traced above.
- For the first pixel, a ray
is cast. This ray does
not intersect with any geometry,
so the value in the surface's
Illumination channel
is left at (0,0,0). This
value is copied into the image's
Color channel for that
pixel.
- The next ray is cast which
intersects with the wall. The
Color channel of the
wall is (1,1,1), and 48.4% of
the light from the light source
is reflected back into the camera
lens. Therefore, 48.4%
of Color is added to
the surface's Illumination
channel, (0.484,0.484,0.484).
This value is copied to the
final image's Color channel.
- The third pixel is evaluated to the same result as the second pixel, so the image's Color channel for this pixel is also (0.484,0.484,0.484).
- The fourth pixel is the same as the third: the image's Color channel for this pixel is (0.484,0.484,0.484).
- The ray cast for the fifth
pixel intersects with the red
sphere, Color (1,0,0).
87.1% of the light is reflected
back towards the camera and
the Illumination channel
for this surface has (0.871,0,0)
added to it. At the post-image
stage, the Color channel
is a copy of this.
- The sixth ray hits the sphere
where the reflected light is
71%. The Illumination
channel has added to it (0.71,0,0),
and the image's Color
channel is a copy of this.
- The seventh ray finds the
sphere reflecting 45.3% of the
light, giving a final image
Color channel value of
(0.453,0,0).
- The ray cast for the eighth pixel of this row finds the sphere in its own shadow. No
light is reaching this part
of the surface, so the Illumination
channel remains at (0,0,0) and
the final image Color
channel copies this value.
- The last two rays cast for
the ninth and tenth pixels terminate
at the wall. They are
evaluated to the same value
as the other wall pixels and
the Color channel for
the image for these two pixels
is (0.484,0.484,0.484).
At the end of tracing this row, the values
in the Color channel for the image
are:
Pixel           |   1   |   2   |   3   |   4   |   5   |   6   |   7   |   8   |   9   |  10
Red Component   |   0   | 0.484 | 0.484 | 0.484 | 0.871 | 0.71  | 0.453 |   0   | 0.484 | 0.484
Green Component |   0   | 0.484 | 0.484 | 0.484 |   0   |   0   |   0   |   0   | 0.484 | 0.484
Blue Component  |   0   | 0.484 | 0.484 | 0.484 |   0   |   0   |   0   |   0   | 0.484 | 0.484
In the compilation of the final image, the colour of the pixels is calculated from the image Color channel values. A value of 1.0 means 100% which, in a 24-bit colour image, means a value of 255. The value of the Color channel is therefore multiplied by 255 to get the resultant pixel colour. For pixel 1, the resultant pixel colour components are all 0.0 * 255 = (0,0,0). For pixel 2, they are all 0.484 * 255 ≈ (123,123,123). For pixel 5, the red component is 0.871 * 255 ≈ 222; the Green and Blue components are both 0, so the resultant pixel colour is (222,0,0).
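A quick sketch of this conversion from channel values to 24-bit component values (illustrative Python, assuming simple clamping and rounding, which may differ from RealSoft3D's exact quantisation):

    # Illustrative only: converting image Color channel values (0.0 to 1.0)
    # into 24-bit pixel component values (0 to 255).
    def to_24bit(colour):
        return tuple(round(max(0.0, min(1.0, c)) * 255) for c in colour)

    print(to_24bit((0.0, 0.0, 0.0)))        # pixel 1 -> (0, 0, 0)
    print(to_24bit((0.484, 0.484, 0.484)))  # pixel 2 -> (123, 123, 123)
    print(to_24bit((0.871, 0.0, 0.0)))      # pixel 5 -> (222, 0, 0)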
This conversion from image Color channel data to output image colour values occurs when the output of the rendering is a 24-bit colour image. RealSoft3D can output to greater colour depths, in which case the values of the Color channel are multiplied by the 100% value for the colour depth in use. The final order of events is:
1. A ray is cast.
2. The ray terminates when it hits a surface. Surface properties are initialised.
3. Lighting calculations add the degree of illumination to the surface's Illumination channel.
4. The post-effect is entered into. The surface's Illumination channel is copied to the image's Color channel.
5. Final image: the pixel colour is derived from the image's Color channel.
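Putting those five steps together for one pixel, a compact (and, as before, purely illustrative) Python sketch reusing the hypothetical helpers from the earlier sketches could read as follows; the hit and light fields are likewise assumed.

    # Illustrative end-to-end sketch of one pixel, combining the steps above.
    def trace_pixel(camera, scene, light, col, row, width, height):
        ray = camera.ray_for_pixel(col, row, width, height)   # 1. cast ray
        hit = scene.first_intersection(ray)                   # 2. find a surface
        if hit is None:
            return (0, 0, 0)                                  # no hit: black pixel
        channels = channels_for_hit(hit.object_defaults)      # 2. initialise channels
        channels.illumination = add_illumination(             # 3. lighting
            channels.illumination, light.colour,
            hit.reflected_fraction, channels.color)
        image_color = post_image(channels.illumination)       # 4. post-effect
        return to_24bit(image_color)                          # 5. pixel colour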
Summary
RealSoft3D stores surface properties in
channels. When rendering an image,
rays are cast into a scene, one ray
for each pixel. When these rays hit
a surface, calculations compute the amount
of light reflected back to the camera as
a product of light intensity and surface
colour, and this is added to the surface's
Illumination channel. The value in
this Illumination channel is copied
to the image's Color channel at the beginning of the Post Image effect.
Finally, the value in the image's Color
channel is used to calculate the final colour
values of the rendered image.
Part
2 of "The RealSoft3D ray-tracing
process" will look in more depth at
the individual steps of RealSoft3D's rendering
process and introduce shaders. Click
HERE
to continue to Part 2.