
As you can see, the whole object is lit by the light source. Now we can start to add some dust.

The result of halo32.pov is too bright. The dust is so thick that we can only see parts of the object and none of the background.
We use a transmittance value of 0.7 to get much thinner dust.
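The scene file itself is not reproduced in this excerpt. A sketch of the thinner dust, assuming the dust halo and box mapping used in the earlier examples (the exact values in halo33.pov may differ):

```pov
sphere { 0, 1
  pigment { color rgbt <1, 1, 1, 1> }  // invisible container surface
  halo {
    dust
    dust_type 1       // isotropic scattering (assumed)
    box_mapping
    constant
    color_map {
      [0 color rgbt <1, 1, 1, 1.0>]  // zero density: fully transparent
      [1 color rgbt <1, 1, 1, 0.7>]  // full density: transmittance 0.7
    }
    samples 10
  }
  hollow
  scale 5
}
```

The higher the transmittance at full density, the thinner the dust appears.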

Apart from the ugly aliasing artefacts the image looks much better. We can see the whole object and even the background is slightly visible ( halo33.pov ).
Jittering adds some randomness to the sampling points, making the image look noisier. This helps because the regular aliasing artefacts are more annoying than noise. A low jitter value is a good choice.
Super-sampling tries to detect fine features by taking additional samples in areas of high intensity change. The threshold at which super-sampling is triggered and the maximum recursion level can be specified using the aa_threshold and aa_level keywords.
The approach that always works is to increase the overall sampling rate. Since this is also the slowest method, you should always try the other methods first. Only if they don't suffice will you have to increase the sampling rate.
We use the following halo to reduce the aliasing artefacts ( halo34.pov ).
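The halo itself was lost from this excerpt; it can be sketched as follows, combining the three anti-aliasing measures just described (the exact values in halo34.pov may differ):

```pov
halo {
  dust
  dust_type 1        // isotropic scattering (assumed)
  box_mapping
  constant
  color_map {
    [0 color rgbt <1, 1, 1, 1.0>]
    [1 color rgbt <1, 1, 1, 0.7>]
  }
  samples 10         // overall sampling rate
  aa_level 3         // maximum super-sampling recursion depth
  aa_threshold 0.2   // intensity change that triggers super-sampling
  jitter 0.1         // low jitter turns regular artefacts into noise
}
```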

The image looks much better now. There are hardly any aliasing artefacts left.
The same parameters are also used by the atmosphere feature and are discussed in that section (see "The Atmosphere" for further explanations).
Another interesting way of getting an irregular distribution is to add some turbulence to the dust. This is done with the turbulence keyword followed by the amount of turbulence to use, as the following example shows ( halo35.pov ).
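Since the example is not reproduced in this excerpt, here is a sketch of what the turbulent dust halo might look like, assuming the values of the previous examples and an assumed turbulence amount of 0.5:

```pov
halo {
  dust
  dust_type 1
  box_mapping
  linear             // a non-constant density function
  color_map {
    [0 color rgbt <1, 1, 1, 1.0>]
    [1 color rgbt <1, 1, 1, 0.7>]
  }
  turbulence 0.5     // assumed amount; shifts the sample points
  samples 10
  aa_level 3
  aa_threshold 0.2
  jitter 0.1
}
```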

The image we now get looks much more interesting due to the shifts in the particle density.
You should note that we use a linear density function instead of the previous constant one. This is necessary because a constant density function has the same value everywhere. Adding turbulence would have no effect: wherever the sample points are moved, the density still has that same value. Only a non-constant density distribution makes sense when turbulence is added.
The fact that the turbulence value is actually a vector can be used to create effects like waterfalls by using a large turbulence value in one direction only (e.g. turbulence <0.2, 1, 0.2> ).
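In context, such a waterfall-like halo might be sketched as follows (a hypothetical variation of the previous dust examples, not a scene from the original):

```pov
halo {
  dust
  dust_type 1
  box_mapping
  linear
  color_map {
    [0 color rgbt <1, 1, 1, 1.0>]
    [1 color rgbt <1, 1, 1, 0.7>]
  }
  turbulence <0.2, 1, 0.2>   // strong vertical, weak horizontal shifts
  samples 10
}
```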
Use the following color map to get a partially filtering, red dust for example:
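The color map itself was lost from this excerpt. A plausible sketch, using the filter channel for the partial filtering and hypothetical values:

```pov
color_map {
  [0 color rgbft <1, 0, 0, 0.5, 1.0>]  // zero density: fully transmitting
  [1 color rgbft <1, 0, 0, 0.5, 0.7>]  // full density: red, half filtering
}
```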
Some of the most common problems and pitfalls are described below to help you avoid them.
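The misused-halo example dissected in the next paragraph is missing from this excerpt. A hypothetical reconstruction of that kind of misuse, assuming POV-Ray 3.0's pattern textures, buries the halo inside a surface pattern:

```pov
// WRONG: the halo sits inside a checker surface pattern, i.e. the
// interior is being "described" via the look of the surface.
sphere { 0, 1
  texture {
    checker
    texture { pigment { color rgb <1, 0, 0> } }
    texture {
      pigment { color rgbt <1, 1, 1, 1> }
      halo {
        emitting
        spherical_mapping
        linear
        color_map {
          [0 color rgbt <1, 1, 1, 1>]
          [1 color rgbt <1, 0, 0, 0>]
        }
        samples 10
      }
    }
  }
  hollow
}
```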
What's wrong with this example? Simply that a halo is used to describe the interior of an object, and you cannot describe this interior by describing what the surface of the object looks like. But that's what was done in the example above. Can you imagine what the interior of the sphere will look like? Will it be filled completely with the halo? Will some areas be filled by the halo and some by air? What will those areas look like?
You won't be able to tell the interior's properties by looking at the surface. It's just not possible. This should always be kept in mind.
If the above example was meant to create a sphere filled with a halo and covered with a checker board pattern that partially hid the halo you would have used the following syntax:
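That syntax was lost from this excerpt; a sketch with hypothetical colors and mapping follows. The important point is that the halo sits at the texture's top level, next to the pigment, not inside it:

```pov
sphere { 0, 1
  texture {
    pigment {
      checker                          // partially transparent checker
      color rgbf <1, 0, 0, 0.5>        // pattern that lets the halo
      color rgbf <0, 0, 1, 0.5>        // show through
    }
    halo {
      emitting
      spherical_mapping
      linear
      color_map {
        [0 color rgbt <1, 1, 1, 1>]
        [1 color rgbt <1, 0, 0, 0>]
      }
      samples 10
    }
  }
  hollow
}
```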
A halo is always applied to an object in the following way:
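The skeleton shown at this point in the original can be sketched as follows; the halo is a direct item of the texture statement:

```pov
object {
  texture {
    pigment { ... }   // the surface's color
    normal  { ... }   // the surface's bumpiness
    finish  { ... }   // the surface's reflective properties
    halo    { ... }   // the object's interior
  }
}
```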
Halos are not allowed inside any pigment statement, color map, pigment map, texture map, material map, or the like. Nothing prevents you from doing this, but you will not get what you want.
You can use a halo with layered textures as long as you make sure that the halos are only attached to the lowest layer (this layer has to be partially transparent to let the halo show through, of course).
If you want to add different halos you have to put all halos inside a single container object to make sure the halo is calculated correctly (see also "Multiple Halos" ).
You should also note that non-overlapping, stacked halo containers are handled correctly: if you put one container object in front of another, the halos are rendered correctly.
For a detailed explanation see "Empty and Solid Objects" .
Scaling the object before the halo statement will only scale the container object, not the halo. This is useful if you want to prevent the surface of the container object from becoming visible when turbulence is used. As you've learned in the sections above, particles may move out of the container object - where they are invisible - if turbulence is added. This only works for spherical and box mapping because the density fields described by the other mapping types don't have finite dimensions.
If the scale keyword is used after the halo statement, both the halo and the container object are scaled. This is useful for scaling the halo to your needs.
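The difference can be sketched like this (a hypothetical illustration, using the dust halo from the earlier examples):

```pov
// Scaling before the halo: only the container grows, the halo's
// density field keeps its original size.
sphere { 0, 1
  scale 1.5
  texture {
    halo {
      dust dust_type 1 box_mapping constant
      color_map {
        [0 color rgbt <1, 1, 1, 1.0>]
        [1 color rgbt <1, 1, 1, 0.7>]
      }
      samples 10
    }
  }
  hollow
}

// Scaling after the halo: both the container and the halo are scaled.
sphere { 0, 1
  texture {
    halo {
      dust dust_type 1 box_mapping constant
      color_map {
        [0 color rgbt <1, 1, 1, 1.0>]
        [1 color rgbt <1, 1, 1, 0.7>]
      }
      samples 10
    }
  }
  hollow
  scale 1.5
}
```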
The halo keeps its appearance regardless of the transformations applied to the container object (after the halo), i.e. the halo's translucency, color and turbulence characteristics will not change.
The halo's appearance is independent of the sampling rate as long as there are enough samples to get a good estimate of what the halo really looks like. This means that one or two samples are hardly ever enough to determine the halo's appearance. As you increase the number of samples the halo will quickly approach its real appearance.
To put it in a nutshell, the halo will not change its appearance with the sample rate as long as you have a sufficient number of samples and no aliasing artefacts occur.
Whenever you add turbulence to a halo do not use the constant density function.
It is easy to assign a simple color or a complex color pattern to a virtual sky sphere. You can create anything from a cloud-free, blue summer sky to a stormy, heavily clouded sky. Even starfields can easily be created.
You can use different kinds of fog to create foggy scenes. Multiple fog layers of different colors can add an eerie touch to your scene.
A much more realistic effect can be created by using an atmosphere, a constant fog that interacts with the light coming from light sources. Beams of light become visible and objects will cast shadows into the fog.
Last but not least you can add a rainbow to your scene.
The background color will be visible if a sky sphere is used and if some translucency remains after all sky sphere pigment layers are processed.
In the following examples we'll start with a very simple sky sphere that will get more and more complex as we add new features to it.
You may have noticed that the color of the sky varies with the angle to the earth's surface normal. If you look straight up the sky normally has a much deeper blue than it has at the horizon.
We want to model this effect using the sky sphere as shown in the scene below ( skysph1.pov ).
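The scene is not reproduced in this excerpt; its essential part might be sketched as follows (camera values and colors are assumptions, the sky_sphere structure matches the description below):

```pov
#include "colors.inc"

camera {
  location <0, 1, -4>
  look_at  <0, 2,  0>
  angle 80
}

sky_sphere {
  pigment {
    gradient y
    color_map {
      [0 color rgb <1, 1, 1>]  // white at the horizon
      [1 color blue 1]         // deep blue overhead
    }
    scale 2
    translate -1
  }
}
```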
The interesting part is the sky sphere statement. It contains a pigment that describes the look of the sky sphere. We want to create a color gradient along the viewing angle measured against the earth's surface normal. Since the ray direction vector is used to calculate the pigment colors we have to use the y-gradient.
The scale and translate transformations are used to map the points derived from the direction vector to the right range. Without them the pattern would be repeated twice on the sky sphere. The scale statement avoids the repetition and the translate -1 statement moves the color at index zero to the bottom of the sky sphere (that's the point of the sky sphere you'll see if you look straight down).
After this transformation the color entry at position 0 will be at the bottom of the sky sphere, i. e. below us, and the color at position 1 will be at the top, i. e. above us.
The colors for all other positions are interpolated between those two colors as you can see in the resulting image.

If you want to start one of the colors at a specific angle you'll first have to convert the angle to a color map index. This is done by using the formula
color_map_index = (1 - cos(angle)) / 2
where the angle is measured against the negated earth's surface normal. This is the surface normal pointing towards the center of the earth. An angle of 0 degrees describes the point below us while an angle of 180 degrees represents the zenith.
In POV-Ray you first have to convert the degree values to radians, as shown in the following example.
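The example itself is missing from this excerpt; applying the formula above with cos() and radians(), it might be sketched as:

```pov
sky_sphere {
  pigment {
    gradient y
    color_map {
      // (1 - cos(angle)) / 2 maps the angle to a color map index
      [(1 - cos(radians( 30))) / 2 color red 1]
      [(1 - cos(radians(120))) / 2 color rgb <0, 0, 1>]
    }
    scale 2
    translate -1
  }
}
```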
This scene uses a color gradient that starts with a red color at 30 degrees and blends into the blue color at 120 degrees. Below 30 degrees everything is red while above 120 degrees all is blue.