The sky sphere we use is shown below. A ground plane is also added for greater realism ( skysph2.pov ).
The gradient pattern and the transformation inside the pigment are the same as in the example in the previous section.
The color map consists of three colors: a bright, slightly yellowish red for the sun, a darker red for the halo and a dark blue for the night sky. The sun's color covers only a very small portion of the sky sphere because we don't want the sun to become too big. It is used at the color map values 0.000 and 0.002 to get a sharp contrast at value 0.002 (we don't want the sun to blend into the sky). The darker red of the halo blends into the dark blue sky color between the values 0.002 and 0.200. All values above 0.200 show the plain dark blue sky.
The rotate -135*x statement is used to rotate the sun and the complete sky sphere to its final position. Without this rotation the sun would be at 0 degrees, i.e. right below us.
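Putting these pieces together, the sky sphere might look like the following sketch. The concrete color values are assumptions chosen to match the description above, not the exact values of skysph2.pov:

```pov
sky_sphere {
  pigment {
    gradient y
    color_map {
      [0.000 color rgb <1.0, 0.8, 0.2>]  // bright, slightly yellowish red (sun)
      [0.002 color rgb <1.0, 0.8, 0.2>]  // sun color ends abruptly here
      [0.002 color rgb <0.8, 0.2, 0.1>]  // darker red (halo)
      [0.200 color rgb <0.1, 0.1, 0.4>]  // dark blue (night sky)
    }
    scale 2
    translate -1   // same transformation as in the previous section
  }
  rotate -135*x    // move the sun to its final position
}
```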

Looking at the resulting image you'll see what impressive effects you can achieve with the sky sphere.

The sky sphere has one drawback as you might notice when looking at the final image ( skysph3.pov ). The sun doesn't emit any light and the clouds will not cast any shadows. If you want to have clouds that cast shadows you'll have to use a real, large sphere with an appropriate texture and a light source somewhere outside the sphere.
The usage of both fog types will be described in the next sections in detail.
The distance value determines the distance at which 36.8% of the background is still visible (for a more detailed explanation of how the fog is calculated read the reference section "Fog" ).
The fog color can be used to create anything from a pure white fog to a red, blood-like one. You can also use a black fog to simulate the effect of a limited range of vision.
The following example will show you how to add fog to a simple scene ( fog1.pov ).
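A constant fog needs nothing more than a distance and a color; a minimal sketch in the spirit of fog1.pov (the greenish color value is an assumption) looks like this:

```pov
fog {
  distance 150                // 36.8% of the background visible at 150 units
  color rgb <0.3, 0.5, 0.2>   // greenish fog
}
```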

Depending on their distance, the spheres in this scene vanish more or less completely in the greenish fog we used, as does the checkerboard plane.
Using a transmittance value of 0.2 the fog's translucency never drops below 20%, as you can see in the resulting image ( fog2.pov ).
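The transmittance is given as the fourth, transmit component of the fog color. A sketch with assumed color values:

```pov
fog {
  distance 150
  color rgbt <0.3, 0.5, 0.2, 0.2>  // transmit 0.2: at least 20% of the
                                   // background always shines through
}
```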

The filter value determines the amount of light that is filtered by the fog. In our example 100% of the light passing through the fog will be filtered by the fog. If we had used a value of 0.7 only 70% of the light would have been filtered. The remaining 30% would have passed unfiltered.
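The filter value is given as the filter component of the fog color. A sketch corresponding to the 100% case above (the base color is an assumption):

```pov
fog {
  distance 150
  color rgbf <0.3, 0.5, 0.2, 1.0>  // filter 1.0: all light passing through
                                   // the fog is filtered by its color
}
```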

You'll notice that the intensity of the objects in the fog is not only diminished due to the fog's color but that the colors are actually influenced by the fog. The red and especially the blue sphere got a green hue.

The turbulence keyword is used to specify the amount of turbulence, while the turb_depth value is used to move the point at which the turbulence value is calculated along the viewing ray. Values near zero move the point towards the viewer while values near one move it towards the intersection point (the default value is 0.5). This parameter can be used to avoid noise that may appear in the fog due to the turbulence (this normally happens at very distant intersection points, especially if no intersection occurs, i.e. the background is hit). If this happens just lower the turb_depth value until the noise vanishes.
You should keep in mind that the actual density of the fog does not change. Only the distance-based attenuation value of the fog is modified by the turbulence value at a point along the viewing ray.
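A turbulent fog along these lines might be declared as follows; the turbulence and turb_depth values are illustrative assumptions:

```pov
fog {
  distance 150
  color rgb <0.3, 0.5, 0.2>
  turbulence 0.2   // amount of turbulence distorting the attenuation
  turb_depth 0.3   // sample the turbulence closer to the viewer to avoid
                   // noise at very distant intersection points
}
```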
The following example ( fog5.pov ) uses a ground fog which has a constant density below y=25 (the center of the red sphere) and quickly falls off for increasing altitudes.
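A ground fog of this kind selects fog_type 2, places the top of the constant density region with fog_offset and controls the fall-off with fog_alt. The values below are assumptions in the spirit of fog5.pov:

```pov
fog {
  fog_type 2                  // ground fog
  distance 150
  color rgb <0.3, 0.5, 0.2>
  fog_offset 25               // constant density below y = 25
  fog_alt 1                   // density falls off quickly above that height
}
```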

Just try the following example ( fog6.pov ).

You can combine constant density fogs, ground fogs, filtering fogs, non-filtering fogs, fogs with a translucency threshold, etc.
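Multiple fog statements in a scene simply act together. A sketch combining a thin constant haze with a dense ground fog (all values are illustrative assumptions):

```pov
fog {                          // thin, constant haze everywhere
  distance 300
  color rgbt <0.6, 0.6, 0.6, 0.3>
}
fog {                          // dense ground fog near the floor
  fog_type 2
  distance 80
  color rgb <0.3, 0.5, 0.2>
  fog_offset 10
  fog_alt 2
}
```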
In order to avoid this problem you have to make all those objects hollow, either by making sure the camera is outside them (using the inverse keyword) or by adding the hollow keyword to them (which is much easier).
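Adding the hollow keyword to an object the camera sits inside might look like this (the large sphere is a hypothetical example):

```pov
sphere {
  <0, 0, 0>, 1000   // large sphere surrounding the whole scene
  hollow            // let fog and atmosphere exist inside the object
  pigment { color rgb <0.2, 0.2, 1.0> }
}
```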
The atmosphere model used in POV-Ray assumes a constant particle density everywhere except solid objects. If you want to create cloud like fogs or smoke you'll have to use the halo texturing feature described in section "Halos" .
Imagine a simple room with a window. Light falls through the window and is scattered by the dust particles in the air. You'll see beams of light coming from the window and shining on the floor.
We want to model this scene step by step. The following examples start with the room, the window and a spotlight somewhere outside the room. For now there is no atmosphere, so that we can verify that the lighting is correct ( atmos1.pov ).

The point light source is used to illuminate the room from inside without any interaction with the atmosphere. This is done by adding atmosphere off . We don't have to care about this light when we add the atmosphere later.
The spotlight is used with the atmospheric_attenuation keyword. This means that light coming from the spotlight will be diminished by the atmosphere.
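The two light sources described above might be set up roughly like this. The positions, intensities and cone angles are assumptions, not the values of the example scene:

```pov
light_source {                // fill light inside the room
  <0, 9, 0> color rgb 0.5
  atmosphere off              // no interaction with the atmosphere
}
light_source {                // sun shining through the window
  <40, 25, -30> color rgb <1, 1, 1>
  spotlight
  point_at <0, 0, 0>
  radius 5
  falloff 10
  atmospheric_attenuation on  // light is diminished by the atmosphere
}
```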
The union object is used to model the room and the window. Since we use the difference between two boxes to model the room (the first two boxes in the difference statement) there is no need for setting the union hollow. If we are inside this room we actually will be outside the object (see also "Using Hollow Objects and Atmosphere" ).
The type keyword selects the type of atmospheric scattering we want to use. In this case we use the isotropic scattering that equally scatters light in all directions (see "Atmosphere" for more details about the different scattering types).
The samples keyword determines the number of samples used in accumulating the atmospheric effect. For every ray, samples are taken along the ray to determine whether a sample is lit by a light source or not. If the sample is lit, the amount of light scattered into the direction of the viewer is determined and added to the total intensity.
You can always start with an arbitrary number of samples. If the results do not fit your ideas you can increase the sampling rate to get better results. The problem of choosing a good sampling rate is the trade-off between a satisfying image and a fast rendering. A high sampling rate will almost always work but the rendering will also take a very long time. That's something to experiment with.
The distance keyword specifies the density of the atmosphere. It works in the same way as the distance parameter of the fog feature.
Last but not least, the scattering value determines the amount of light that is scattered by the particles (the remaining light is absorbed). As you'll see later, this parameter is very useful for adjusting the overall brightness of the atmosphere.
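Taken together, the atmosphere statement reads roughly like this; the concrete values are assumptions in the spirit of the example:

```pov
atmosphere {
  type 1          // isotropic scattering
  samples 10      // samples taken along each ray
  distance 40     // density, works like the fog distance parameter
  scattering 0.2  // fraction of the light scattered (the rest is absorbed)
}
```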

Looking at the image created from the above scene you'll notice some very ugly aliasing artefacts known as Mach bands. They are the result of a low sampling rate.
How this effect can be avoided is described in the following section.
The brute force approach is to increase the sampling rate until the artefacts vanish and you get a satisfying image. Though this will always work it is a bad idea because it is very time consuming. A better approach is to use jittering and anti-aliasing first. If both features don't help you'll have to increase the sampling rate.
Jittering moves each sample point by a small, random amount along the sampling direction. This helps to reduce regular features resulting from aliasing. There is hardly anything more annoying to the human visual system than such regular patterns arising from a low sampling rate. It's much better to add some extra noise to the image by jittering the sample positions; the human eye is much more forgiving of that.
Use the jitter keyword followed by the amount of jittering you want to use. Good jittering values are up to 0.5, higher values result in too much noise.
You should be aware that jittering can not fix the artefacts introduced by a too low sampling rate. It can only make them less visible.
An additional and better way of reducing aliasing artefacts is to use (adaptive) super-sampling. This method casts additional samples where it is likely that they are needed. If the intensity between two adjacent samples differs too much, additional samples are taken in between. This step is done recursively until a specified recursion level is reached or the samples get close to each other.
The aa_level and aa_threshold keywords are used to control the super-sampling. The aa_level keyword determines the maximum recursion level while the aa_threshold keyword specifies the maximum allowed difference between two samples before super-sampling is done.
After all this theory we get back to our sample scene and add the appropriate keywords to use both jittering and super-sampling ( atmos3.pov ).
A very low threshold value was chosen to super-sample even between adjacent points with a very similar intensity. The maximum recursion level of 4 will lead to a maximum of fifteen super-samples.
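The jittering and anti-aliasing parameters are simply added inside the atmosphere statement. A sketch using the recursion level of 4 mentioned above; the other values are assumptions:

```pov
atmosphere {
  type 1
  samples 10
  distance 40
  scattering 0.2
  jitter 0.25       // small random displacement of the sample points
  aa_level 4        // maximum super-sampling recursion level
  aa_threshold 0.1  // very low threshold: super-sample even between
                    // adjacent samples with similar intensity
}
```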
If you are looking at the results that you get after adding jittering and super-sampling you won't be satisfied. The only way of reducing the still visible artefacts is to increase the sampling rate by choosing a higher number of samples.

Doing this you'll get a good result showing (almost) no artefacts. By the way, the amount of dust floating around in this room may be a little exaggerated, but it's just an example, and examples tend to be exaggerated.