Section 4.9.2.2
Adding the Sun

In the following example we will create a sky with a red sun surrounded by a red color halo that blends into the dark blue night sky. We'll do this using only the sky sphere feature.

The sky sphere we use is shown below. A ground plane is also added for greater realism ( skysph2.pov ).

sky_sphere {
  pigment {
    gradient y
    color_map {
      [0.000 0.002 color rgb <1.0, 0.2, 0.0>
                   color rgb <1.0, 0.2, 0.0>]
      [0.002 0.200 color rgb <0.8, 0.1, 0.0>
                   color rgb <0.2, 0.2, 0.3>]
    }
    scale 2
    translate -1
  }
  rotate -135*x
}

plane { y, 0
  pigment { color Green }
  finish { ambient .3 diffuse .7 }
}

The gradient pattern and the transformation inside the pigment are the same as in the example in the previous section.

The color map consists of three colors: a bright, slightly yellowish red for the sun, a darker red for the halo, and a dark blue for the night sky. The sun's color covers only a very small portion of the sky sphere because we don't want the sun to become too big. The color is used at the color map values 0.000 and 0.002 to get a sharp contrast at value 0.002 (we don't want the sun to blend into the sky). The darker red color used for the halo blends into the dark blue sky color from value 0.002 to 0.200. All values above 0.200 will reveal the dark blue sky.

The rotate -135*x statement is used to rotate the sun and the complete sky sphere to its final position. Without this rotation the sun would be at 0 degrees, i.e. right below us.


A red sun descends into the night.

Looking at the resulting image you'll see what impressive effects you can achieve with the sky sphere.


Section 4.9.2.3
Adding Some Clouds

To further improve our image we want to add some clouds by adding a second pigment. This new pigment uses the bozo pattern to create some nice clouds. Since it lies on top of the other pigment it needs some translucent colors in its color map (look at entries 0.5 to 1.0).

sky_sphere {
  pigment {
    gradient y
    color_map {
      [0.000 0.002 color rgb <1.0, 0.2, 0.0>
                   color rgb <1.0, 0.2, 0.0>]
      [0.002 0.200 color rgb <0.8, 0.1, 0.0>
                   color rgb <0.2, 0.2, 0.3>]
    }
    scale 2
    translate -1
  }
  pigment {
    bozo
    turbulence 0.65
    octaves 6
    omega 0.7
    lambda 2
    color_map {
      [0.0 0.1 color rgb <0.85, 0.85, 0.85>
               color rgb <0.75, 0.75, 0.75>]
      [0.1 0.5 color rgb <0.75, 0.75, 0.75>
               color rgbt <1, 1, 1, 1>]
      [0.5 1.0 color rgbt <1, 1, 1, 1>
               color rgbt <1, 1, 1, 1>]
    }
    scale <0.2, 0.5, 0.2>
  }
  rotate -135*x
}


A cloudy sky with a setting sun.

The sky sphere has one drawback as you might notice when looking at the final image ( skysph3.pov ). The sun doesn't emit any light and the clouds will not cast any shadows. If you want to have clouds that cast shadows you'll have to use a real, large sphere with an appropriate texture and a light source somewhere outside the sphere.


Section 4.9.3
The Fog

You can use the fog feature to add fog of two different types to your scene: constant fog and ground fog. The constant fog has a constant density everywhere while the ground fog's density decreases as you move upwards.

The usage of both fog types will be described in the next sections in detail.


Section 4.9.3.1
A Constant Fog

The simplest fog type is the constant fog, which has a constant density in all locations. It is specified by a distance keyword, which actually describes the fog's density, and a fog color.

The distance value determines the distance at which 36.8% of the background is still visible (for a more detailed explanation of how the fog is calculated read the reference section "Fog" ).
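In formula form, assuming the exponential attenuation model described in the reference section "Fog", the fraction of the background still visible after a ray travels a depth d through a constant fog is

```latex
\text{visibility}(d) = e^{-d/\text{distance}}
```

so at d equal to the distance value the visibility is e^(-1), which is approximately 0.368, i.e. the 36.8% quoted above.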

The fog color can be used to create anything from a pure white to a red, blood-like fog. You can also use a black fog to simulate the effect of a limited range of vision.
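As a minimal sketch of the latter, a black fog could look like this (the distance value of 100 is an arbitrary choice for illustration):

```pov
// a black fog simulating a limited range of vision;
// everything beyond a few distance units fades to black
fog {
  distance 100
  colour rgb <0, 0, 0>
}
```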

The following example will show you how to add fog to a simple scene ( fog1.pov ).

#include "colors.inc"

camera {
  location <0, 20, -100>
}

background { colour SkyBlue }

plane { y, -10
  pigment {
    checker colour Yellow colour Green
    scale 20
  }
}

sphere { <0, 25, 0>, 40
  pigment { Red }
  finish { phong 1.0 phong_size 20 }
}

sphere { <-100, 150, 200>, 20
  pigment { Green }
  finish { phong 1.0 phong_size 20 }
}

sphere { <100, 25, 100>, 30
  pigment { Blue }
  finish { phong 1.0 phong_size 20 }
}

light_source { <100, 120, 40> colour White }

fog {
  distance 150
  colour rgb<0.3, 0.5, 0.2>
}


A foggy scene.

Depending on their distance, the spheres in this scene more or less vanish in the greenish fog we used, as does the checkerboard plane.


Section 4.9.3.2
Setting a Minimum Translucency

If you want to make sure that the background does not completely vanish in the fog you can set the transmittance channel of the fog's color to the amount of background you always want to be visible.

Using a transmittance value of 0.2, as in

fog {
  distance 150
  colour rgbt<0.3, 0.5, 0.2, 0.2>
}

the fog's translucency never drops below 20% as you can see in the resulting image ( fog2.pov ).


Adding a translucency threshold makes sure that the background does not vanish.


Section 4.9.3.3
Creating a Filtering Fog

The greenish fog we have used so far doesn't filter the light passing through it. All it does is diminish the light's intensity. We can change this by using a non-zero filter channel in the fog's color ( fog3.pov ).

fog {
  distance 150
  colour rgbf<0.3, 0.5, 0.2, 1.0>
}

The filter value determines the amount of light that is filtered by the fog. In our example 100% of the light passing through the fog will be filtered by the fog. If we had used a value of 0.7 only 70% of the light would have been filtered. The remaining 30% would have passed unfiltered.
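A partially filtering variant of the fog above, using the 0.7 filter value just mentioned, would look like this:

```pov
// a fog that filters 70% of the light passing through it;
// the remaining 30% passes unfiltered
fog {
  distance 150
  colour rgbf<0.3, 0.5, 0.2, 0.7>
}
```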


A filtering fog.

You'll notice that the intensity of the objects in the fog is not only diminished by the fog's color but that their colors are actually influenced by the fog. The red and especially the blue sphere take on a green hue.


Section 4.9.3.4
Adding Some Turbulence to the Fog

In order to make our somewhat boring fog a little bit more interesting we can add some turbulence, making it look like it had a non-constant density ( fog4.pov ).

fog {
  distance 150
  colour rgbf<0.3, 0.5, 0.2, 1.0>
  turbulence 0.2
  turb_depth 0.3
}


Adding some turbulence makes the fog more interesting.

The turbulence keyword is used to specify the amount of turbulence used while the turb_depth value is used to move the point at which the turbulence value is calculated along the viewing ray. Values near zero move the point towards the viewer while values near one move it towards the intersection point (the default value is 0.5). This parameter can be used to avoid noise that may appear in the fog due to the turbulence (this normally happens at very distant intersection points, especially if no intersection occurs, i.e. the background is hit). If this happens just lower the turb_depth value until the noise vanishes.

You should keep in mind that the actual density of the fog does not change. Only the distance-based attenuation value of the fog is modified by the turbulence value at a point along the viewing ray.


Section 4.9.3.5
Using Ground Fog

The much more interesting and flexible fog type is the ground fog, which is selected with the fog_type statement. Its appearance is described with the fog_offset and fog_alt keywords. The fog_offset keyword specifies the height, i.e. y value, below which the fog has a constant density of one. The fog_alt keyword determines how fast the density of the fog approaches zero as one moves up along the y axis. At a height of fog_offset+fog_alt the fog will have a density of 25%.
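The falloff described above can be sketched as a formula (this is consistent with the 25% density quoted for the height fog_offset+fog_alt; see the reference section "Fog" for the authoritative definition):

```latex
\text{density}(y) =
\begin{cases}
1 & y \le \text{fog\_offset} \\[6pt]
\left(1 + \dfrac{y - \text{fog\_offset}}{\text{fog\_alt}}\right)^{-2} & y > \text{fog\_offset}
\end{cases}
```

At y = fog_offset + fog_alt this gives (1 + 1)^(-2) = 1/4, i.e. the 25% density mentioned above.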

The following example ( fog5.pov ) uses a ground fog which has a constant density below y=25 (the center of the red sphere) and quickly falls off for increasing altitudes.

fog {
  distance 150
  colour rgbf<0.3, 0.5, 0.2, 1.0>
  fog_type 2
  fog_offset 25
  fog_alt 1
}


The ground fog only covers the lower parts of the world.


Section 4.9.3.6
Using Multiple Layers of Fog

It is possible to use several layers of fog by using more than one fog statement in your scene file. This is quite useful if you want to get nice effects using turbulent ground fogs. You could layer several differently colored fogs to create an eerie scene, for example.

Just try the following example ( fog6.pov ).

fog {
  distance 150
  colour rgb<0.3, 0.5, 0.2>
  fog_type 2
  fog_offset 25
  fog_alt 1
  turbulence 0.1
  turb_depth 0.2
}

fog {
  distance 150
  colour rgb<0.5, 0.1, 0.1>
  fog_type 2
  fog_offset 15
  fog_alt 4
  turbulence 0.2
  turb_depth 0.2
}

fog {
  distance 150
  colour rgb<0.1, 0.1, 0.6>
  fog_type 2
  fog_offset 10
  fog_alt 2
}


Quite nice results can be achieved using multiple layers of fog.

You can combine constant density fogs, ground fogs, filtering fogs, non-filtering fogs, fogs with a translucency threshold, etc.
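As a sketch of such a combination, a constant filtering fog could be paired with a translucent ground fog like this (all colors and values here are made up purely for illustration):

```pov
// constant, fully filtering haze over the whole scene
fog {
  distance 200
  colour rgbf<0.3, 0.3, 0.3, 1.0>
}

// translucent ground fog layered on top of it;
// at least 20% of the background stays visible
fog {
  distance 100
  colour rgbt<0.6, 0.5, 0.4, 0.2>
  fog_type 2
  fog_offset 20
  fog_alt 3
}
```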


Section 4.9.3.7
Fog and Hollow Objects

Whenever you use the fog feature and the camera is inside a non-hollow object you won't get any fog effects. For a detailed explanation why this happens see "Empty and Solid Objects" .

In order to avoid this problem you have to either make sure the camera is outside these objects (using the inverse keyword) or make the objects hollow by adding the hollow keyword to them (which is much easier).


Section 4.9.4
The Atmosphere

The atmosphere feature can be used to model the interaction of light with particles in the air. Beams of light will become visible and objects will cast shadows into the fog or dust that's filling the air.

The atmosphere model used in POV-Ray assumes a constant particle density everywhere except inside solid objects. If you want to create cloud-like fogs or smoke you'll have to use the halo texturing feature described in section "Halos" .


Section 4.9.4.1
Starting With an Empty Room

We want to create a simple scene to explain how the atmosphere feature works and how you'll get good results.

Imagine a simple room with a window. Light falls through the window and is scattered by the dust particles in the air. You'll see beams of light coming from the window and shining on the floor.

We want to model this scene step by step. The following examples start with the room, the window and a spotlight somewhere outside the room. No atmosphere is added yet so that we can verify that the lighting is correct ( atmos1.pov ).

camera {
  location <-10, 8, -19>
  look_at <0, 5, 0>
  angle 82
}

background { color rgb <0.2, 0.4, 0.8> }

light_source { <0, 19, 0> color rgb 0.5
  atmosphere off
}

light_source { <40, 25, 0> color rgb <1, 1, 1>
  spotlight
  point_at <0, 5, 0>
  radius 20
  falloff 20
  atmospheric_attenuation on
}

union {
  difference {
    box { <-21, -1, -21>, <21, 21, 21> }
    box { <-20, 0, -20>, <20, 20, 20> }
    box { <19.9, 5, -3>, <21.1, 15, 3> }
  }
  box { <20, 5, -0.25>, <21, 15, 0.25> }
  box { <20, 9.775, -3>, <21, 10.25, 3> }
  pigment { color red 1 green 1 blue 1 }
  finish { ambient 0.2 diffuse 0.5 }
}


The empty room we want to start with.

The point light source is used to illuminate the room from inside without any interaction with the atmosphere. This is done by adding atmosphere off . We don't have to care about this light when we add the atmosphere later.

The spotlight is used with the atmospheric_attenuation keyword. This means that light coming from the spotlight will be diminished by the atmosphere.

The union object is used to model the room and the window. Since we use the difference between two boxes to model the room (the first two boxes in the difference statement) there is no need for setting the union hollow. If we are inside this room we actually will be outside the object (see also "Using Hollow Objects and Atmosphere" ).


Section 4.9.4.2
Adding Dust to the Room

The next step is to add an atmosphere to the room. This is done by the following few lines ( atmos2.pov ).

atmosphere {
  type 1
  samples 10
  distance 40
  scattering 0.2
}

The type keyword selects the type of atmospheric scattering we want to use. In this case we use the isotropic scattering that equally scatters light in all directions (see "Atmosphere" for more details about the different scattering types).

The samples keyword determines the number of samples used in accumulating the atmospheric effect. For every ray, samples are taken along the ray to determine whether a sample is lit by a light source or not. If the sample is lit, the amount of light scattered into the direction of the viewer is determined and added to the total intensity.

You can always start with an arbitrary number of samples. If the results do not fit your ideas you can increase the sampling rate to get better results. The problem of choosing a good sampling rate is the trade-off between a satisfying image and a fast rendering. A high sampling rate will almost always work but the rendering will also take a very long time. That's something to experiment with.

The distance keyword specifies the density of the atmosphere. It works in the same way as the distance parameter of the fog feature.

Last but not least, the scattering value determines the amount of light that is scattered by the particles (the remaining light is absorbed). As you'll see later, this parameter is very useful in adjusting the overall brightness of the atmosphere.


After adding some dust beams of light become visible.

Looking at the image created from the above scene you'll notice some very ugly aliasing artefacts known as Mach bands. They are the result of a too low sampling rate.

How this effect can be avoided is described in the following section.


Section 4.9.4.3
Choosing a Good Sampling Rate

As you've seen, a sampling rate that is too low can cause some ugly results. There are some ways of reducing or even avoiding those problems.

The brute force approach is to increase the sampling rate until the artefacts vanish and you get a satisfying image. Though this will always work it is a bad idea because it is very time consuming. A better approach is to use jittering and anti-aliasing first. If both features don't help you'll have to increase the sampling rate.

Jittering moves each sample point by a small, random amount along the sampling direction. This helps to reduce regular features resulting from aliasing. There is hardly anything more annoying to the human visual system than the regular features resulting from a low sampling rate. It's much better to add some extra noise to the image by jittering the sample positions; the human eye is much more forgiving of that.

Use the jitter keyword followed by the amount of jittering you want to use. Good jittering values are up to 0.5; higher values result in too much noise.

You should be aware that jittering can not fix the artefacts introduced by a too low sampling rate. It can only make them less visible.

An additional and better way of reducing aliasing artefacts is to use (adaptive) super-sampling. This method casts additional samples where it is likely that they are needed. If the intensity between two adjacent samples differs too much, additional samples are taken in between. This step is done recursively until a specified recursion level is reached or the samples get close to each other.

The aa_level and aa_threshold keywords are used to control the super-sampling. The aa_level keyword determines the maximum recursion level while the aa_threshold keyword specifies the maximum allowed difference between two samples before super-sampling is done.

After all this theory we get back to our sample scene and add the appropriate keywords to use both jittering and super-sampling ( atmos3.pov ).

atmosphere {
  type 1
  samples 50
  distance 40
  scattering 0.2
  aa_level 4
  aa_threshold 0.1
  jitter 0.2
}

A very low threshold value was chosen to super-sample even between adjacent points with a very similar intensity. The maximum recursion level of 4 will lead to a maximum of fifteen super-samples between two adjacent samples (each recursion level doubles the sampling density, giving at most 1 + 2 + 4 + 8 = 15 additional samples).

Looking at the results you get after adding jittering and super-sampling, you still won't be satisfied. The only way of reducing the remaining visible artefacts is to increase the sampling rate by choosing a higher number of samples.


A high sampling rate leads to a satisfying image.

Doing this you'll get a good result showing (almost) no artefacts. By the way, the amount of dust floating around in this room may be a little bit exaggerated, but it's just an example. And examples tend to be exaggerated.

