
Rendering: let there be light

Architech: A guide to finding the best lighting solutions for your cad images

In 1839, William Henry Fox Talbot announced that he had perfected a method of recording an image produced within a camera obscura on chemically treated paper. Recording an image using only light was a revolutionary breakthrough - the French painter Paul Delaroche declaring: 'From this day on painting is dead' - and led to an era of great visual literacy that continues today.

The advent of advanced computer graphics technology has resulted in an increasing reliance on complex visual models as the primary means for designers to communicate their ideas.

cad is now a mature technology, yet it continues to evolve. At its inception, cad was seen as a productivity tool, and 2D computer draughting has now largely superseded manual draughting in all disciplines, with the engineering professions leading the way. This has not involved any shift in the working paradigm: the traditional tools have simply been replaced by new ones. The move to 3D modelling, however, has involved a shift in design methodology. Objects, be they buildings or mechanical components, are generated as 3D representations, not just collections of flat lines. These models can be endowed with real-world properties, allowing complex testing to be conducted without the need to physically construct anything. This new way of working has led to the current evolutionary stage: the use of visualisation tools throughout the design process and not just as an afterthought.

Visualisation tools range from still rendered images to virtual worlds within which users can navigate.

One of the most important and least discussed areas in the production of visually stunning still or moving images is lighting. The lighting stage comes when the model is complete and all surface textures have been applied to the elements.

There are three basic types of light source: ambient, point and spot. Ambient light is the general level of light within a scene. It is non-directional, producing the same effect on every surface. Point lights are multi-directional sources whose rays radiate outwards from a single position; a light bulb is an example of a point light, although in complex models the fact that the bottom of the bulb does not emit light has to be considered. Many packages also have an option to simulate sunlight: this is set up like a point light but with specialised settings, its rays being given a direction yet remaining parallel and of equal strength. The final light source, the spotlight, is a directional source with a limited beam width, and the angle of the beam can be adjusted to create an unlimited number of variants. Spot and point light sources are subject to the inverse square law, whereby the light intensity decreases in proportion to the square of the distance from the source. This is useful as it allows the intensity of light effects, such as the casting of shadows, to be subtly adjusted.
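
To make the behaviour of these sources concrete, the sketch below combines an ambient level, a point light and a spotlight at a single surface point (a minimal illustration only; the intensities, positions and beam angle are invented, and real packages fold this into a fuller shading model):

```python
import math

AMBIENT = 0.15   # non-directional base level of light applied to every surface

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def point_light(intensity, light_pos, surface_pos):
    # A point light radiates in all directions and obeys the inverse square law:
    # intensity falls off with the square of the distance from the source.
    d = distance(light_pos, surface_pos)
    return intensity / (d * d)

def spot_light(intensity, light_pos, aim_dir, beam_angle_deg, surface_pos):
    # A spotlight is a point light restricted to a cone of the given beam width.
    to_surface = [s - l for s, l in zip(surface_pos, light_pos)]
    d = distance(light_pos, surface_pos)
    aim_len = math.sqrt(sum(c * c for c in aim_dir))
    cos_angle = sum(a * b for a, b in zip(aim_dir, to_surface)) / (aim_len * d)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    if angle > beam_angle_deg / 2:
        return 0.0                      # the point lies outside the beam
    return intensity / (d * d)          # inside the beam: inverse square falloff again

# Illumination at one surface point: ambient plus one point light plus one spotlight.
total = (AMBIENT
         + point_light(100.0, (0, 0, 3), (1, 1, 0))
         + spot_light(200.0, (5, 0, 3), (-1, 0, -1), 30.0, (1, 1, 0)))
print(round(total, 2))
```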

When lighting a scene or an object, the first step is to decide how you want your model to look. This is not as simple as it might seem: you need to consider a number of factors. One approach is to create a realistic lighting environment, which will show a model as it would appear if it were real. Software such as LightScape from LightScape Technologies allows you to place lights modelled on real lamps into a scene. By using photometric data the software renders as near to real life as possible. It also allows the designer to perform lighting studies on a model to ascertain the level of illumination at any given point.
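
At its simplest, the kind of illumination study described above applies the inverse square and cosine laws to a lamp's photometric intensity (a rough sketch; a package such as LightScape reads full photometric data files rather than the single candela figure assumed here):

```python
import math

def illuminance(candela, distance_m, incidence_deg):
    # Point-by-point method: E (lux) = I x cos(angle of incidence) / distance squared.
    return candela * math.cos(math.radians(incidence_deg)) / (distance_m ** 2)

# A notional 1,200 cd downlighter 2.5m above a desk, with light arriving 20 degrees off normal.
print(round(illuminance(1200, 2.5, 20), 1), "lux")
```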

Strong spotlight illumination produces a sharp contrast between highlights and shadows. This is very effective when used with highly reflective surfaces. The harshness of shadows can be reduced by using the ambient light controls. By increasing the level of ambient light, all the surfaces in a model will receive more light, allowing areas in shadow to become more visible.
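
The effect of the ambient control can be seen in a toy shading calculation (illustrative numbers only; the point is simply that the ambient term reaches surfaces the spotlight cannot):

```python
def surface_brightness(ambient, direct, in_shadow):
    # Surfaces in shadow receive only the ambient term; lit surfaces receive both.
    return ambient if in_shadow else ambient + direct

for ambient in (0.05, 0.2, 0.4):
    lit = surface_brightness(ambient, 0.8, in_shadow=False)
    shadowed = surface_brightness(ambient, 0.8, in_shadow=True)
    # Raising the ambient level lifts the shadowed areas and softens the contrast.
    print(f"ambient={ambient:.2f}  lit={lit:.2f}  shadow={shadowed:.2f}  contrast={lit / shadowed:.1f}:1")
```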

Placing lights within a scene is often a matter of trial and error. As a rule, it is not a good idea to place too many lights in a scene, since surfaces can become bleached out. It is also wise to avoid having a light pointing either from the camera position or directly at it, since this results in a loss of definition. The exception to this is animated - rather than still - images, where lens flare effects can be desirable.

The colour of light is also significant in determining the final image, and all good rendering software allows this to be altered. A basic knowledge of the colour wheel is useful here as this identifies opposite and complementary colours. For example, if a coloured light is used with a coloured object, and the colour of the light is opposite on the colour wheel to that of the object, then the object will appear black. When rendering images under specialist lighting conditions, support from photometric data is important. This allows you to create the correct lighting colour for different lamps. An object lit with a halogen lamp will appear different if lit with tungsten or a fluorescent tube. In product and architectural design, photometric data can be obtained from lighting manufacturers if the lamp you are going to use is known.
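
A quick way to see the colour-wheel effect is to treat the surface and light colours as RGB values and multiply them channel by channel, as renderers do in some form (a simplified sketch with invented values):

```python
def lit_colour(surface_rgb, light_rgb):
    # Each channel of the surface can only reflect the corresponding channel of the light.
    return tuple(s * l for s, l in zip(surface_rgb, light_rgb))

red_object = (1.0, 0.0, 0.0)

print(lit_colour(red_object, (1.0, 1.0, 1.0)))   # white light -> (1.0, 0.0, 0.0): appears red
print(lit_colour(red_object, (0.0, 1.0, 1.0)))   # cyan light, opposite red on the wheel -> black
```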

Once you are happy with your lighting, it's time to render your image. Whether you are producing a still or an animation, the selection of the rendering engine can have a dramatic effect on the final results. At first, the number of rendering options may seem daunting but for final work four different options are normally available: Gouraud, Phong, Raytracing and Radiosity.

Developed in the early 1970s, Gouraud shading calculates the light intensities at the vertices of the polygons forming an object and then interpolates across each face between these values to give a smooth gradation. Phong rendering is a refinement of this technique, adding specular highlights to the smooth shading; surfaces with different levels of shininess can be produced by using a gloss parameter to determine the size of the highlight area. Both Gouraud and Phong rendering tend to produce 'jaggies' on the edges of curved objects. Surfaces appear smooth, however, and jaggies can be reduced by increasing the number of polygons used to form an object, though this increases the file size and slows the otherwise fast rendering. Gouraud and Phong rendering are often termed 'local illumination models' as they only consider the light hitting a surface directly from the light source. Phong rendering can display reflections but not transparency.
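
The difference between the two local models can be sketched in a few lines: Gouraud blends intensities calculated at the vertices, while Phong adds a specular term whose tightness is set by the gloss exponent (the vectors and coefficients below are invented inputs for illustration):

```python
import math

def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gouraud_interpolate(i0, i1, t):
    # Gouraud: intensities computed at the vertices are blended across the face.
    return (1 - t) * i0 + t * i1

def phong_intensity(normal, to_light, to_eye, kd=0.7, ks=0.3, gloss=32):
    # Phong: a diffuse (Lambert) term plus a specular highlight whose tightness
    # is controlled by the gloss exponent.
    n, l, v = normalise(normal), normalise(to_light), normalise(to_eye)
    diffuse = max(dot(n, l), 0.0)
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** gloss
    return kd * diffuse + ks * specular

print(round(gouraud_interpolate(0.2, 0.9, 0.5), 3))              # mid-face intensity
print(round(phong_intensity((0, 0, 1), (1, 1, 1), (0, 0.2, 1)), 3))  # shaded point with highlight
```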

More complex algorithms that evaluate how surfaces themselves affect the light are known as global illumination models. Raytracing has become the most popular and widespread method of rendering still images using a global model. It works by tracing light rays backwards from the screen into the 3D scene: a ray is traced from the eye position through each pixel until it hits a surface. The surface texture provides information on reflectivity but not on the total amount of light reaching the surface, so rays are then traced from the intersection point on the surface to each light source. The total of the unblocked rays is calculated, and this total determines the colour of the surface. If a surface is reflective or transparent, the algorithm has to determine what is reflected in it or seen through it. In the case of reflection, the tracing process is repeated using the surface as a source, while transparent surfaces transmit rays. Raytracing renders most lighting situations very well and is excellent for rendering reflective and transparent surfaces. Its main disadvantage is that it does not account for diffuse interreflections, ie light arriving from other surfaces. This problem can often be addressed by adding an ambient light source, though this can produce very flat-looking images, as the ambient term has a constant value bearing no relation to real-world scenes full of diffuse light.
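
In outline, the backward-tracing loop looks something like the sketch below, which fires a ray through one pixel at a single sphere and then casts a shadow ray towards a single point light (a deliberately stripped-down illustration; a full engine recurses from the hit point to handle reflection and refraction):

```python
import math

def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def hit_sphere(origin, direction, centre, radius):
    # Solve the ray/sphere quadratic; return the nearest positive distance, or None for a miss.
    oc = sub(origin, centre)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    for t in ((-b - math.sqrt(disc)) / 2.0, (-b + math.sqrt(disc)) / 2.0):
        if t > 1e-6:
            return t
    return None

def trace(pixel_x, pixel_y, width=100, height=100):
    eye = (0.0, 0.0, 0.0)
    sphere_centre, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light_pos = (2.0, 2.0, 0.0)

    # Trace backwards: a ray from the eye through the pixel on a screen plane at z = -1.
    direction = normalise((pixel_x / width - 0.5, pixel_y / height - 0.5, -1.0))
    t = hit_sphere(eye, direction, sphere_centre, sphere_radius)
    if t is None:
        return 0.0                                   # the ray hits nothing: background

    hit = tuple(e + t * d for e, d in zip(eye, direction))
    normal = normalise(sub(hit, sphere_centre))
    to_light = normalise(sub(light_pos, hit))

    # Shadow ray: if anything blocks the path from the hit point to the light,
    # the point receives no direct illumination.
    if hit_sphere(hit, to_light, sphere_centre, sphere_radius) is not None:
        return 0.05                                  # a small ambient term only
    return max(dot(normal, to_light), 0.0)           # simple diffuse shading

print(round(trace(50, 50), 3))                       # brightness of the centre pixel
```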

In the early 1980s, another algorithm was developed to address these issues: Radiosity. This is based on a system developed by thermal engineers to simulate radiative heat transfer between surfaces, and it calculates the intensity of light at given points in 3D space rather than the colour of a pixel on the screen. Surfaces are divided into a mesh of smaller units and the amount of light distributed from each mesh element to the others is then calculated. This data is stored as a 3D matrix and is 'view-independent', as the data is spatial in nature. This allows new views to be drawn quickly on screen, since the bulk of the calculations have already taken place. Users can also get almost instant feedback on screen by using a progressive radiosity algorithm: the process starts with a large mesh and then refines the image using smaller and smaller subdivisions of the surfaces. Where Raytracing excels in rendering reflective and transparent surfaces, Radiosity excels in rendering matte surfaces where subtle shading and shadows occur. The major disadvantage of the system is that it does not account for specular reflections or transparency effects. It also requires large amounts of memory.
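
The core of a progressive ('shooting') radiosity solver can be sketched as an iterative loop over mesh patches (highly simplified; the form factors below are invented numbers, whereas a real solver computes them from patch geometry and visibility):

```python
# Progressive radiosity over three patches: a light panel, a wall and a floor.
# form_factor[i][j] is the fraction of energy leaving patch i that arrives at patch j.
form_factor = [
    [0.0, 0.4, 0.3],
    [0.4, 0.0, 0.3],
    [0.3, 0.3, 0.0],
]
reflectance = [0.0, 0.7, 0.5]     # the light panel absorbs, the other surfaces reflect
emission    = [1.0, 0.0, 0.0]     # only the light panel emits energy

radiosity = emission[:]           # total light leaving each patch
unshot    = emission[:]           # energy not yet distributed to the other patches

for _ in range(50):               # each pass refines the solution; stop when unshot energy is small
    i = max(range(3), key=lambda p: unshot[p])   # shoot from the patch with most unshot energy
    for j in range(3):
        if j == i:
            continue
        received = reflectance[j] * form_factor[i][j] * unshot[i]
        radiosity[j] += received
        unshot[j] += received
    unshot[i] = 0.0

print([round(b, 3) for b in radiosity])   # converged, view-independent patch brightness
```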

Scanline rendering, which may also be offered in high-end packages, renders an image one line at a time. It supports shadows, reflectivity and transparency, and, unlike Raytracing, it can render polylines and pick up extremely fine detail missed by other methods.
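
The one-line-at-a-time idea can be illustrated with a bare-bones span fill for a single triangle (a toy sketch with none of the shadow, reflectivity or transparency support that production renderers add):

```python
def scanline_fill(triangle, width=20, height=10):
    # Render one row of pixels at a time: find where the row crosses the
    # polygon's edges, then fill the span between each pair of crossings.
    image = [['.'] * width for _ in range(height)]
    edges = list(zip(triangle, triangle[1:] + triangle[:1]))
    for y in range(height):
        crossings = []
        for (x0, y0), (x1, y1) in edges:
            if (y0 <= y < y1) or (y1 <= y < y0):      # the edge spans this scanline
                crossings.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        crossings.sort()
        for left, right in zip(crossings[::2], crossings[1::2]):
            for x in range(int(round(left)), int(round(right)) + 1):
                if 0 <= x < width:
                    image[y][x] = '#'
    return '\n'.join(''.join(row) for row in image)

print(scanline_fill([(3, 1), (17, 4), (6, 9)]))
```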

There is no perfect rendering engine: the best software allows you to mix and match rendering techniques depending on the image you are producing. Each vendor implements these algorithms in a different way, so careful study of your manuals is advised before altering any settings. In some cases, such as LightScape, methods can be combined within a single image to stunning effect.
