This work package concentrates on the core of the radiosity algorithm: error control, visual appearance, and refinement techniques.

The goal of this work package is to ensure physical correctness of the simulation within a given error bound. Furthermore, the effects of visible artifacts have to be reduced, and a more reliable visibility classification scheme will be introduced. Unnecessary element refinement has to be prevented. Finally, the many radiosity parameters have to be simplified so that users without particular and detailed knowledge of radiosity theory can create lighting simulations for arbitrary scene configurations.

Computing the error incurred at the various levels of the hierarchy in the course of a radiosity calculation is a necessary step for steering the simulation properly. However, the real issue with such error bounds is that they tend to be too conservative and therefore quickly become meaningless.

The approach chosen by the ARCADE consortium is to first recognize that the notion of error is highly application-dependent, and then to develop empirical methods, based on the estimation of error bounds, that lead to meaningful assessments of a solution's quality.

We also note that error estimation should be performed automatically by the system, to allow for refinement schemes that are as automatic as possible. This is extremely important for improving the usability of radiosity in real-world applications, especially for end-users who are not familiar with the details of the radiosity process. The definition of error bounds for the clustering algorithm presents a particular challenge, which will be addressed.

This work package will also address the particular issue of error bounding for linear radiosity elements, since they are well suited to real-time display of the computed solutions using existing 3D graphics hardware.

The envisaged strategy consists of using gradient information to bound the radiosity error. The gradient can be used for answering the question: "Does a linear interpolation between the vertex radiosities reconstruct the radiosity function within a given error bound?"
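This test can be sketched in a minimal form for a one-dimensional element. The criterion below is an illustrative assumption, not the ARCADE formulation: if the measured endpoint gradients deviate from the slope of the linear interpolant, a second-order Taylor argument bounds the interpolation error by roughly that deviation times a quarter of the element length (the error peaks mid-element).

```python
def linear_reconstruction_ok(b0, b1, g0, g1, length, eps):
    """Return True if linear interpolation between the endpoint
    radiosities b0 and b1 is, by this heuristic, within eps of the
    true radiosity function over an element of the given length.

    g0, g1 are gradient (derivative) estimates at the endpoints.
    The linear interpolant has constant slope (b1 - b0) / length;
    for a quadratic radiosity function, the maximum deviation of
    the endpoint gradients from that slope, times length / 4,
    equals the exact worst-case interpolation error.
    """
    slope = (b1 - b0) / length
    deviation = max(abs(g0 - slope), abs(g1 - slope))
    return deviation * length / 4.0 <= eps
```

For example, reconstructing a quadratic radiosity profile (values 0 and 1, gradients 0 and 2 over a unit element) yields a worst-case error of exactly 0.25, so the element would be accepted at eps = 0.25 but refined at any tighter bound.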

To increase the visual quality of the radiosity solution, the visibility between polygons has to be classified reliably into visible, occluded, and partially occluded regions.
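The three-way classification above can be sketched by sampling rays between point sets on the two patches. The `blocked(p, q)` callback standing in for the actual ray caster is an assumption for illustration; real systems use more careful sampling and conservative tests.

```python
from enum import Enum
from itertools import product

class Visibility(Enum):
    VISIBLE = 1    # no sample ray is interrupted
    OCCLUDED = 2   # every sample ray is interrupted
    PARTIAL = 3    # mixed results: penumbra / partial occlusion

def classify_visibility(sender_samples, receiver_samples, blocked):
    """Classify the mutual visibility of two patches from sample points.

    `blocked(p, q)` is an assumed ray-casting callback that returns
    True when the segment from sender point p to receiver point q is
    interrupted by occluding geometry.
    """
    results = [blocked(p, q)
               for p, q in product(sender_samples, receiver_samples)]
    if not any(results):
        return Visibility.VISIBLE
    if all(results):
        return Visibility.OCCLUDED
    return Visibility.PARTIAL
```

The partially occluded case is the one that typically triggers further refinement, since it is where shadow boundaries, and hence the strongest radiosity gradients, occur.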

The reduction and elimination of typical artifacts of radiosity solutions will be covered both by this work package (e.g. Mach bands) and by work package one (e.g. shadow leaks).

For the second part of this work package, new refinement "oracles" will be derived from the error metrics established during the first part. In addition, the corresponding radiosity parameters will be simplified and automated, since common experience with all existing hierarchical radiosity systems shows that choosing all the simulation parameters properly still requires expert knowledge. Typical issues include scalability (scene independence) and control of the mesh "sensitivity".
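A refinement oracle of this kind can be sketched as follows. The error proxy used here (sender radiosity times a form-factor estimate times receiver area, i.e. an upper bound on the energy transported across a link) follows the classical hierarchical-radiosity pattern; the names and the subdivision rule are illustrative assumptions, not the ARCADE design.

```python
def refinement_oracle(sender_radiosity, form_factor_estimate,
                      sender_area, receiver_area, eps):
    """Decide whether a candidate link between two elements is
    accurate enough, or which side should be subdivided.

    Error proxy: B_s * F * A_r, a bound on the energy carried by the
    link. Below the threshold eps the link is accepted; otherwise the
    larger element is subdivided, on the heuristic that it contributes
    more to the transfer's variation.
    """
    error = sender_radiosity * form_factor_estimate * receiver_area
    if error <= eps:
        return "link"
    if sender_area >= receiver_area:
        return "subdivide_sender"
    return "subdivide_receiver"
```

Automating the parameters then amounts to deriving eps (and any subdivision limits) from user-level targets such as a desired overall accuracy, rather than exposing them directly.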

Currently we are identifying the different types of users of a lighting simulation system (such as lighting designers, computer animation creators, or interactive users in VR applications), who have different needs and expectations of such a system (high quality, interactive updates, physical correctness) and work with entirely different types of scenarios (interior, exterior, daylight, architectural, car design, ...).

We are comparing different approaches to computing error bounds for hierarchical radiosity algorithms and have identified the need to distinguish between a (constant) bound on the energy transfer and a (linear) bound on the radiosity function on a receiver.
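The distinction can be made concrete with two illustrative helpers (the formulas are assumed simplifications, not the ARCADE bounds): the constant bound yields a single scalar capping the energy arriving on the whole receiver, whereas the linear bound constrains the radiosity function pointwise across the receiver, here via a centre value and a gradient-magnitude bound.

```python
def constant_energy_bound(sender_radiosity, form_factor_max, receiver_area):
    """Single scalar: worst-case energy transferred over a link,
    B_s * F_max * A_r."""
    return sender_radiosity * form_factor_max * receiver_area

def linear_radiosity_bound(b_center, grad_bound, half_extent):
    """Interval containing the radiosity anywhere on the receiver,
    from the value at the element centre and a bound on the gradient
    magnitude over a receiver of the given half-extent."""
    delta = grad_bound * half_extent
    return (b_center - delta, b_center + delta)
```

The constant bound is what a transfer-oriented oracle needs; the linear bound is what a display-oriented criterion (e.g. for linear radiosity elements) needs, which is why the two must be kept separate.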

We are compiling various approaches for visibility detection and classification to find methods that are most suitable within the ARCADE context.

The task of determining appropriate refinement criteria is inherently linked to parts of work package two, where the patch data structures will be defined: for example, should vertex radiosities be used, or are standard patch radiosities sufficient, or even necessary to keep the complexity of the algorithm below a certain level?

The results of this work package are (or will be) presented in the deliverables.