Different rendering modes exist, ranging from simple texture mapping to full radiosity solutions. We address the following approaches.
Texture maps coming from different cameras have to be combined in order to visualize the scene from an arbitrary viewpoint. Approaches based on so-called view-dependent texture mapping produce good results, but ideally require many, densely distributed input images. In our case, the input images stem from a few cameras that are quite distant from one another. Due to imperfect geometric modelling, texture maps from different images may not be well aligned spatially; we want to provide solutions to this problem. Another issue is potentially poor photometric alignment. A full photometric model of the observed objects would allow coherent texture maps to be created, but it may be interesting to study intermediate approaches, e.g. based on simpler models of the change in appearance due to changes in viewing position.
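The core of view-dependent texture mapping is to blend the textures from the input cameras with weights that favour cameras whose viewing direction is close to that of the novel viewpoint. The following is a minimal sketch of one common weighting scheme (angular proximity raised to a power); the function name and the exponent are illustrative choices, not part of the method described above.

```python
import numpy as np

def view_dependent_weights(view_dir, camera_dirs, power=4.0):
    """Blending weights for textures from several cameras.

    All directions are unit vectors pointing from the surface point
    toward the viewpoint. Cameras whose direction is closest to the
    novel view direction receive the largest weight; cameras looking
    from behind the surface are excluded.
    """
    view_dir = np.asarray(view_dir, dtype=float)
    cams = np.asarray(camera_dirs, dtype=float)
    # Cosine of the angle between the novel view and each camera.
    cos = cams @ view_dir
    # Clip back-facing cameras to zero, then sharpen with a power.
    w = np.clip(cos, 0.0, None) ** power
    s = w.sum()
    return w / s if s > 0 else np.full(len(cams), 1.0 / len(cams))

# Example: the novel view coincides with the first camera,
# is 45 degrees from the second, and opposite to the third.
w = view_dependent_weights(
    [0.0, 0.0, 1.0],
    [[0.0, 0.0, 1.0],
     [0.0, 0.7071, 0.7071],
     [0.0, 0.0, -1.0]],
)
```

With few, widely separated cameras (as in our setting), these weights vary strongly across viewpoints, which is precisely why spatial and photometric misalignments between textures become visible.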
Shadows are very important for the realism of rendered scenes, at the geometric level as well as in their graphical appearance.
Light exchanges can be computed from the established geometric and photometric models. They are of two types: local effects are directly due to the light sources (direct illumination, cast shadows), while global effects are due to light reflected by objects in the scene onto other objects or onto themselves. The associated computations are very complex, and trade-offs have to be made between resolution, the amount of pre-computation, and computation time.
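The global effects are what radiosity methods compute: each surface patch gathers light reflected by every other patch until the exchange reaches equilibrium, B = E + rho (F B), where E is emission, rho the patch reflectance, and F the form-factor matrix. The following is a minimal sketch using plain Jacobi iteration on a toy two-patch scene; the numbers are illustrative, not derived from any real geometry.

```python
import numpy as np

def radiosity(emission, reflectance, form_factors, n_iters=50):
    """Iteratively solve the radiosity equation B = E + rho * (F @ B).

    Each iteration propagates one more 'bounce' of reflected light
    between patches; the fixed point is the equilibrium radiosity.
    """
    B = emission.copy()
    for _ in range(n_iters):
        B = emission + reflectance * (form_factors @ B)
    return B

# Toy scene: two facing patches, only the first one emits light.
E = np.array([1.0, 0.0])
rho = np.array([0.5, 0.5])
F = np.array([[0.0, 0.2],
              [0.2, 0.0]])
B = radiosity(E, rho, F)
# The second patch ends up lit purely by reflected (global) light.
```

Even this tiny example makes the cost structure clear: F has one entry per pair of patches, so the resolution of the patch subdivision dominates both the pre-computation (form factors) and the per-solve cost, which is the trade-off mentioned above.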