The GraviLens project was conceived with the goal of providing real-time simulations of gravitational lensing in immersive virtual environments; its successes have, moreover, revealed the usefulness of investigating relativistic phenomena in the context of shared (e.g. networked) immersive environments. The results have been demonstrated at Alliance98 and Supercomputing98; binaries for the SGI platform running Dave Pape's CAVE libraries are also available from this page. Here we briefly describe the subject and certain technical aspects of our implementations to date; theoretical aspects of the approximations used are relegated to an appendix.
Gravitational lensing is a phenomenon arising from the general-relativistic deflection of photons by massive objects: this deflection distorts the perceived environment and produces the ``optical'' effect of multiple images. The subject is a classical one: gravitational lensing was predicted by Einstein, and the possibility was verified in his lifetime with the detection of the deflection of starlight by the sun. Einstein did not expect lensing to be a perceptible phenomenon, since deflection by our sun is of such small magnitude. What he had not foreseen was the, since observed, potential for whole galaxies to act as lensing objects.
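The magnitude of the solar deflection mentioned above follows from the standard weak-field result, which gives a bending angle of 4GM/(c²b) for a ray passing a spherical mass M at impact parameter b. As a minimal sketch (not GraviLens code; constants in SI units), the following recovers the famous ~1.75 arcseconds for a ray grazing the sun:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def deflection_angle(mass_kg: float, impact_param_m: float) -> float:
    """Weak-field deflection angle in radians: alpha = 4GM / (c^2 b)."""
    return 4.0 * G * mass_kg / (C**2 * impact_param_m)

M_SUN = 1.989e30  # solar mass, kg
R_SUN = 6.957e8   # solar radius, m (grazing impact parameter)

alpha = deflection_angle(M_SUN, R_SUN)
arcsec = math.degrees(alpha) * 3600.0
print(f"{arcsec:.2f} arcsec")  # ~1.75 arcsec, as measured in 1919
```

For a galaxy-scale lens the masses are enormously larger, which is why whole galaxies produce the readily observable multiple-image systems discussed below.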
The subject has subsequently become of abiding research interest and has been popularized in the well-known image of the Smithsonian Institution, the images of Werner Benger, and the recent New York Times report on the discovery of a particularly symmetrical lensing galaxy. Despite this popularity, no one, to our knowledge, had yet investigated real-time simulations.
The importance of a real-time simulation lies in the dynamical aspects of the experience: in an immersive environment, one may vary the mass of the lensing object (in our case, a massive star) and the position of observation, thereby experiencing directly the viewpoint dependence of the observed distortion. Conversely, the viewpoint dependence of the calculations makes the problem particularly well suited to the investigation, development, and debugging of shared virtual environments, for, instead of a predefined scene shared (say, over a network) among users, the scene itself is defined by the respective participants. It is then a simple matter to compare views (our ecstatic mode), further enriching the experience both of shared immersion and of the phenomenon of lensing itself.
This unique aspect of the problem, that of viewpoint-dependent calculation, led us first to develop the simulation for the DuoDesk, a two-person (two head trackers, consequently two independent views) ImmersaDesk developed at EVL, UIC. This implementation was demonstrated at Alliance98. A second implementation was developed for networked immersive environments, both CAVE-to-CAVE and CAVE-to-DuoDesk; this was demonstrated at Supercomputing98, running between Orlando and Champaign. These instances reveal the accessibility and appeal of interactive simulations of gravitational lensing, as well as the pertinence of the problem to the development of immersive environments.