In the following discussion all units are natural (i.e. the speed of light c = 1).
We have a set of points in 3D space which are then triangulated to represent the surface of a 3D object. This is a triangular mesh.
Each vertex in the mesh is transformed into camera space, Lorentz boosted, and then transformed to screen space.
The shader is a little more complex, since we also support moving objects, not just a moving camera.
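To make the per-vertex pipeline concrete, here is a minimal Python/NumPy sketch rather than the actual shader code. The function names (`lorentz_boost`, `project`, `transform_vertex`), the single relative velocity between object and camera, and the pinhole projection are all assumptions for illustration, and light-travel-time effects are ignored.

```python
import numpy as np

def lorentz_boost(event, v):
    """Boost an event (t, x, y, z) by velocity v, in natural units (c = 1)."""
    v = np.asarray(v, dtype=float)
    beta2 = v @ v
    if beta2 == 0.0:
        return np.asarray(event, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - beta2)
    t, x = event[0], np.asarray(event[1:], dtype=float)
    t_b = gamma * (t - v @ x)
    x_b = x + ((gamma - 1.0) / beta2 * (v @ x) - gamma * t) * v
    return np.concatenate(([t_b], x_b))

def project(p_cam, focal=1.0):
    """Pinhole projection of a camera-space point (camera looks down +z)."""
    x, y, z = p_cam
    return np.array([focal * x / z, focal * y / z])

def transform_vertex(vertex_world, view, v_rel, t=0.0):
    """World space -> camera space -> Lorentz boost -> screen space."""
    p_cam = (view @ np.append(vertex_world, 1.0))[:3]
    boosted = lorentz_boost(np.concatenate(([t], p_cam)), v_rel)
    return project(boosted[1:])

# Example: a vertex two units in front of a camera moving at 0.5c sideways.
view = np.eye(4)
print(transform_vertex(np.array([0.0, 0.0, 2.0]), view, np.array([0.5, 0.0, 0.0])))
```

In the actual shader this work happens per vertex, and the moving-object support presumably amounts to choosing the right relative velocity for each mesh.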
We also have a scene, which is made up of a camera, lights, meshes, and a set of transformations that tell us how to translate, rotate, and scale the objects.
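A scene with these ingredients might be organized like the following sketch; the class and field names are hypothetical, not the project's actual types.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Transform:
    """How to translate, rotate, and scale an object."""
    translation: np.ndarray = field(default_factory=lambda: np.zeros(3))
    rotation: np.ndarray = field(default_factory=lambda: np.eye(3))  # 3x3 rotation matrix
    scale: np.ndarray = field(default_factory=lambda: np.ones(3))

@dataclass
class Mesh:
    vertices: np.ndarray                 # (N, 3) points in 3D space
    triangles: np.ndarray                # (M, 3) indices into vertices
    transform: Transform = field(default_factory=Transform)
    velocity: np.ndarray = field(default_factory=lambda: np.zeros(3))  # moving objects

@dataclass
class Camera:
    position: np.ndarray
    velocity: np.ndarray                 # natural units, |velocity| < 1

@dataclass
class Light:
    position: np.ndarray
    intensity: float = 1.0

@dataclass
class Scene:
    camera: Camera
    lights: List[Light]
    meshes: List[Mesh]
```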
Given a coordinate $\mathbf{x}$ at time $t$ in camera space and a boost velocity $\mathbf{v}$ (with $\gamma = 1/\sqrt{1 - v^2}$ in natural units), the Lorentz boost gives

$$t' = \gamma \left( t - \mathbf{v} \cdot \mathbf{x} \right), \qquad
\mathbf{x}' = \mathbf{x} + \left( \frac{\gamma - 1}{v^2} \, (\mathbf{v} \cdot \mathbf{x}) - \gamma t \right) \mathbf{v}.$$
For the background we have points at infinity, so the above formulas don't work. But we can derive a boosted direction vector by taking the above expression and taking the limit as the length of $\mathbf{x} = r \, \hat{\mathbf{n}}$ goes to infinity (with the emission time $t = -r$, so the light arrives at the camera at $t = 0$):

$$\hat{\mathbf{n}}' = \frac{\hat{\mathbf{n}} + \left( \frac{\gamma - 1}{v^2} \, (\mathbf{v} \cdot \hat{\mathbf{n}}) + \gamma \right) \mathbf{v}}
{\left\lVert \hat{\mathbf{n}} + \left( \frac{\gamma - 1}{v^2} \, (\mathbf{v} \cdot \hat{\mathbf{n}}) + \gamma \right) \mathbf{v} \right\rVert},$$

where $\hat{\mathbf{n}}$ is the unit vector pointing toward the background point and $\hat{\mathbf{n}}'$ is the corresponding direction in the boosted frame.
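As a sanity check, here is a minimal Python/NumPy sketch of the boosted direction. The names `boost_position` and `boosted_direction` are just for illustration, and the emission time t = -r is my assumption about how the limit is taken; the check compares the limit formula against the normalized boost of a very distant point.

```python
import numpy as np

def boost_position(x, t, v):
    """Spatial part of the Lorentz boost of the event (t, x), natural units."""
    v = np.asarray(v, dtype=float)
    beta2 = v @ v
    gamma = 1.0 / np.sqrt(1.0 - beta2)
    return x + ((gamma - 1.0) / beta2 * (v @ x) - gamma * t) * v

def boosted_direction(n, v):
    """Limit of the boosted position of x = r*n (emitted at t = -r) as r -> infinity."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    v = np.asarray(v, dtype=float)
    beta2 = v @ v
    if beta2 == 0.0:
        return n
    gamma = 1.0 / np.sqrt(1.0 - beta2)
    d = n + ((gamma - 1.0) / beta2 * (v @ n) + gamma) * v
    return d / np.linalg.norm(d)

# The limit should match the normalized boost of a very distant point.
n = np.array([0.0, 1.0, 0.0])
v = np.array([0.6, 0.0, 0.0])
far = boost_position(1e9 * n, -1e9, v)
print(boosted_direction(n, v), far / np.linalg.norm(far))  # both ~[0.6, 0.8, 0.0]
```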
The only information we have about the color of an object comes from RGB textures. So we assume that, for a given texture color, the light reflected from that point in the point's reference frame consists of three wavelengths (red, green, and blue) with intensities proportional to the RGB texture values. For mapping purposes, I assumed the following wavelength conversions:
| Color | Wavelength (nm) |
| --- | --- |
| Red   | 626 |
| Green | 534 |
| Blue  | 465 |
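The conversion amounts to a small lookup plus, presumably, a Doppler shift of each wavelength. The sketch below uses the values from the table; the `doppler_shift` helper, which applies the standard relativistic Doppler factor for a purely radial velocity, is my own illustrative addition rather than something specified above.

```python
import numpy as np

# Channel wavelengths from the table above, in nanometers.
CHANNEL_WAVELENGTHS_NM = {"red": 626.0, "green": 534.0, "blue": 465.0}

def texture_to_spectrum(rgb):
    """Treat a texture color as light at three wavelengths, with intensities
    proportional to the RGB values, in the point's rest frame."""
    r, g, b = rgb
    return [(CHANNEL_WAVELENGTHS_NM["red"], r),
            (CHANNEL_WAVELENGTHS_NM["green"], g),
            (CHANNEL_WAVELENGTHS_NM["blue"], b)]

def doppler_shift(spectrum, beta_radial):
    """Shift each wavelength by the relativistic Doppler factor for a purely
    radial velocity beta_radial (natural units; positive means receding)."""
    factor = np.sqrt((1.0 + beta_radial) / (1.0 - beta_radial))
    return [(wavelength * factor, intensity) for wavelength, intensity in spectrum]

# Example: a warm texture color on a point receding at 0.3c.
print(doppler_shift(texture_to_spectrum((1.0, 0.5, 0.2)), 0.3))
```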
For mapping intensity, I didn't use anything formal; I just picked a mapping that produces reasonable-looking results.