Traces Real Axion X-ray Emission Rays? Or something like that. Who knows, naming things is always fun, huh.
This is a fork of my Ray Tracing in One Weekend implementation (see Ray Tracing in One Weekend), which extends it significantly.
It is an easier to use, easier to extend and more interactive version of the raytracer part of https://github.com/jovoy/AxionElectronLimit.
It adds some features from the second book and others from the amazing https://pbr-book.org/, but most importantly it makes the raytracer a useful tool for my work:
It is now a raytracer for X-rays (in the context of Axion helioscopes like CAST or (Baby)IAXO).
To make it actually useful, it has an interactive raytracing mode using SDL2, so that you can investigate the scene geometry in real time. The standard approach of sampling rays from the camera is extended by a secondary ray sampling, which samples rays directly from light sources towards `LightTargets`. In addition, with `ImageSensors` we can visualize the accumulation of rays on a sensor in the scene (and save them to binary buffers with F5).
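To illustrate the idea behind the secondary sampling, here is a minimal sketch (the `Vec3`/`Ray` types, the `sampleTowardsTarget` name and the disk shaped target are illustrative placeholders, not the actual API of this code base): instead of only starting rays at the camera, we also start rays on a light source and aim them at a point sampled on the target.

```nim
import std/[math, random]

type
  Vec3 = object
    x, y, z: float
  Ray = object
    origin, dir: Vec3

proc `-`(a, b: Vec3): Vec3 = Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)

# Sample a ray starting on a light source, aimed at a uniformly sampled
# point on a disk shaped `LightTarget` (placeholder geometry).
proc sampleTowardsTarget(rnd: var Rand; lightPos, targetCenter: Vec3;
                         targetRadius: float): Ray =
  let
    r = targetRadius * sqrt(rnd.rand(1.0)) # sqrt => uniform over the disk
    phi = rnd.rand(2 * PI)
    p = Vec3(x: targetCenter.x + r * cos(phi),
             y: targetCenter.y + r * sin(phi),
             z: targetCenter.z)
  Ray(origin: lightPos, dir: p - lightPos) # normalization omitted
```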
Every time you move the camera, the image buffer is reset. Until you do so, however, it accumulates more and more rays.
The only real dependency is SDL2, which should be easy to install on any Linux distribution.
Outside of that it uses malebolgia for multithreading. In addition, the `--solarModelFile` parsing depends on https://github.com/SciNim/datamancer and its dependencies.
Install both via:

```sh
nimble install malebolgia datamancer
```
Once you click on the window, it activates mouse control. To exit mouse control, press Escape.
- `W, A, S, D` (or arrow keys): movement
- `Ctrl`, `Space`: up, down (a bit broken)
- `T`: switches to light tracing mode when single threaded (irrelevant nowadays)
- `PageUp`, `PageDown`: increase / decrease the movement speed
- `TAB`: invert the view front to back
- `F5`: save the current buffers (image, counts and all `ImageSensors`) to the `out` directory, which can be plotted using the `plotBinary` helper script
The CLI is auto-generated by cligen. Help texts still need to be added, but the basics should be pretty obvious.
```
Usage:
  main [optional-params]
Looking at mirrors!
Options:
  -h, --help                                 print this cligen-erated help
  --help-syntax                              advanced: prepend,plurals,..
  -w=, --width=            int         600   set width
  -m=, --maxDepth=         int         5     set maxDepth
  -s=, --speed=            float       1.0   set speed
  --speedMul=              float       1.1   set speedMul
  --spheres                bool        false set spheres
  -l, --llnl               bool        false set llnl
  --llnlTwice              bool        false set llnlTwice
  -f, --fullTelescope      bool        false set fullTelescope
  -a, --axisAligned        bool        false set axisAligned
  --focalPoint             bool        false set focalPoint
  -v=, --vfov=             float       90.0  set vfov
  -n=, --numRays=          int         100   set numRays
  --visibleTarget          bool        false set visibleTarget
  -g, --gridLines          bool        false set gridLines
  -u, --usePerfectMirror   bool        true  set usePerfectMirror
  --sourceKind=            SourceKind  skSun select 1 SourceKind
  --solarModelFile=        string      ""    set solarModelFile
  --nJobs=                 int         16    set nJobs
  -r=, --rayAt=            float       1.0   set rayAt
```
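For example (assuming the compiled binary is called `main`, as the usage text suggests), the LLNL scene with a narrower field of view and more rays per iteration might be launched like this:

```sh
./main --llnl --vfov=30 --numRays=1000 --nJobs=8
```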
Pictures of the first implementation of the LLNL CAST telescope setup (without any external pipes of course!)
You get this scene if you use the `--llnl` option.

You get this scene if you use the `--llnl --fullTelescope` options.

You get this scene if you use the `--llnl --llnlTwice` options.
- [ ] Use a colormap for the `ttLights` view -> Will this even remain? We could also implement it by just having the key bring the camera directly to the image sensor and/or reading from the `sensorBuf` instead of having a separate sampling proc.
- [X] Refactor the `ttLights` code to also work multithreaded -> Maybe just rely on the actual `Sensor` buffer? And then when in `ttLights` mode we could only sample rays based on the sources and simply display the image sensor buffer? That way we wouldn't get into the trouble of having to read and write from the temp buffer in places we don't know where they might end up.
  - [X] the underlying buffer is now protected by a `Lock`, both for read and write access
  - At this point it should be pretty much really GC safe. Each thread has its own RNG, `Camera` and `HittablesList`. Relevant information between them is synced after each tracing run is done.
  - [X] ORC is still not happy though. -> I marked all ref types as `acyclic` (hittables and the textures). This seems to have fixed it. They are indeed acyclic. We have no intention of constructing ref cycles after all. (A sketch of both the lock and the pragma is shown below.)
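For context, a minimal sketch of the two pieces mentioned above (the types are illustrative, not the actual ones of this project): the `{.acyclic.}` pragma tells ORC a ref type never participates in reference cycles, and the sensor buffer is only ever touched under its `Lock`.

```nim
import std/locks

type
  Texture {.acyclic.} = ref object   # illustrative ref types: marked
  Hittable {.acyclic.} = ref object  # acyclic so ORC skips cycle checks
    tex: Texture

  Sensor = object
    lock: Lock      # guards both read and write access to `buf`
    buf: seq[float]

proc init(s: var Sensor; len: int) =
  initLock s.lock
  s.buf = newSeq[float](len)

proc deposit(s: var Sensor; idx: int; val: float) =
  withLock s.lock:  # every thread accesses the buffer under the lock
    s.buf[idx] += val
```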
- [X] Disable sensitivity of
ImageSensor
to rays emitted by theCamera
directly! (I guess they kind of act like noise haha) -> I gave theRay
type aRayType
field that indicates if the initial ray was created from theCamera
or directly from a light source. NOTE: This currently implies that if a ray is shot from the camera, hits a light source and then hits an image sensor, no counts are recorded on the image sensor! Only the rays we sample directly from light sources count. BUT: Currently our light sources act as perfect sinks anyway! They do not scatter light, so this scenario does not exist anyway. - [ ] A BVH node of the telescope is currently broken! It causes all sorts of weirdness. The shape is correct, but the lighting is very wrong!
- [X] Make the multithreaded code safer. It's still a bit crazy. -> taken care of by using acyclic & a lock on the sensor
- [ ] Implement X-ray reflectivities
- [X] Check LLNL telescope setup
- [X] Implement a `Target` material that is used when sampling rays for `DiffuseLight`.
- [X] For parallel light use `Laser` instead!
- [X] Implement a `SolarEmission` material to sample correctly based on the flux per radius CDF.
- [ ] COULD WE add an option to "draw" a sampled ray? I think it should be pretty easy!
  - have a button, say F8; when pressed it samples a ray
  - trace the entire ray and mark each hit, store each intermediate ray
  - construct cylinders for each ray segment and overlay them on the sampled ray -> How? (a sketch of the construction is shown below)
    - use the ray origin as the target position for the cylinder origin
    - use the ray direction as the guide for the required rotation
    - use (hit position - start) as the length, making it end at the hit
  - add these cylinders to the `world` `HittablesList`
  - update the `world` of each thread's data
  - upon new sampling the ray should appear

  Potential problem: the added ray interferes with the image we see on the `ImageSensor`! -> Add only to the `world` that is used for the `Camera`!
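A minimal sketch of that cylinder construction (the vector helpers and the axis-angle representation are illustrative; the actual code will differ): place the cylinder at the ray origin, rotate its default +z axis onto the ray direction, and stretch it to end at the hit.

```nim
import std/math

type
  Vec3 = object
    x, y, z: float

proc `-`(a, b: Vec3): Vec3 = Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)
proc length(v: Vec3): float = sqrt(v.x * v.x + v.y * v.y + v.z * v.z)
proc normalize(v: Vec3): Vec3 =
  let l = v.length
  Vec3(x: v.x / l, y: v.y / l, z: v.z / l)
proc dot(a, b: Vec3): float = a.x * b.x + a.y * b.y + a.z * b.z
proc cross(a, b: Vec3): Vec3 =
  Vec3(x: a.y * b.z - a.z * b.y,
       y: a.z * b.x - a.x * b.z,
       z: a.x * b.y - a.y * b.x)

type
  RayCylinder = object
    origin: Vec3    # cylinder starts at the ray origin
    rotAxis: Vec3   # rotate the default +z axis around this axis ...
    rotAngle: float # ... by this angle to align with the ray direction
    length: float   # |hit - origin|, so the cylinder ends at the hit
    radius: float

proc toCylinder(origin, hit: Vec3; radius = 0.01): RayCylinder =
  let
    seg = hit - origin
    dir = seg.normalize
    zAxis = Vec3(x: 0.0, y: 0.0, z: 1.0)
  RayCylinder(origin: origin,
              rotAxis: cross(zAxis, dir).normalize, # degenerate if dir is parallel to z!
              rotAngle: arccos(clamp(dot(zAxis, dir), -1.0, 1.0)),
              length: seg.length,
              radius: radius)
```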
- [ ] Add an `ImageSensorRGB` which stores the colors in each pixel. That way we can give each layer a color, e.g. using ggplot2 colors, and then tell where rays from each layer end up on the sensor!
- [ ] Implement an `ImageSensorVolume` thing (a sketch of the sampling is shown after this item). Then sample a conversion point at that distance and take the projected point as the point hit on the actual readout! This allows us to do simplified gas detectors. How? The material of the 'box' would be the volume. If hit, sample a conversion point given the gas (set up initially) and the ray energy. Based on that, propagate the corresponding distance and activate the pixel at the 'bottom'! I like this idea. WE COULD EVEN couple this idea with our fake event generator! For each conversion point, generate an event and process it! -> If we did that we could have a real time event simulator, lol, by resetting the `ImageSensorVolume` and lowering the number of rays shot to the detector! -> Well: the downside is of course that the camera still has to sample rays to see what is shown on the `ImageSensorVolume` in the live event display. For that we need multiple-second exposures to see anything after all. Combining that makes it not very live anymore. We would need some way to directly map image sensor data to the camera. That might not be too hard if we had something more like a raycasting approach to the `ImageSensor`. I.e. before tracing a ray in the classical approach, do a test ray cast and see if we hit a pixel, and in that case skip the actual raytracing? But aren't we already doing that in some sense? If we hit an `ImageSensor` we don't actually continue. Ah! But we still check everything in the scene to be sure nothing is closer than the sensor! I guess that makes it slow? Need the bounding boxes working! -> Fixing those should make sampling towards the image sensor very fast, if my theory is correct.
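A minimal sketch of the conversion point sampling (names and the absorption length are placeholders for whatever the gas mixture and ray energy dictate): invert the exponential absorption CDF to draw a depth, and only count the ray if it converts inside the volume.

```nim
import std/[math, random]

# X-rays convert in the gas following P(d) = 1 - exp(-d / absLen), where
# absLen is the absorption length for the given gas mixture and ray energy.
# Inverting the CDF gives d = -absLen * ln(1 - u) for uniform u.
proc sampleConversionDepth(rnd: var Rand; absLen: float): float =
  -absLen * ln(1.0 - rnd.rand(1.0))

proc convertsInVolume(rnd: var Rand; absLen, thickness: float): bool =
  ## A count is only deposited if the sampled conversion point lies within
  ## the volume; the activated pixel at the 'bottom' is then the (x, y) of
  ## the ray propagated to that depth, projected onto the readout plane.
  sampleConversionDepth(rnd, absLen) <= thickness
```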
- [ ] Add a new shape that allows subtracting one shape from another, i.e. have a rectangle and subtract a disk from the center. Should be easy, just combine their individual `hit` results using boolean logic. Subtraction is `hitsA and not hitsB`, for example (a sketch is shown below).
- [ ] ADD SANITY CHECK that verifies that in our current model the bottom part of layer i+1 is indeed perfectly flush with the top part of layer i!
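A minimal sketch of that boolean logic (with stand-in `hitA` and `insideB` procedures): a hit on A only counts if the hit point does not lie inside B. Note this point test suffices for thin shapes like a rectangle minus a disk; for solid shapes one would have to track entry/exit intervals instead.

```nim
type
  Vec3 = object
    x, y, z: float
  HitRecord = object
    t: float
    point: Vec3

# `hitA` and `insideB` are stand-ins for the real shape procedures.
proc hitSubtraction(rec: var HitRecord;
                    hitA: proc (r: var HitRecord): bool;
                    insideB: proc (p: Vec3): bool): bool =
  ## A minus B: the ray hits the combined shape iff it hits A and the
  ## hit point is not inside B, i.e. `hitsA and not hitsB`.
  hitA(rec) and not insideB(rec.point)
```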
- [ ] Implement an alternative to `sSum` where we keep the entire flux data! This would allow us to directly look at the axion image at arbitrary energies. It would need a `Sensor3D` to model the flux as the third dimension. Note: when implementing that, it seems a good idea to order the data such that we can copy an entire `Spectrum` row into the sensor. So the last dimension should be the flux, i.e. `[x, y, E]`. That way copying the `Spectrum` should be efficient, which should be the most expensive operation on it (a sketch of this layout is shown after this list).
- [ ] Implement a toggle between the `XrayMatter` behavior for X-rays and light for the `Camera`!
- [ ] FIX the `Camera` usage of attenuation for `XrayMatter` when calling `eval`! Need to convert to an `RGBSpectrum`.
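A minimal sketch of the intended `[x, y, E]` memory layout (the `Sensor3D` shape here is hypothetical): with the energy axis innermost, the full spectrum of one pixel is contiguous, so copying or accumulating a `Spectrum` row is a single linear pass over memory.

```nim
type
  Sensor3D = object
    width, height, nEnergies: int
    buf: seq[float] # flattened as [x, y, E]; E is the fastest axis

proc idx(s: Sensor3D; x, y, e: int): int =
  (y * s.width + x) * s.nEnergies + e

proc addSpectrum(s: var Sensor3D; x, y: int; spectrum: openArray[float]) =
  ## Because the full spectrum of a pixel is contiguous in `buf`, adding
  ## (or copying) a `Spectrum` row is a single linear pass over memory.
  let start = s.idx(x, y, 0)
  for e in 0 ..< s.nEnergies:
    s.buf[start + e] += spectrum[e]
```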
NOTE: The LLNL simulation is slightly wrong in the sense that the mirrors actually become smaller towards the end (with smaller radius), because we use real cone cutouts. In reality each mirror starts from flat glass of 225 mm length and a fixed width, which is simply curved into a cone shape. That means the required 'angle' φ_max actually increases along the axis. We don't model this, but the effect should be very marginal. See fig. 1.4 of the DTU thesis about the LLNL telescope.