Replies: 3 comments
-
Hi felme1, According to the FAQ here, you can roughly estimate the memory requirements as 96 × sx × sy × sz × r³ bytes; plugging in, 96 × 33 × 10.5 × 6 × 50³ ≈ 25 GB of memory. However, I suspect the main cause of the high memory usage is not Meep itself but your `SimulationHandler` class. It appears that every dt = 1, `updateField` takes a slice of the cell and pulls 6 field components, so each slice is (33 × 50) × (10.5 × 50) × 6 ≈ 5.2 million values, or at 8 bytes each about 42 MB per timestep. It then appends the 6 slices to 6 running lists, which you are holding in memory as numpy arrays, meaning by the end of the 260 timesteps there will be an extra ~11 GB of data in RAM. I'm not sure where the very large (~TB?) level of memory usage you saw may be coming from, but Python list appends use over-allocation once the number of list elements exceeds certain sizes, which may account for some of the extra. Since it doesn't look like this `SimulationHandler` logging is being used for any further calculation, I would suggest either removing it and relying on your S-parameter calculation, or streaming the data to disk directly instead of storing it in RAM. This does not seem like a Meep issue. Hope this helps!
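The estimates above can be checked with a few lines of arithmetic (all numbers are taken from this thread; the 96 bytes/grid-point figure is the rule of thumb from the Meep FAQ):

```python
# Back-of-the-envelope check of the memory estimates above.

sx, sy, sz = 33.0, 10.5, 6.0   # cell size in um
res = 50                        # pixels per um

# Base simulation memory: ~96 bytes per grid point.
base_bytes = 96 * sx * sy * sz * res**3
print(f"base cell memory: ~{base_bytes / 1e9:.0f} GB")          # ~25 GB

# One 2D slice, 6 field components, 8 bytes per double.
slice_values = (sx * res) * (sy * res) * 6
slice_bytes = 8 * slice_values
print(f"per-timestep slice: ~{slice_bytes / 1e6:.0f} MB")       # ~42 MB

# Accumulated over the 260 logged timesteps.
logged_bytes = slice_bytes * 260
print(f"logged after 260 steps: ~{logged_bytes / 1e9:.0f} GB")  # ~11 GB
```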
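The streaming-to-disk suggestion can be sketched with a numpy memmap. The sizes below are toy values so the sketch runs quickly (a real run at this resolution would be closer to `(260, 6, 1650, 525)`), and the filename is made up:

```python
import numpy as np

# Write each timestep's field slice straight to an on-disk .npy file
# instead of appending to in-RAM lists.
n_steps, n_comp, nx, ny = 10, 6, 64, 32  # toy sizes for illustration

fields = np.lib.format.open_memmap(
    "field_slices.npy", mode="w+",
    dtype=np.float64, shape=(n_steps, n_comp, nx, ny),
)

for t in range(n_steps):
    # In the real handler this would come from sim.get_array(...) for
    # each field component; zeros stand in for the slice here.
    fields[t] = np.zeros((n_comp, nx, ny))

fields.flush()
# Later the file can be re-opened lazily with
# np.load("field_slices.npy", mmap_mode="r") for post-processing.
```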
-
Hi theogdoctorg, thank you for the quick reply! However, I still have some questions:
Thank you in advance!
-
Dispersive materials definitely take more memory and computation time. It depends on how many Lorentz–Drude terms you are using — I would suggest fitting a minimal number to your bandwidth of interest rather than using the materials library (which might use a much larger number to fit a larger bandwidth).
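As a very rough sketch of why each susceptibility term costs memory: the FDTD update for a dispersive medium carries auxiliary polarization fields alongside E. The two-arrays-per-component figure below is an illustrative assumption, not a number from the Meep docs, so treat the output as order-of-magnitude only:

```python
# Hypothetical back-of-the-envelope for dispersive-material overhead.
# ASSUMPTION: each Lorentz-Drude term stores ~2 extra double-precision
# arrays per E-field component (e.g. current and previous polarization);
# Meep's actual internal storage may differ.

sx, sy, sz, res = 33.0, 10.5, 6.0, 50
n_points = sx * sy * sz * res**3  # ~2.6e8 grid points

def extra_gb(n_terms, arrays_per_term=2, components=3, bytes_per=8):
    """Rough extra memory in GB for n_terms susceptibility terms."""
    return n_terms * arrays_per_term * components * bytes_per * n_points / 1e9

for terms in (1, 3, 5):
    print(f"{terms} term(s): ~{extra_gb(terms):.0f} GB extra")
```

Under this assumption, every additional term costs on the order of 10 GB at this cell size and resolution, which is why fitting a minimal number of terms to your bandwidth matters.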
Are you using an MPI (parallel) version of Meep? Check that first. (Note that even for a parallel version of Meep you won't get linear speedup; with all parallel programs there is eventually a diminishing return as you add more processes, depending on the computation time.)
-
Hello,
I am running the following code on a Rocky Linux machine. Everything seems to be working just fine; at least the data makes sense. However, the memory required to run this simulation is already quite large. With `is_3D` set to False it used up more than 20 GB of RAM, and when I ran it with `is_3D` set to True the usage went up to 1.5 TB. Of course the runtime is very long as well: the 3D run took about 24 hours.
This seems crazy high to me; am I doing something wrong here?
The simulated volume is 33 µm by 10.5 µm by 6 µm with a resolution of 50 (this resolution is chosen because I would like to use dispersive materials). I understand that this might be quite a large simulation, but 1.5 TB still seems quite excessive to me.
Ideally I would like to use the dispersive materials, but when I try to run the simulation with these materials, the 2.2 TB of storage on this machine that my university gave me access to is not enough.
Any help or clarification on this would be greatly appreciated!
Here is a picture of the simulation setup, so you can get a feeling for the GDS file as well. I have also attached the compressed GDS file and the full code.
hybrid_coupler.zip
meep_sim_of_hybrid_coupler.zip