New benchmarks discussion #6
-
Desired improvements to benchmarks
Two things I'd like to improve upon from the benchmarks in the Preliminary research:
Potential for collaboration?
I will likely maintain some repo of config files for running the containers over on the actuarialopensource GitHub org.
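For concreteness, here is a minimal sketch of what such a runner could look like. Everything in it is invented for illustration - the image names, the config layout, and the assumption that each container prints its results to stdout are not taken from any existing actuarialopensource repo:

```python
"""Hypothetical runner for containerized benchmarks (sketch only)."""
import subprocess

# Made-up config: one entry per benchmark container.
BENCHMARKS = [
    {"name": "lifelib-python", "image": "ghcr.io/example/lifelib-bench:latest"},
    {"name": "julia-model", "image": "ghcr.io/example/julia-bench:latest"},
]

def run_benchmark(spec: dict) -> str:
    """Run one benchmark container and capture whatever it prints."""
    out = subprocess.run(
        ["docker", "run", "--rm", spec["image"]],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

if __name__ == "__main__":
    for spec in BENCHMARKS:
        print(spec["name"], run_benchmark(spec))
```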
-
Re: Benchmarks
All good ideas - I originally wanted to create runnable containers for the reasons you listed, but gave up for the time being due to the complexity of putting it all into containers.
One thing about cloud benchmarks, though: unless you are running on dedicated hardware, you can get temporary slowdowns when the server you are running on gets a heavy request from another user. You can smooth this out by averaging across many attempts, but for the extra runtime you might as well buy dedicated server time. (A sketch of this smoothing approach is at the end of this reply.)
Re: Workloads
Agreed about the workloads - comments on the existing:
Re: Lifelib alternative
I love the idea of a more fully featured modeling option. It's something I've thought a lot about and ultimately want to build up to. It should probably not use the same "lifelib" name, so that lifelib retains its well-deserved acclaim. What are you thinking for the design of such a package? Some misc thoughts I've had:
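To illustrate the smoothing idea from Re: Benchmarks above, a minimal sketch - `workload()` is just a stand-in for whatever model run is being timed, and the repetition count and trim fraction are arbitrary choices, not anything from the benchmarks repo:

```python
"""Sketch: smoothing noisy cloud timings by repeating a benchmark."""
import statistics
import time

def workload() -> None:
    # Placeholder for the actual model run being benchmarked.
    sum(i * i for i in range(100_000))

def timed_runs(fn, n: int = 30) -> list[float]:
    """Time `fn` n times with a monotonic high-resolution clock."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return samples

if __name__ == "__main__":
    samples = sorted(timed_runs(workload))
    # Drop the fastest/slowest 10% of runs before averaging.
    k = len(samples) // 10
    trimmed = samples[k : len(samples) - k]
    # The median and a trimmed mean both damp one-off "noisy neighbor"
    # spikes far better than a single run or a plain mean would.
    print(f"median  : {statistics.median(samples):.6f} s")
    print(f"trimmed : {statistics.fmean(trimmed):.6f} s")
```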