
Dashboard displaying multiple tasks per core and multiple reads/writes per disk #170

Closed
henricasanova opened this issue May 15, 2020 · 6 comments · Fixed by #204

@henricasanova
Contributor

henricasanova commented May 15, 2020

Currently the dashboard Host Utilization graph assumes that there is at most one task per core, and it does not display disk usage. But in fact, in many simulations there can be more tasks than cores on a host, and more than one transfer at a time per disk. Here are some thoughts on how to display these situations, starting with the disk and then showing how a host could be handled in much the same way.

Disk: We can easily identify the time intervals during which the number of ongoing disk operations is constant. Say that between times t1 and t2 there are n ongoing operations. Then we simply show n rectangles splitting the disk bandwidth (i.e., the height of the pink rectangle in the display) into n identical rectangles, each with 1/n-th of the full height of the disk bandwidth. The one drawback is the situation in which, say, from t1 to t2 we have 2 operations and one of them finishes at time t2, while from t2 to t3 the other transfer continues. As described above, the display would look exactly the same if both operations finished at time t2 and a third transfer started at time t2. A much fancier display would remove some vertical lines at time t2 to show that one transfer first proceeds at 1/2 the bandwidth and then at full bandwidth (in other words, the transfer is not shown as two rectangles, but instead as an L-shaped polygon). Creating this fancier display in arbitrary cases is likely NP-hard though...
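Purely as a sketch of the splitting idea above (not the dashboard's actual code; the `Operation`, `Rect`, and `utilizationRects` names are made up for illustration): sweep over the distinct start/end times of the operations to get the sub-intervals on which the number of ongoing operations is constant, then draw n stacked rectangles per sub-interval, each 1/n of the bandwidth height.

```typescript
interface Operation { start: number; end: number; }
interface Rect { t1: number; t2: number; y: number; height: number; }

function utilizationRects(ops: Operation[], bandwidthHeight: number): Rect[] {
  // Every start/end time is a point where the number of ongoing operations may change.
  const times = Array.from(new Set(ops.flatMap(op => [op.start, op.end]))).sort((a, b) => a - b);
  const rects: Rect[] = [];
  for (let i = 0; i + 1 < times.length; i++) {
    const t1 = times[i], t2 = times[i + 1];
    // Operations ongoing throughout [t1, t2]; none can start or end strictly inside it.
    const n = ops.filter(op => op.start <= t1 && op.end >= t2).length;
    if (n === 0) continue;
    const height = bandwidthHeight / n; // each operation gets 1/n of the disk bandwidth
    for (let k = 0; k < n; k++) {
      rects.push({ t1, t2, y: k * height, height });
    }
  }
  return rects;
}

// Two reads overlapping on [2, 5] produce two half-height rectangles on that sub-interval.
console.log(utilizationRects([{ start: 0, end: 5 }, { start: 2, end: 8 }], 1.0));
```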

Network Link: No issue here, because we do not differentiate between individual network flows (we don't have that information).

Host: The idea here is to do the same as for the disk if the number of tasks is greater than the number of cores, and what we currently do otherwise. This makes sense in terms of what the simulation actually does. Say we have a 4-core host with 5 tasks: each task really gets 4/5 of a core, so we should show them as 5 identical rectangles of height 4/5. Here again there could be a fancier display that shows tasks as polygons, but it's likely NP-hard...
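For the host case, the per-task rectangle height would just be the fraction of a core each task gets. A minimal sketch, assuming the same rectangle-splitting approach as for the disk (the `perTaskHeight` name is hypothetical):

```typescript
// Fraction of a core each task gets, and thus the height of each task rectangle
// (in core units), when tasks may outnumber cores.
function perTaskHeight(numCores: number, numTasks: number): number {
  return numTasks > numCores ? numCores / numTasks : 1;
}

console.log(perTaskHeight(4, 5)); // 0.8 -> five rectangles of height 4/5
```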

@rafaelfsilva
Member

@henricasanova, do we currently have an example (i.e., a JSON file) in which multiple I/O operations happen simultaneously on the same disk? I was thinking about how to display such data, and it can get really tricky. It would be great if you could generate an example JSON file so I can put more thought into it. Thanks!

@henricasanova
Contributor Author

This is basically the same issue as plotting multiple compute operations on a single core. I can't think of an existing example, so I'll make up a simulator to generate one. Should be pretty easy.

@henricasanova
Contributor Author

Here is a JSON file from a one-off simulator I developed. Let me know if it's sufficient. There is one disk that has concurrent reads.

concurrent_io.json.txt

@rafaelfsilva rafaelfsilva modified the milestones: 1.7, v1.8 Sep 2, 2020
@gjethwani gjethwani mentioned this issue Oct 13, 2020
@henricasanova
Contributor Author

@gjethwani @rafaelfsilva Here is another, more complicated JSON file. It's generated by a one-off simulator available at https://github.com/wrench-project/simulator_io_for_dashboard_testing (it should be easy to modify to generate all kinds of output).
simulation.json.txt

@gjethwani
Contributor

Hey @henricasanova @rafaelfsilva, what do you think?

(Screenshot attached: Screen Shot 2020-10-22 at 9.50.42 PM)

@gjethwani
Contributor

If you can provide it, it might also be useful to generate a JSON file that includes failure data.
