Benchmark filesystems with lots of changes #4
Comments
We should 100% check this, but I'll be pretty surprised if it turns out to be the actual bottleneck. Here's my thinking:
Which isn't to say that trying to sync a new, really deep structure won't take a lot of round trips, but each of those should be pretty lightweight because it's just pointers. That of course compounds: if you need to do 10k round trips because of bitswap, yeah, that's pretty rough. I can absolutely be wrong (the above is pure theory) and we should definitely test this empirically 🔬🧪👍
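To make the compounding concrete, here's a minimal runnable sketch (plain TypeScript with a simulated network, not the actual webnative or bitswap API): it models a linked chain of tiny pointer blocks where each child CID is only known after its parent arrives, so fetches serialize and total time is roughly round trips × RTT.

```typescript
// Sketch (hypothetical, not the real API): why syncing a deep pointer
// structure compounds round trips. Each block is tiny, but a child's
// CID is only known after its parent arrives, so fetches serialize.

type CID = string

// Toy in-memory "network": each lookup simulates one round trip.
const RTT_MS = 100
const store = new Map<CID, CID[]>() // cid -> child cids (just pointers)

async function fetchLinks(cid: CID): Promise<CID[]> {
  await new Promise((resolve) => setTimeout(resolve, RTT_MS))
  return store.get(cid) ?? []
}

// Build a linked chain of depth `n`, like a long file history.
function buildChain(n: number): CID {
  let child: CID[] = []
  for (let i = n - 1; i >= 0; i--) {
    const cid = `block-${i}`
    store.set(cid, child)
    child = [cid]
  }
  return "block-0"
}

async function main(): Promise<void> {
  const root = buildChain(50)
  const start = Date.now()
  let cid: CID | undefined = root
  let hops = 0
  while (cid !== undefined) {
    const links = await fetchLinks(cid) // dependent round trip
    hops++
    cid = links[0]
  }
  // ~50 hops × 100 ms ≈ 5 s of pure latency for a few KB of pointers.
  console.log(`${hops} round trips in ${Date.now() - start} ms`)
}

main()
```

Sibling links could be fetched in parallel to cut wall-clock time, but a purely linear history chain can't be parallelized, since each CID is only discovered from the previous block.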
That sounds interesting! I'd love to hear more.
Yeah, I had this issue in my notes and talked to James about it. I just wanted this to be persisted somewhere. It's all theory, of course. Maybe this would've been more appropriate somewhere else 🤔
With the known-affected account, I opened the public directory in Drive and uploaded a ~400 KB image. It took over a minute to complete.
I think that this is a great place to record this, and thanks for writing it down! We absolutely need to check this assumption. My comments above were mainly (1) stating my assumptions, (2) getting the conversation going, and (3) noting that when we test this, we should account for the factors in my (and others') existing assumptions, for example the round-trip and bitswap costs discussed above.
Ah right, I thought you reproduced this on a new account 😅
One of my accounts has lots of filesystem history entries for its public files. It might make sense to benchmark this. (Just wanted to write this down somewhere; this came up in fission-codes/fission#489.)
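For concreteness, here's a rough sketch of what such a benchmark could look like (the `FileSystem` interface below is a hypothetical stand-in, not the actual webnative API): perform many sequential writes and watch whether per-write latency trends upward as history accumulates.

```typescript
// Hypothetical benchmark harness: `FileSystem` is a stand-in interface,
// not the real API. The idea is to check whether write latency grows
// with the number of prior history entries.

interface FileSystem {
  write(path: string, content: Uint8Array): Promise<void>
}

async function benchmarkWrites(fs: FileSystem, totalWrites: number): Promise<void> {
  const payload = new Uint8Array(1024) // 1 KiB of dummy content per write
  for (let i = 1; i <= totalWrites; i++) {
    const start = Date.now()
    await fs.write(`public/bench/file-${i}.bin`, payload)
    const elapsed = Date.now() - start
    // If `elapsed` trends upward as `i` grows, history size is a factor.
    if (i % 100 === 0) {
      console.log(`write #${i}: ${elapsed} ms`)
    }
  }
}
```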