Aggregating concurrent trace results #4
Good question! I'm working on a paper at the moment dealing with this problem in a general way. If performance isn't a big issue (and for converting traces, it's no big deal) then there's a relatively simple algorithm for doing this conversion, involving essentially replaying the entire network of CRDT peers on a single machine.

That said, I think this code won't run cleanly at the moment because I made some changes to the editing trace file format. I might publish an example conversion script in javascript using automerge or something; I think I have the same algorithm implemented in js kicking around somewhere. If you have a CRDT implementation which allows instances to be cloned and merged, then it should be relatively straightforward to retrofit something.

On the topic of testing data, I'd love some more editing traces if you're keen to make some more! I just noticed your json-crdt-traces repo.
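For anyone retrofitting this, here is a rough sketch of the replay idea, assuming only that your CRDT supports `clone` and `merge`. The trace format (`peer`, `parents`, `value`) and the toy grow-only set standing in for a real text CRDT are both my own assumptions, not the actual trace file format:

```javascript
// Toy grow-only set CRDT standing in for a real text CRDT.
// All it needs for the replay algorithm is clone() and merge().
class GSet {
  constructor(items = []) { this.items = new Set(items); }
  clone() { return new GSet(this.items); }                  // snapshot a peer's state
  merge(other) { for (const x of other.items) this.items.add(x); }
  add(x) { this.items.add(x); }                             // the "edit" for this toy CRDT
}

// Hypothetical concurrent trace: each op names its peer and the ops it
// causally depends on (its parents).
const trace = [
  { id: 0, peer: "A", parents: [],     value: "a1" },
  { id: 1, peer: "B", parents: [],     value: "b1" },   // concurrent with op 0
  { id: 2, peer: "A", parents: [0, 1], value: "a2" },   // A merged B's edit first
];

function replay(trace) {
  const peers = new Map();    // peer name -> live replica
  const afterOp = new Map();  // op id -> snapshot of state just after that op
  for (const op of trace) {
    if (!peers.has(op.peer)) peers.set(op.peer, new GSet());
    const replica = peers.get(op.peer);
    // Bring this replica up to date with every causal parent before applying.
    for (const p of op.parents) replica.merge(afterOp.get(p));
    replica.add(op.value);
    afterOp.set(op.id, replica.clone());
  }
  // Merging every peer at the end yields the converged document.
  const result = new GSet();
  for (const r of peers.values()) result.merge(r);
  return result;
}

console.log([...replay(trace).items].sort().join(","));  // "a1,a2,b1"
```

While replaying, each op is applied through the target library's native API, so the sequence of applied operations *is* the converted trace.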
I still have this on my radar and plan to contribute the plain text editing traces soon, which hopefully Operation Tracker is collecting as I write this. Besides the plain text sequential traces, soon the …
@josephg I assume this is not the Fugue paper but a new one? Is there somewhere one can subscribe to your publications?
Yeah, new paper. Flick me your email address and I can email you an early draft in a few weeks.
@josephg vadimsdaleckis at Gmail
The sequential traces are much easier to execute, which makes it straightforward to build benchmarks comparing different libraries.
For the concurrent traces, however, the trace first needs to be converted to each library's native format, which is considerable effort. I was wondering what the best way to compare concurrent editing performance would be?
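For contrast, the sequential case really is trivial to execute: a single peer's edits can be replayed directly against a plain string. A minimal sketch, where the `[position, deleteCount, insertedText]` patch shape is my assumption about the trace format:

```javascript
// Replay a sequential (single-peer) editing trace against a plain string.
// Patch format [position, deleteCount, insertedText] is assumed, not
// taken from the actual trace files.
function applyTrace(patches) {
  let doc = "";
  for (const [pos, del, ins] of patches) {
    // Splice: keep text before pos, drop `del` chars, insert `ins`.
    doc = doc.slice(0, pos) + ins + doc.slice(pos + del);
  }
  return doc;
}

console.log(applyTrace([[0, 0, "Hello"], [5, 0, " world"], [5, 6, "!"]]));  // "Hello!"
```

Wrapping this loop in a timer (with the splice replaced by each library's native insert/delete calls) is essentially the whole sequential benchmark, which is why the concurrent case is the harder one to standardize.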