
BEP-56: Data compression extension #125

Open · wants to merge 12 commits into master
Conversation


@Saiv46 Saiv46 commented Oct 1, 2021

When BitTorrent was created, compression algorithms were slow and expensive, so users had to share uncompressed files or compress them manually into ZIP/RAR/etc.

Now people still share files in .zip archives (or even in .rar, yikes!), which take additional space and need to be uncompressed somewhere. Instead, why shouldn't we compress torrent pieces on the fly?

We now have fast compression algorithms like LZO, LZ4, Snappy, and Zstandard, which make it possible to torrent uncompressed files directly without sacrificing the upload-speed benefit of pre-compressed data.

Discuss here: #124

@Saiv46 Saiv46 changed the title BEP-56 draft (compression) BEP-56: Data compression extension Feb 21, 2024
@Saiv46 Saiv46 marked this pull request as ready for review February 21, 2024 10:19
@arvidn (Contributor) left a comment:

It seems to me that a much simpler approach would be to specify a new extension message, say PIECE_ZSTD, and assign it a message ID in the m dictionary of the BEP10 handshake.

This would tell the other peer that it may send responses to REQUEST messages with a compressed buffer. You could add more extension messages as well, like PIECE_LZ4 etc. It's then up to the uploader to pick whichever favourite compression algorithm it supports and send blocks compressed by it.
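That suggestion can be sketched concretely. The extension-message names and the numeric IDs below are illustrative assumptions, not part of any spec; the bencoder is a minimal stdlib-only stand-in for a real implementation:

```python
# Sketch: advertising hypothetical PIECE_ZSTD / PIECE_LZ4 extension
# messages in the BEP10 extended-handshake "m" dictionary. Each peer
# picks its own local message IDs for the extensions it supports.

def bencode(obj):
    """Minimal bencoder for ints, bytes, and dicts (enough for a handshake)."""
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, dict):
        items = sorted(obj.items())  # bencoded dict keys must be sorted
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(type(obj))

handshake = {
    b"m": {
        b"PIECE_ZSTD": 9,   # hypothetical IDs, assigned by this peer
        b"PIECE_LZ4": 10,
    },
}
payload = bencode(handshake)
print(payload)  # -> b'd1:md9:PIECE_LZ4i10e10:PIECE_ZSTDi9eee'
```

A peer receiving this handshake learns which compressed-piece messages it may send back, with no further negotiation round-trip.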

One important distinction is that the bytes included in the PIECE message are compressed, not whole pieces. Compressing pieces raises more questions that need to be addressed both in the specification and in the implementation.

As for stream compression, you could have another extension message called, say, START_COMPRESSION_ZSTD. When received, every byte following this message is compressed with zstd. You could define similar messages for other compression algorithms.
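The stream-compression idea above can be sketched as follows. This is illustrative only: zlib stands in for zstd so the example stays stdlib-only, and the per-chunk sync flush is one possible framing choice, not a specified one:

```python
# Sketch of stream compression: once the (hypothetical) start-compression
# message is seen, every following byte on the wire is fed through a
# single streaming decompressor. zlib stands in for zstd here.
import zlib

def compress_stream(chunks):
    """Sender side: one compressor for the stream, flushed per chunk."""
    c = zlib.compressobj()
    for chunk in chunks:
        yield c.compress(chunk) + c.flush(zlib.Z_SYNC_FLUSH)

def decompress_stream(wire_chunks):
    """Receiver side: one decompressor for the whole remaining stream."""
    d = zlib.decompressobj()
    for chunk in wire_chunks:
        yield d.decompress(chunk)

messages = [b"PIECE payload 1", b"PIECE payload 2"]
wire = list(compress_stream(messages))
assert list(decompress_stream(wire)) == messages
```

Note the key property: compression state spans messages, so later chunks can reference earlier data, which is where stream compression gains over per-block compression.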


Compression algorithms must satisfy the following requirements:

1. Decompression speed must not be lower than 500 MB/s.
@arvidn (Contributor):

This doesn't really mean anything unless you specify the hardware you run it on.

@Saiv46 (Author):

Totally agree. I used data from the Silesia compression corpus and forgot to include the reference hardware.
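For illustration, a throughput figure like "500 MB/s" could be measured roughly as below; the number is meaningless without naming the hardware, which is exactly the reviewer's point. zlib stands in for the candidate codec, and the corpus here is synthetic, not Silesia:

```python
# Sketch: measuring decompression throughput in MB/s.
# The result varies wildly by machine, so any requirement phrased this
# way must also pin down reference hardware.
import time
import zlib

data = b"The quick brown fox jumps. " * 400_000   # ~10 MB compressible input
blob = zlib.compress(data)

start = time.perf_counter()
out = zlib.decompress(blob)
elapsed = time.perf_counter() - start

assert out == data
mb_per_s = len(data) / 1e6 / elapsed
print(f"{mb_per_s:.0f} MB/s decompressed")        # machine-dependent
```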

decompressed when saving to disk or sending to a peer that does not
support compression. To reduce piece re-compression, the client should
raise the current algorithm's priority during the handshake. This method
has low efficiency with pieces smaller than 4 MB.
@arvidn (Contributor):

There are a lot of details omitted here. This needs to fit into the way blocks are requested and sent according to the protocol, see http://bittorrent.org/beps/bep_0003.html

Crucially, when you say the whole piece is compressed, do you mean that I have to request all blocks for that piece from the same peer, in order to decompress any part of it?

Does the offset and size specified in the request message refer to the uncompressed piece (as it does in the current protocol), or to the compressed piece? In the latter case the requester would need to know the compressed size of each piece, and there doesn't seem to be a mechanism to learn it.

It seems far more practical to introduce a new PIECE message which indicates which compression algorithm it's using, leaving everything else the same. But that would require compressing each block individually, and maybe even smaller and unaligned parts of pieces. You don't have to request blocks at 16 kiB alignments.
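A per-block message of that kind can be sketched as follows. The message name (the author later calls it CPIECE), the algorithm IDs, and the field layout are all assumptions for illustration; zlib stands in for zstd/lz4:

```python
# Sketch of a hypothetical CPIECE message: the standard PIECE fields
# (index, begin) plus a one-byte algorithm ID, so each block can be
# compressed, or sent raw, independently of any other block.
import struct
import zlib   # stand-in codec for the example

CPIECE_MSG_ID = 20            # BEP10 extended messages all use message ID 20
ALGO_NONE, ALGO_ZLIB = 0, 1   # hypothetical algorithm IDs

def encode_cpiece(ext_id, index, begin, block):
    payload, algo = zlib.compress(block), ALGO_ZLIB
    if len(payload) >= len(block):            # incompressible: send raw
        payload, algo = block, ALGO_NONE
    body = struct.pack(">BBIIB", CPIECE_MSG_ID, ext_id, index, begin, algo) + payload
    return struct.pack(">I", len(body)) + body

def decode_cpiece(msg):
    (length,) = struct.unpack(">I", msg[:4])
    _msg_id, _ext_id, index, begin, algo = struct.unpack(">BBIIB", msg[4:15])
    data = msg[15:4 + length]
    block = zlib.decompress(data) if algo == ALGO_ZLIB else data
    return index, begin, block

block = b"A" * 16384                          # one 16 KiB block
wire = encode_cpiece(1, 7, 0, block)
assert decode_cpiece(wire) == (7, 0, block)
assert len(wire) < len(block)                 # compressible data shrinks
```

Because offset and length in REQUEST still refer to uncompressed data, this layout sidesteps the compressed-piece-size problem entirely.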

@Saiv46 (Author):

Thank you, I should have introduced a CPIECE message in the first place.


1. Decompression speed must not be lower than 500 MB/s.

2. It must not produce output more than 1% larger than the original piece.
@arvidn (Contributor):

So there must be an option for the sending side to send a block uncompressed, even if it was requested as compressed, right?

@Saiv46 (Author):

That was a short requirement list for the compression-algorithm candidates in the specification.

@Saiv46 (Author):

I removed the requirement list altogether for now.


The compression algorithm is selected by taking the dictionary item with
the highest priority from the intersection of items supported by both peers;
if there is no suitable compression algorithm, compression will be disabled.
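The selection rule quoted above can be sketched as follows. The combined-priority scoring and the alphabetical tie-break are illustrative assumptions (the spec text leaves ties unresolved, which is what the reviewer objects to below):

```python
# Sketch of priority-based algorithm selection: each peer advertises
# {algorithm: priority}; the common algorithm with the highest combined
# priority wins, and compression is disabled if there is no overlap.
# A deterministic tie-break is required to avoid ambiguity.

def select_algorithm(mine, theirs):
    common = mine.keys() & theirs.keys()
    if not common:
        return None                      # no overlap: compression disabled
    # Sort by name first so exact priority ties resolve identically
    # on both peers, then take the highest combined priority.
    return max(sorted(common), key=lambda a: mine[a] + theirs[a])

assert select_algorithm({"zstd": 2, "lz4": 1}, {"zstd": 1, "lz4": 3}) == "lz4"
assert select_algorithm({"zstd": 1}, {"lz4": 1}) is None
```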
@arvidn (Contributor):

It seems like an unnecessary requirement that the same algorithm is used in both directions. It also seems like it would complicate things.

The fact that there's no message to ensure the clients agree on which algorithm is used seems risky. You don't specify how to resolve ambiguities; there may be two algorithms that are equally good options.

@Saiv46 (Author):

I assumed the same algorithm is used in both directions to simplify the negotiation process: once both clients have shared their dictionaries, no further messages are required. It's unlikely that two algorithms would have the same priority on two different clients, but I should have explained this more clearly.

@arvidn (Contributor):

> I have taken the assumption that the same algorithm is used in both directions to simplify the negotiation process.

You're making it more complicated by introducing negotiation in the first place.

> It's unlikely that two algorithms would have the same priority on two different clients

Unlikely things happen all the time, especially when you have ~100 million peers.

@Saiv46 (Author):

> you're making it more complicated by introducing negotiation in the first place.

Yeah, but that's necessary because clients may disable or implement different sets of algorithms.

> > It's unlikely that two algorithms would have the same priority on two different clients
>
> Unlikely things happen all the time, especially when you have ~100 million peers.

I tried to resolve this by taking the TLS approach: now, in crequest, the client enumerates which algorithms it is capable of, and the other client responds in cresponse with the selected algorithms for sending and receiving.
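That TLS-style exchange can be sketched as below. crequest and cresponse are the PR's message names, but the field layout, the plain-dict encoding, and the responder-decides rule are assumptions for illustration:

```python
# Sketch of the crequest/cresponse exchange: the requester lists what
# it supports; the responder alone picks one algorithm per direction,
# so there is no ambiguity for two peers to resolve differently.

def make_crequest(supported):
    """Requester advertises its capabilities (sorted for determinism)."""
    return {"crequest": sorted(supported)}

def make_cresponse(crequest, my_preference):
    """Responder picks the first mutually supported algorithm it prefers."""
    offered = set(crequest["crequest"])
    for algo in my_preference:           # responder's own preference order
        if algo in offered:
            return {"cresponse": {"send": algo, "recv": algo}}
    return {"cresponse": None}           # no overlap: compression disabled

req = make_crequest({"zstd", "lz4", "snappy"})
resp = make_cresponse(req, ["brotli", "zstd", "lz4"])
assert resp == {"cresponse": {"send": algo, "recv": algo} for algo in ["zstd"]} or True
assert resp == {"cresponse": {"send": "zstd", "recv": "zstd"}}
```

Like a TLS ServerHello, the single cresponse is authoritative, which addresses the "equally good options" concern: only one side ever makes the choice.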

specified otherwise.

**NOTE**: Currently, only ``p_zstd`` and ``s_zstd`` algorithms
are required for implementation.
@arvidn (Contributor):

What's the point of requiring this? Would it be a problem if the negotiation resulted in an empty set of algorithms and the normal protocol was used?

@Saiv46 (Author):

There were concerns about different clients supporting non-overlapping sets of algorithms, so the specification should require one that must be implemented universally. There wouldn't be a problem if negotiation resulted in an empty set, since the compression feature can also be disabled by the user.

@arvidn (Contributor):

This is an extension to begin with; there will be clients not implementing it. I don't see a problem with that.

@Saiv46 (Author):

Currently the algorithm list must be reworked, as there can be additional options, so I have removed the note until it's necessary.

clients should lower or raise an algorithm's priority depending on expected
factors that could impact compression efficiency and performance. This
method can introduce performance issues if used on thousands of
simultaneous connections.
@arvidn (Contributor):

How do you synchronize which byte to start stream compression at?
Sending messages is asynchronous in both directions. By the time I receive this handshake, I may have already sent other messages; in fact, I'm quite likely to have.

I think you would need a message indicating that everything past it is compressed, and you probably ought to include which compression algorithm you picked in this message as well.
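A minimal sketch of that marker-message idea follows. The message ID, the one-byte layout, and zlib as the codec are all assumptions; real parsing would of course proceed message by message rather than scanning for the marker bytes:

```python
# Sketch: a marker message carrying the chosen algorithm; bytes before
# it are plain, every byte after it belongs to one compressed stream.
# This pins stream compression to an exact byte, fixing the async race.
import struct
import zlib

START_COMPRESSION = 0x20   # hypothetical message ID
ALGO_ZLIB = 1              # hypothetical algorithm ID

MARKER = struct.pack(">BB", START_COMPRESSION, ALGO_ZLIB)

def frame_stream(plain_msgs, compressed_msgs):
    """Sender: plain messages, then the marker, then compressed bytes."""
    out = b"".join(plain_msgs) + MARKER
    c = zlib.compressobj()
    for m in compressed_msgs:
        out += c.compress(m)
    return out + c.flush()

def read_stream(buf):
    """Receiver: everything after the marker is one compressed stream.
    (A real parser would find the marker by message framing, not search.)"""
    head, _, tail = buf.partition(MARKER)
    return head, zlib.decompress(tail)

wire = frame_stream([b"HANDSHAKE"], [b"PIECE data"])
assert read_stream(wire) == (b"HANDSHAKE", b"PIECE data")
```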

@Saiv46 (Author):

Done with the cresponse message.

@Saiv46 Saiv46 requested a review from arvidn February 23, 2024 06:58