Reduce a table used for `Debug` impl of `str`.
#40709
Conversation
r? @sfackler (rust_highfive has picked a reviewer for you, use r? to override)
Force-pushed from 17466fa to c4fc592
I forgot to remove some test code (and to run tidy; I didn't know tidy does not run with …
@bors: r+ Thanks @lifthrasiir!
📌 Commit c4fc592 has been approved by alexcrichton
🔒 Merge conflict
This commit shrinks the size of the aforementioned table from 2,102 bytes to 1,197 bytes. This is achieved by the observation that most u16 entries share the same upper byte. Specifically:

- SINGLETONS now uses two tables, one for (upper byte, lower count) pairs and another for a series of lower bytes. For each upper byte, the given number of lower bytes is read and compared.
- NORMAL now uses a variable-length format for the counts of "true" codepoints and "false" codepoints (one byte with the MSB unset, or two big-endian bytes with the first MSB set).

The code size and relative performance remain roughly the same, as this commit tries to optimize for both. The new table and algorithm have been verified to be equivalent to the old ones.
Force-pushed from c4fc592 to 44bcd26
@bors: r+
📌 Commit 44bcd26 has been approved by alexcrichton
…r, r=alexcrichton Reduce a table used for `Debug` impl of `str`.

This commit shrinks the size of the aforementioned table from 2,102 bytes to 1,197 bytes. This is achieved by the observation that most `u16` entries share the same upper byte. Specifically:

- `SINGLETONS` now uses two tables, one for (upper byte, lower count) pairs and another for a series of lower bytes. For each upper byte, the given number of lower bytes is read and compared.
- `NORMAL` now uses a variable-length format for the counts of "true" codepoints and "false" codepoints (one byte with the MSB unset, or two big-endian bytes with the first MSB set).

The code size and relative performance remain roughly the same, as this commit tries to optimize for both. The new table and algorithm have been verified to be equivalent to the old ones.

On my x86-64 macOS laptop with `rustc 1.17.0-nightly (0aeb9c1 2017-03-15)`, `-C opt-level=3 -C lto` gives the following:

* The old routine compiles to 2,102 bytes of data and 416 bytes of code.
* The new routine compiles to 1,197 bytes of data and 448 bytes of code.

Counting the number of all printable Unicode scalar values (128,003, if you wonder) by filtering `0..0x110000` with `std::char::from_u32` and `is_printable` took 50±7 ms for both. This can be surprising as the new routine *has* to do more calculations; this is partly explained by the fact that a linear search of `SINGLETONS` has been replaced by *two* linear searches for upper and lower bytes, which greatly reduces the iteration count.
⌛ Testing commit 44bcd26 with merge a1ef100...
💔 Test failed - status-appveyor
This commit shrinks the size of the aforementioned table from 2,102 bytes to 1,197 bytes. This is achieved by the observation that most `u16` entries share the same upper byte. Specifically:

- `SINGLETONS` now uses two tables, one for (upper byte, lower count) pairs and another for a series of lower bytes. For each upper byte, the given number of lower bytes is read and compared.
- `NORMAL` now uses a variable-length format for the counts of "true" codepoints and "false" codepoints (one byte with the MSB unset, or two big-endian bytes with the first MSB set).

The code size and relative performance remain roughly the same, as this commit tries to optimize for both. The new table and algorithm have been verified to be equivalent to the old ones. A hedged decoding sketch of this layout is given below.
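To make the layout concrete, here is a minimal decoding sketch. The table names, their contents, and the helper functions (`SINGLETON_UPPERS`, `SINGLETON_LOWERS`, the sample `NORMAL` bytes, `in_singletons`, `in_normal`) are illustrative assumptions, not the actual generated tables or the crate-internal API; they only mirror the two-table `SINGLETONS` layout and the variable-length `NORMAL` run lengths described above.

```rust
// Illustrative sketch only: the table contents below are made up, and the real
// tables and lookup routine generated for core live behind an internal API.

// SINGLETONS split into two tables: (upper byte, number of lower bytes) pairs,
// plus a flat series of lower bytes consumed `count` at a time per upper byte.
const SINGLETON_UPPERS: &[(u8, u8)] = &[(0x00, 1), (0x20, 2)]; // assumed sample data
const SINGLETON_LOWERS: &[u8] = &[0xad, 0x28, 0x29];           // assumed sample data

// NORMAL as alternating run lengths: one byte with the MSB unset, or two
// big-endian bytes with the first byte's MSB set (giving a 15-bit length).
const NORMAL: &[u8] = &[0x20, 0x5f, 0x80, 0x22, 0x10];         // assumed sample data

/// Scan only the lower bytes that belong to `x`'s upper byte (uppers assumed unique).
fn in_singletons(x: u16) -> bool {
    let (xupper, xlower) = ((x >> 8) as u8, x as u8);
    let mut start = 0usize;
    for &(upper, count) in SINGLETON_UPPERS {
        let end = start + count as usize;
        if upper == xupper {
            return SINGLETON_LOWERS[start..end].contains(&xlower);
        }
        start = end;
    }
    false
}

/// Walk the alternating run lengths and report which kind of run `x` falls into.
fn in_normal(x: u16) -> bool {
    let mut remaining = x as i32;
    let mut bytes = NORMAL.iter().cloned();
    let mut current = true; // flips between the "true" and "false" runs
    while let Some(b) = bytes.next() {
        let len = if b & 0x80 != 0 {
            // Two-byte big-endian length, with the first byte's MSB stripped.
            (((b & 0x7f) as i32) << 8) | bytes.next().unwrap_or(0) as i32
        } else {
            b as i32
        };
        remaining -= len;
        if remaining < 0 {
            return current;
        }
        current = !current;
    }
    current
}

fn main() {
    // Exercise the sketch on a few scalar values.
    for &cp in &[0x0020u16, 0x00ad, 0x2028] {
        println!("U+{:04X}: singleton={} normal_run={}",
                 cp, in_singletons(cp), in_normal(cp));
    }
}
```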
Some useless benchmarks
On my x86-64 macOS laptop with `rustc 1.17.0-nightly (0aeb9c129 2017-03-15)`, `-C opt-level=3 -C lto` gives the following:

* The old routine compiles to 2,102 bytes of data and 416 bytes of code.
* The new routine compiles to 1,197 bytes of data and 448 bytes of code.

Counting the number of all printable Unicode scalar values (128,003, if you wonder) by filtering `0..0x110000` with `std::char::from_u32` and `is_printable` took 50±7 ms for both. This can be surprising as the new routine *has* to do more calculations; this is partly explained by the fact that a linear search of `SINGLETONS` has been replaced by *two* linear searches for upper and lower bytes, which greatly reduces the iteration count.
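For reference, the measurement could be reproduced roughly as in the sketch below. `is_printable` is a crate-internal helper rather than a public API, so this sketch substitutes an approximation (`is_printable_approx`, an assumed name) based on whether `Debug` escapes the character with `\u{...}`; the resulting count and timing will therefore not match the figures above exactly.

```rust
use std::time::Instant;

// Rough stand-in for the internal `is_printable`: treat a scalar value as
// printable if its `Debug` form does not fall back to a `\u{...}` escape.
// This is an approximation for illustration, not the check used by core.
fn is_printable_approx(c: char) -> bool {
    !format!("{:?}", c).contains("\\u{")
}

fn main() {
    let start = Instant::now();
    let count = (0..0x110000u32)
        .filter_map(std::char::from_u32)    // drop surrogate code points
        .filter(|&c| is_printable_approx(c))
        .count();
    println!("printable scalar values: {} in {:?}", count, start.elapsed());
}
```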