chore(deps): update #6217

Merged: 8 commits merged into master from bump-deps-weekly on Feb 11, 2021

Conversation

fanatid commented Jan 24, 2021

I also pinned async-graphql/async-graphql-warp because they added a tokio 1.0 dependency (#5175); a sketch of the pin syntax follows the update log below.

$ cargo update --aggressive
    Updating crates.io index
    Updating git repository `https://github.com/flavray/avro-rs`
    Updating git repository `https://github.com/hyperium/hyper`
    Updating git repository `https://github.com/bytecodealliance/lucet.git`
    Updating git repository `https://github.com/kyren/rlua`
    Updating git repository `https://github.com/tower-rs/tower`
    Updating git repository `https://github.com/tokio-rs/tracing`
    Updating backtrace v0.3.55 -> v0.3.56
    Updating bumpalo v3.4.0 -> v3.5.0
    Updating derivative v2.1.3 -> v2.2.0
    Updating filetime v0.2.13 -> v0.2.14
    Updating getrandom v0.2.1 -> v0.2.2
    Updating git2 v0.13.15 -> v0.13.16
    Updating libgit2-sys v0.12.17+1.1.0 -> v0.12.18+1.1.0
    Updating metrics v0.13.0 -> v0.13.1
    Updating nom v6.0.1 -> v6.1.0
    Updating object v0.22.0 -> v0.23.0
    Updating rpassword v5.0.0 -> v5.0.1
    Updating tower-service v0.3.0 -> v0.3.1
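
For reference, pinning in Cargo.toml is done with an exact-version requirement. A minimal sketch of what such a pin might look like, assuming the "=" requirement syntax; the crate names come from this PR, while the pinned versions are illustrative:

    # Hypothetical Cargo.toml excerpt: "=" forces an exact version so that
    # `cargo update` cannot pull in a release that drags in tokio 1.0.
    async-graphql = "=2.4.10"
    async-graphql-warp = "=2.4.10"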

fanatid added the domain: deps label on Jan 24, 2021
fanatid added this to the 2021-01-18 Tabula E-Rasa milestone on Jan 24, 2021
fanatid self-assigned this on Jan 24, 2021
fanatid requested a review from lukesteensen as a code owner on January 24, 2021 11:09

fanatid commented Jan 24, 2021

Fix for quanta crate: metrics-rs/quanta#38

fanatid commented Jan 24, 2021

Looks like the metrics crates update caused a regression in some benchmarks 😞

binarylogic commented

@fanatid it didn't fail the check, though. Which tests are you concerned about? cc @jszwedko

binarylogic requested review from jszwedko and removed the request for lukesteensen on January 24, 2021 21:20

fanatid commented Jan 24, 2021

I think Jesse disabled failing the check on benchmark regressions, but maybe I'm wrong.

Regressed benchmarks:

name                                                                  time       time change  throughput throughput change  p     change
metrics_snapshot/cardinality/1                                        576.44 ns  +7.1600%     unknown    unknown            0.00  regressed
metrics_snapshot/cardinality/10                                       4.0395 us  +11.885%     unknown    unknown            0.00  regressed
metrics_snapshot/cardinality/100                                      46.491 us  +15.303%     unknown    unknown            0.00  regressed
metrics_snapshot/cardinality/1000                                     516.62 us  +8.9227%     unknown    unknown            0.00  regressed
metrics_snapshot/cardinality/10000                                    5.9199 ms  +6.4559%     unknown    unknown            0.00  regressed
metrics_no_tracing_integration/micro/bare_counter                     114.60 ns  +38.066%     unknown    unknown            0.00  regressed
metrics_no_tracing_integration/micro/bare_counter_with_static_labels  154.95 ns  +46.875%     unknown    unknown            0.00  regressed
metrics_no_tracing_integration/micro/bare_counter_with_dynamic_labels 226.27 ns  +30.988%     unknown    unknown            0.00  regressed
metrics_no_tracing_integration/micro/ununsed_span                     117.14 ns  +37.755%     unknown    unknown            0.00  regressed
metrics_no_tracing_integration/micro/span_no_labels                   336.00 ns  +8.5848%     unknown    unknown            0.00  regressed
metrics_no_tracing_integration/micro/span_with_1_static_label         335.83 ns  +8.5668%     unknown    unknown            0.00  regressed
metrics_no_tracing_integration/micro/span_with_2_static_labels        342.69 ns  +10.114%     unknown    unknown            0.00  regressed
metrics_no_tracing_integration/micro/span_with_1_dynamic_label        336.16 ns  +8.1355%     unknown    unknown            0.00  regressed
metrics_no_tracing_integration/micro/span_with_2_dynamic_labels       337.05 ns  +8.5066%     unknown    unknown            0.00  regressed
metrics_off/micro/bare_counter                                        4.4220 ns  +15.388%     unknown    unknown            0.00  regressed
metrics_off/micro/bare_counter_with_static_labels                     4.4221 ns  +15.388%     unknown    unknown            0.00  regressed
metrics_off/micro/ununsed_span                                        4.7510 ns  +23.047%     unknown    unknown            0.00  regressed
metrics_on/micro/bare_counter                                         185.87 ns  +22.717%     unknown    unknown            0.00  regressed
metrics_on/micro/bare_counter_with_static_labels                      257.89 ns  +29.303%     unknown    unknown            0.00  regressed
metrics_on/micro/bare_counter_with_dynamic_labels                     295.98 ns  +23.696%     unknown    unknown            0.00  regressed
metrics_on/micro/ununsed_span                                         186.27 ns  +21.269%     unknown    unknown            0.00  regressed
metrics_on/micro/span_no_labels                                       656.60 ns  +6.4539%     unknown    unknown            0.00  regressed
metrics_on/micro/span_with_1_static_label                             672.02 ns  +6.5157%     unknown    unknown            0.00  regressed
metrics_on/micro/span_with_2_static_labels                            665.85 ns  +6.2272%     unknown    unknown            0.00  regressed
metrics_on/micro/span_with_1_dynamic_label                            675.26 ns  +6.9931%     unknown    unknown            0.00  regressed

binarylogic commented

cc @MOZGIII on the regression since you were working on performance in that crate.

MOZGIII commented Jan 25, 2021

The results suggest that the system performance is unstable.
This might not be a real regression.

The benches are written in such a way that they're supposed to highlight the overhead of certain operations, and in this case the regression can be observed in benches that are 100% unaffected by the metrics crate, such as metrics_off/micro/ununsed_span.

I'd prefer to get a second opinion on this, but I think it's system instability that caused these numbers.

I'll double-check the source to make sure.
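
For context, the micro-benches being discussed are criterion benchmarks that time one small operation so that any added overhead shows up directly. A minimal, hypothetical sketch of that style, not Vector's actual bench code; the bench names and the metrics 0.13-era macro form are assumptions:

    // Hypothetical criterion micro-benchmarks in the style discussed above.
    use criterion::{black_box, criterion_group, criterion_main, Criterion};

    fn micro(c: &mut Criterion) {
        // Bare counter increment: measures the cost of the metrics macro itself.
        c.bench_function("micro/bare_counter", |b| {
            b.iter(|| metrics::counter!("bench_counter", 1))
        });

        // A span that is created but never entered: should be close to free,
        // and is unaffected by the metrics crate when metrics are off.
        c.bench_function("micro/unused_span", |b| {
            b.iter(|| black_box(tracing::info_span!("bench_span")))
        });
    }

    criterion_group!(micro_benches, micro);
    criterion_main!(micro_benches);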

fanatid commented Jan 30, 2021

Weekly update.

$ cargo update --aggressive
    Updating git repository `https://github.com/flavray/avro-rs`
    Updating git repository `https://github.com/hyperium/hyper`
    Updating crates.io index
    Updating git repository `https://github.com/bytecodealliance/lucet.git`
    Updating git repository `https://github.com/kyren/rlua`
    Updating git repository `https://github.com/tower-rs/tower`
    Updating git repository `https://github.com/tokio-rs/tracing`
    Updating bumpalo v3.5.0 -> v3.6.0
    Updating criterion v0.3.3 -> v0.3.4
    Updating data-encoding v2.3.1 -> v2.3.2
    Updating flate2 v1.0.19 -> v1.0.20
    Updating git2 v0.13.16 -> v0.13.17
    Updating js-sys v0.3.46 -> v0.3.47
    Updating libc v0.2.82 -> v0.2.84
    Updating log v0.4.13 -> v0.4.14
    Updating plotters v0.2.15 -> v0.3.0
      Adding plotters-backend v0.3.0
      Adding plotters-svg v0.3.0
    Updating predicates v1.0.6 -> v1.0.7
    Updating predicates-core v1.0.1 -> v1.0.2
    Updating predicates-tree v1.0.1 -> v1.0.2
    Updating quanta v0.7.1 -> v0.7.2
    Updating rand v0.8.2 -> v0.8.3
      Adding raw-cpuid v9.0.0
    Updating serde v1.0.121 -> v1.0.123
    Updating serde_derive v1.0.121 -> v1.0.123
    Updating serde_with v1.6.0 -> v1.6.1
    Updating syn v1.0.59 -> v1.0.60
    Updating time v0.2.24 -> v0.2.25
    Updating tinyvec v1.1.0 -> v1.1.1
    Updating tokio v0.2.24 -> v0.2.25
    Updating typetag v0.1.6 -> v0.1.7
    Updating typetag-impl v0.1.6 -> v0.1.7
    Updating wasm-bindgen v0.2.69 -> v0.2.70
    Updating wasm-bindgen-backend v0.2.69 -> v0.2.70
    Updating wasm-bindgen-futures v0.4.19 -> v0.4.20
    Updating wasm-bindgen-macro v0.2.69 -> v0.2.70
    Updating wasm-bindgen-macro-support v0.2.69 -> v0.2.70
    Updating wasm-bindgen-shared v0.2.69 -> v0.2.70
    Updating web-sys v0.3.46 -> v0.3.47

jszwedko commented Feb 1, 2021

I'm not sure if it is related to the metrics crate update, but there does appear to be a reproducible regression even in the metrics_off benchmarks:

ubuntu@ip-10-0-101-115:~/vector$ critcmp master bump-deps-weekly -t 5
group                                           bump-deps-weekly                       master
-----                                           ----------------                       ------
metrics_off/micro/span_no_labels                1.11    246.3±0.83ns        ? B/sec    1.00    221.0±0.73ns        ? B/sec
metrics_off/micro/span_with_1_dynamic_label     1.05    233.7±0.92ns        ? B/sec    1.00    221.6±1.00ns        ? B/sec
metrics_off/micro/span_with_2_dynamic_labels    1.05    235.5±1.04ns        ? B/sec    1.00    224.1±1.01ns        ? B/sec
metrics_off/micro/span_with_2_static_labels     1.05    232.3±1.18ns        ? B/sec    1.00    221.2±1.09ns        ? B/sec
metrics_off/micro/ununsed_span                  1.00      4.2±0.07ns        ? B/sec    1.07      4.5±0.07ns        ? B/sec

I'm running this on one of the benchmark runners to reduce noise. Simply running the master benchmarks twice, for example, does not show regressions, as might be expected if they were noisy.

I'll try to run the full suite.
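
The comparison above comes from critcmp over saved criterion baselines. A rough sketch of that workflow, assuming criterion's --save-baseline flag and an illustrative bench target name; only the critcmp invocation is taken verbatim from the output above:

    # Save a baseline on master, save one on the PR branch, then compare,
    # hiding differences smaller than 5%.
    git checkout master
    cargo bench --bench metrics -- --save-baseline master
    git checkout bump-deps-weekly
    cargo bench --bench metrics -- --save-baseline bump-deps-weekly
    critcmp master bump-deps-weekly -t 5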

jszwedko commented Feb 1, 2021

To answer the question about the CI run, we haven't yet enabled failing that workflow when regressions are detected. I wanted to collect some data about how often we see false positives with it now that it is mostly stable.

jszwedko commented Feb 1, 2021

Or rather, we have the results from the last CI run. The changed benchmarks:

2021-01-30T20:10:12.6057630Z lua_add_fields/v2                                                                                    4.1648 us  +5.8258%     240.11 Kelem/s  -5.5050%           0.00  regressed
2021-01-30T20:10:12.6062352Z metrics_snapshot/cardinality/1                                                                       600.74 ns  +10.295%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6063311Z metrics_snapshot/cardinality/10                                                                      4.1981 us  +12.337%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6064255Z metrics_snapshot/cardinality/100                                                                     48.090 us  +14.716%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6065210Z metrics_snapshot/cardinality/1000                                                                    538.08 us  +11.957%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6066406Z metrics_snapshot/cardinality/10000                                                                   6.1718 ms  +10.088%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6090706Z metrics_no_tracing_integration/micro/bare_counter                                                    114.26 ns  +37.424%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6091534Z metrics_no_tracing_integration/micro/bare_counter_with_static_labels                                 156.41 ns  +47.944%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6092397Z metrics_no_tracing_integration/micro/bare_counter_with_dynamic_labels                                221.22 ns  +31.629%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6093215Z metrics_no_tracing_integration/micro/ununsed_span                                                    117.27 ns  +37.717%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6094817Z metrics_no_tracing_integration/micro/span_no_labels                                                  339.90 ns  +12.572%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6095608Z metrics_no_tracing_integration/micro/span_with_1_static_label                                        340.20 ns  +11.748%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6096424Z metrics_no_tracing_integration/micro/span_with_2_static_labels                                       340.88 ns  +12.419%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6097238Z metrics_no_tracing_integration/micro/span_with_1_dynamic_label                                       340.44 ns  +12.202%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6098055Z metrics_no_tracing_integration/micro/span_with_2_dynamic_labels                                      342.85 ns  +13.432%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6103326Z metrics_off/micro/bare_counter                                                                       3.5369 ns  -7.6900%     unknown         unknown            0.00  improved
2021-01-30T20:10:12.6104307Z metrics_off/micro/bare_counter_with_static_labels                                                    3.2422 ns  -18.114%     unknown         unknown            0.00  improved
2021-01-30T20:10:12.6106207Z metrics_off/micro/ununsed_span                                                                       4.1419 ns  +7.3728%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6115579Z metrics_on/micro/bare_counter                                                                        181.95 ns  +22.026%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6116288Z metrics_on/micro/bare_counter_with_static_labels                                                     257.48 ns  +29.234%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6117047Z metrics_on/micro/bare_counter_with_dynamic_labels                                                    291.58 ns  +26.951%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6117746Z metrics_on/micro/ununsed_span                                                                        183.58 ns  +21.559%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6119177Z metrics_on/micro/span_no_labels                                                                      658.55 ns  +6.0239%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6119850Z metrics_on/micro/span_with_1_static_label                                                            667.81 ns  +5.5109%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6120601Z metrics_on/micro/span_with_2_static_labels                                                           671.51 ns  +5.9621%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6121311Z metrics_on/micro/span_with_1_dynamic_label                                                           667.83 ns  +5.5070%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6122015Z metrics_on/micro/span_with_2_dynamic_labels                                                          673.22 ns  +6.7549%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6125711Z remap: coerce with remap                                                                             4.3554 us  +2.2846%     unknown         unknown            0.00  regressed
2021-01-30T20:10:12.6127198Z upcase: literal_value                                                                                181.97 ns  +4.4121%     unknown         unknown            0.00  regressed

It seems like the bulk of them are in the metrics subsystem micro-benches. I think it is worth looking at them more closely, maybe starting with the changelog for the metrics crate.
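
One quick way to see which metrics-family crates actually moved between the two branches, using standard cargo and git commands; the package name is from the logs above:

    # Show what depends on the metrics crate and which version is locked.
    cargo tree -i metrics
    # Diff only the metrics-related entries between master's lockfile and this branch.
    git diff master -- Cargo.lock | grep -B2 -A2 'name = "metrics'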

binarylogic commented

Can we remove the metrics upgrade and proceed with upgrading the rest of the dependencies?

Cargo.toml (outdated)

-dashmap = "3"
+dashmap = "4"
A reviewer (Member) commented:

I seem to recall there was a regression issue with dashmap 4, though I don't remember the details. Perhaps it's relevant to the metrics performance regression?

A Contributor commented:

That's exactly my thinking as well: dashmap 4 has a different and slower implementation, and I suspect it to be the cause of the regression.

fanatid (author) commented:

Downgraded dashmap back to 3.

A Contributor commented:

@fanatid do you think this was the cause of the regression you noted here? If so, we might be able to upgrade the metrics dependency?

fanatid (author) commented:

No, it does not help. I downgraded the metrics crates as well and am waiting for the benchmark run to confirm that everything is OK.

fanatid commented Feb 9, 2021

Still a regression in some metrics benchmarks:

metrics_snapshot/cardinality/1        587.60 ns  +15.338%     unknown         unknown            0.00  regressed
metrics_snapshot/cardinality/10       4.1750 us  +21.560%     unknown         unknown            0.00  regressed
metrics_snapshot/cardinality/100      47.598 us  +30.240%     unknown         unknown            0.00  regressed
metrics_snapshot/cardinality/1000     544.69 us  +20.308%     unknown         unknown            0.00  regressed
metrics_snapshot/cardinality/10000    6.9082 ms  +27.867%     unknown         unknown            0.00  regressed

binarylogic commented

To clarify: with the metrics and dashmap upgrades removed, we're still seeing a regression? If so, do you have any other theories?

fanatid commented Feb 9, 2021

Yes, that's correct: I removed the metrics & dashmap updates and still see a regression. I have no idea which package causes this =( I will try to find it tomorrow =\

fanatid commented Feb 10, 2021

I pinned the metrics crates; there is no regression now. CI job: https://github.com/timberio/vector/runs/1869218472?check_suite_focus=true

Updated crates (in Cargo.toml):

  • async-graphql: 2.4.10 => 2.5.0
  • async-graphql-warp: 2.4.10 => 2.5.0
  • avro-rs: 0.12.0 (git) => 0.13.0

cargo update --aggressive:

$ cargo update --aggressive
    Updating git repository `https://github.com/hyperium/hyper`
    Updating crates.io index
    Updating git repository `https://github.com/bytecodealliance/lucet.git`
    Updating git repository `https://github.com/kyren/rlua`
    Updating git repository `https://github.com/tower-rs/tower`
    Updating git repository `https://github.com/tokio-rs/tracing`
    Updating assert_cmd v1.0.2 -> v1.0.3
    Updating async-process v1.0.1 -> v1.0.2
    Updating backtrace v0.3.55 -> v0.3.56
    Updating bstr v0.2.14 -> v0.2.15
    Updating bumpalo v3.4.0 -> v3.6.0
    Updating ctor v0.1.18 -> v0.1.19
    Updating data-encoding v2.3.1 -> v2.3.2
    Updating derivative v2.1.3 -> v2.2.0
    Updating duct v0.13.4 -> v0.13.5
    Updating encoding_rs v0.8.26 -> v0.8.28
    Updating filetime v0.2.13 -> v0.2.14
    Updating flate2 v1.0.19 -> v1.0.20
    Updating getrandom v0.2.1 -> v0.2.2
    Updating git2 v0.13.15 -> v0.13.17
    Updating httparse v1.3.4 -> v1.3.5
    Updating idna v0.2.0 -> v0.2.1
    Updating js-sys v0.3.46 -> v0.3.47
    Updating libc v0.2.82 -> v0.2.86
    Updating libgit2-sys v0.12.17+1.1.0 -> v0.12.18+1.1.0
    Updating log v0.4.13 -> v0.4.14
    Updating lru v0.6.3 -> v0.6.4
    Updating nom v6.0.1 -> v6.1.0
    Updating object v0.22.0 -> v0.23.0
    Updating pin-project v1.0.4 -> v1.0.5
    Updating pin-project-internal v1.0.4 -> v1.0.5
    Updating predicates v1.0.6 -> v1.0.7
    Updating predicates-core v1.0.1 -> v1.0.2
    Updating predicates-tree v1.0.1 -> v1.0.2
    Updating rand v0.8.2 -> v0.8.3
    Updating ring v0.16.19 -> v0.16.20
    Updating rpassword v5.0.0 -> v5.0.1
    Updating rust_decimal v1.9.0 -> v1.10.2
    Updating serde v1.0.119 -> v1.0.123
    Updating serde_derive v1.0.119 -> v1.0.123
    Updating serde_json v1.0.61 -> v1.0.62
    Updating serde_with v1.6.0 -> v1.6.2
    Updating serde_yaml v0.8.15 -> v0.8.16
    Updating sha-1 v0.9.2 -> v0.9.3
    Updating sha2 v0.9.2 -> v0.9.3
      Adding signal-hook v0.3.4
    Updating snap v1.0.3 -> v1.0.4
    Updating standback v0.2.14 -> v0.2.15
    Updating syn v1.0.58 -> v1.0.60
    Updating time v0.2.24 -> v0.2.25
    Updating tinyvec v1.1.0 -> v1.1.1
    Updating tokio v0.2.24 -> v0.2.25
    Updating tower-service v0.3.0 -> v0.3.1
    Updating tracing v0.1.22 -> v0.1.23
    Updating tracing-attributes v0.1.11 -> v0.1.12
    Updating typetag v0.1.6 -> v0.1.7
    Updating typetag-impl v0.1.6 -> v0.1.7
    Updating unicode-normalization v0.1.16 -> v0.1.17
    Updating wasm-bindgen v0.2.69 -> v0.2.70
    Updating wasm-bindgen-backend v0.2.69 -> v0.2.70
    Updating wasm-bindgen-futures v0.4.19 -> v0.4.20
    Updating wasm-bindgen-macro v0.2.69 -> v0.2.70
    Updating wasm-bindgen-macro-support v0.2.69 -> v0.2.70
    Updating wasm-bindgen-shared v0.2.69 -> v0.2.70
    Updating web-sys v0.3.46 -> v0.3.47

pablosichert (Contributor) left a comment:

Cool, looks good!

fanatid merged commit 44fb0d3 into master on Feb 11, 2021
fanatid deleted the bump-deps-weekly branch on February 11, 2021 06:37
jszwedko added a commit that referenced this pull request Mar 1, 2021
It was pinned in #6217 because it was believed it required Tokio 1.0, but I'm not actually seeing this.

Signed-off-by: Jesse Szwedko <jesse@szwedko.me>
Labels: domain: deps (Anything related to Vector's dependencies)