bump azure core dependency to mitigate woodstox CVE #15432
Closed
Conversation
When the refresh token is retrieved for the UI, we currently send HTTP status 303, assuming that every client will simply repeat the call against the Location header. While this works for GET/PUT verbs, it does not for non-idempotent ones like POST, as every JS HTTP client is expected to issue a GET against the Location after a 303 on a POST. Because of that, I changed it to 307, which forces every client to repeat exactly the same request, regardless of the verb. Co-authored-by: s2lomon <s2lomon@gmail.com>
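A minimal sketch of the status-code change, assuming a JAX-RS style handler (the class and method names below are illustrative, not the actual resource): 303 tells clients to follow the Location header with a GET, while 307 makes them repeat the original method and body.

```java
import java.net.URI;
import javax.ws.rs.core.Response;

public class RefreshRedirectSketch
{
    // Hypothetical helper illustrating the status-code change; the real resource class differs.
    public static Response redirectBackToOriginalRequest(URI originalRequestUri)
    {
        // Before: 303 See Other -- clients follow up with a GET, which breaks POST flows.
        // return Response.seeOther(originalRequestUri).build();

        // After: 307 Temporary Redirect -- clients repeat the same method and body.
        return Response.temporaryRedirect(originalRequestUri).build();
    }
}
```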
The actual work is done in the `pageProjectWork.process()` call, while `projection.project` only sets up the projection. So both `expressionProfiler` and `metrics.recordProjectionTime` need to wrap that call.
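A hedged sketch of the instrumentation move, using generic stand-ins (a `BooleanSupplier` for the `process()` call and a `LongConsumer` for the metrics hook) rather than the real Trino types: the timing wraps the call that performs the work, not the setup call.

```java
import java.util.function.BooleanSupplier;
import java.util.function.LongConsumer;

public final class ProjectionTimingSketch
{
    private ProjectionTimingSketch() {}

    // Illustrative only: time pageProjectWork.process(), which does the actual projection,
    // rather than projection.project(), which only sets the work up.
    public static boolean timedProcess(BooleanSupplier pageProjectWorkProcess, LongConsumer recordProjectionNanos)
    {
        long start = System.nanoTime();
        try {
            return pageProjectWorkProcess.getAsBoolean();
        }
        finally {
            recordProjectionNanos.accept(System.nanoTime() - start);
        }
    }
}
```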
Removes outdated comments and unnecessary methods in local exchange PartitioningExchanger since the operator is no longer implemented in a way that attempts to be thread-safe.
- Change ColumnHandle to BigQueryColumnHandle in BigQueryTableHandle
- Extract buildColumnHandles in BigQueryClient
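A simplified sketch of the narrowed handle type (both records below are heavily reduced stand-ins for the real Trino classes, and `projectedColumns` is an illustrative field name):

```java
import java.util.List;

// Reduced stand-ins for the real BigQuery connector classes.
record BigQueryColumnHandle(String name, String bigQueryType) {}

record BigQueryTableHandle(
        String projectId,
        String datasetName,
        String tableName,
        // Previously typed as List<ColumnHandle>; narrowing to the connector-specific
        // handle avoids casting when the columns are consumed.
        List<BigQueryColumnHandle> projectedColumns) {}
```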
The new field allows the table function to declare during analysis which columns from the input tables are necessary to execute the function. The required columns can then be validated by the analyzer. This declaration can also be used by the optimizer to prune any input columns that are not used by the table function.
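An illustrative model of the new declaration (these are not the Trino SPI types; the map-based shape below only shows the idea of recording required column indexes per input table argument during analysis):

```java
import java.util.List;
import java.util.Map;

// Illustrative model only: an analysis result that records, per input table argument,
// which column indexes the table function actually needs.
record TableFunctionAnalysisSketch(Map<String, List<Integer>> requiredColumns) {}

class RequiredColumnsExample
{
    static TableFunctionAnalysisSketch analyze()
    {
        // The function declares it only reads columns 0 and 2 of the "INPUT" argument;
        // the analyzer can validate these, and the optimizer can prune the unused ones.
        return new TableFunctionAnalysisSketch(Map.of("INPUT", List.of(0, 2)));
    }
}
```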
Change how DirectExchangeClient.scheduleRequestIfNecessary calculates the number of clients to request during the exchange phase: use the average request size of the specific client instead of the aggregated average across all clients.
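A hedged sketch of the sizing change (method and variable names are illustrative; the real `scheduleRequestIfNecessary` takes more inputs into account): the request count is now derived from this client's own average response size instead of an average aggregated over all clients.

```java
public final class ExchangeRequestSizing
{
    private ExchangeRequestSizing() {}

    // Illustrative only. Before the change the divisor was the aggregated average
    // request size across all clients; now it is the specific client's own average.
    public static int clientsToRequest(long remainingBufferBytes, long thisClientAverageRequestBytes)
    {
        if (thisClientAverageRequestBytes <= 0) {
            return 1;  // no history yet for this client: schedule a single request
        }
        return (int) Math.max(1, remainingBufferBytes / thisClientAverageRequestBytes);
    }
}
```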
According to https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3, the scope parameter in the token request is actually redundant, as it was already provided in the authorization request. The refresh token request, on the other hand, should still provide it.
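A sketch of the two request bodies under RFC 6749, built with `java.net.http` (the endpoint and parameter values are placeholders and are assumed to be URL-encoded already): the authorization-code exchange omits `scope`, while the refresh-token grant still sends it.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public final class OAuthRequestSketch
{
    private OAuthRequestSketch() {}

    // Authorization code exchange (RFC 6749 section 4.1.3): scope is omitted,
    // because it was already bound to the code during the authorization request.
    public static HttpRequest tokenRequest(URI tokenEndpoint, String code, String redirectUri)
    {
        String body = "grant_type=authorization_code&code=" + code + "&redirect_uri=" + redirectUri;
        return formPost(tokenEndpoint, body);
    }

    // Refresh token grant (RFC 6749 section 6): scope may still be provided
    // to request the same (or a narrower) scope for the new access token.
    public static HttpRequest refreshRequest(URI tokenEndpoint, String refreshToken, String scope)
    {
        String body = "grant_type=refresh_token&refresh_token=" + refreshToken + "&scope=" + scope;
        return formPost(tokenEndpoint, body);
    }

    private static HttpRequest formPost(URI endpoint, String body)
    {
        return HttpRequest.newBuilder(endpoint)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }
}
```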
The Parquet reader does not support pushdown on fields of a row type. The checks in `IcebergPageSourceProvider#getParquetTupleDomain` used to prevent this, but they stopped working when dereference pushdown was implemented. If a row field had the same name as a top-level column, this would have resulted in a correctness issue.
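A hedged sketch of the restored guard (the types below are simplified stand-ins for the Trino and Parquet classes involved in `getParquetTupleDomain`): predicates on row-typed columns, or on fields dereferenced out of a row, are dropped instead of being pushed into the Parquet reader.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class ParquetTupleDomainSketch
{
    private ParquetTupleDomainSketch() {}

    // Simplified stand-in for the real column handle; a String stands in for the Domain type.
    record Column(String baseName, List<String> dereferencePath, boolean baseTypeIsRow) {}

    // Keep only predicates the Parquet reader can enforce: no pushdown on row-typed
    // columns or on fields dereferenced out of a row.
    public static Map<Column, String> parquetSafeDomains(Map<Column, String> effectivePredicate)
    {
        Map<Column, String> pushable = new HashMap<>();
        for (Map.Entry<Column, String> entry : effectivePredicate.entrySet()) {
            Column column = entry.getKey();
            if (column.baseTypeIsRow() || !column.dereferencePath().isEmpty()) {
                // Pushing these down could match a Parquet column by base name alone and
                // hit an unrelated top-level column, causing wrong results.
                continue;
            }
            pushable.put(column, entry.getValue());
        }
        return pushable;
    }
}
```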
Add syntax for defining how stale an MV can be and still be queryable.
Example:

Fragment 2 [SOURCE]
    CPU: 618.74ms, Scheduled: 1.23s, Blocked 2.95s (Input: 0.00ns, Output: 0.00ns)
    Input: 6001215 rows (51.51MB); per task: avg.: 6001215.00 std.dev.: 0.00, Output: 3 rows
    Output buffer active time: 5.66ms, buffer utilization distribution (%): {p01=0.00, p05=0.00, p10=0.00, p25=0.00, p50=0.00, p75=0.00, p90=0.00, p95=0.00, p99=0.00, max=…}
    Task output distribution: {count=1.00, p01=12.87MB, p05=12.87MB, p10=12.87MB, p25=12.87MB, p50=12.87MB, p75=12.87MB, p90=12.87MB, p95=12.87MB, p99=12.87MB, max=12.87MB}
    Task input distribution: {count=1.00, p01=12.87MB, p05=12.87MB, p10=12.87MB, p25=12.87MB, p50=12.87MB, p75=12.87MB, p90=12.87MB, p95=12.87MB, p99=12.87MB, max=12.87MB}
Example:

Fragment 1 [HASH]
    Amount of input data processed by the workers for this stage might be skewed
Please take a look at the CI failure: https://github.com/trinodb/trino/actions/runs/3712280909/jobs/6293641194
The build doesn't seem to be green. I think this is the reason.
@tomrijntjes there was apparently a force push to the master branch (per #15365 (comment)), so your PR appears to contain some unrelated commits. Please rebase; after you do that, make sure the CI passes.
Description
Bump the azure-core dependency to address the Woodstox CVE.
Additional context and related issues
Two plugins rely on azure-core. Version 1.25 depends on Woodstox 6.2.6, which has a known CVE that showed up in our vulnerability scan. This CVE has been mitigated in azure-core 1.34, which pulls in Woodstox 6.4.0 as a transitive dependency.
Release notes
(x) This is not user-visible or docs only and no release notes are required.
( ) Release notes are required, please propose a release note for me.
( ) Release notes are required, with the following suggested text: