forked from elastic/elasticsearch
Pipeline registry #3
Open
afoucret wants to merge 173 commits into events-intake-branch from pipeline-registry
Conversation
When the credentials fail to verify, the error message does not say which API key failed. This makes it hard for users to fix the issue. This PR adds the API key ID to the error message to help with the situation.
…ngTo100ms (elastic#95018) This change enables allocation trace logging to help debug occasional CI test failures.
…version (elastic#92823)" (elastic#95016) This reverts commit 8d60562. Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
This PR avoids an extra de-serialization of role descriptors received in a cross cluster access request, by pushing the validation down to the role building step (where we necessarily de-serialize the received role descriptors). This also has the effect that we return a `400` instead of a `401`. I could wrap the exception so that we return a `403` instead, but I think a `400` makes the most sense, since we received a bad payload. Currently, this failure is _not_ audited. I can add logic to detect it in [`authorize()`](https://github.com/elastic/elasticsearch/blob/b17dfc77b9c48313921aaafa9a9e3da3e2739fd8/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authz/AuthorizationService.java#L317) and emit an audit event, either in a follow-up or in this PR. I just didn't want that to block review across time zones.
* Update privileges.asciidoc to add descriptions for some privileges. SF Case - 01341015 requested documenting the cluster privileges that aren't explained at https://www.elastic.co/guide/en/elasticsearch/reference/master/security-privileges.html. I drafted descriptions, but they probably need correction. - manage_autoscaling - manage_data_frame_transforms - manage_enrich * Update x-pack/docs/en/security/authorization/privileges.asciidoc Co-authored-by: Yang Wang <yang.wang@elastic.co> * Update x-pack/docs/en/security/authorization/privileges.asciidoc Co-authored-by: Yang Wang <yang.wang@elastic.co> * Apply review suggestion --------- Co-authored-by: Yang Wang <yang.wang@elastic.co> Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>
…5034) We don't use this constructor parameter any more, so this commit removes it.
* Update release notes to include 8.7.0 Release notes and migration guide from 8.7.0 release ported into main as well as re-generating 8.8.0 release notes. This latter step will be overwritten anyway, multiple times, by more up-to-date regeneration of the 8.8.0 release notes during the release process. * Remove coming 8.7.0 line * Update docs/reference/migration/migrate_8_7.asciidoc Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co> * Make same change to 8.8 --------- Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>
In elastic#94325 we introduced another forking step when submitting a publication, so we must extend the timeout in this test (and `DEFAULT_CLUSTER_STATE_UPDATE_DELAY`) by `DEFAULT_DELAY_VARIABILITY`. Closes elastic#94905
A handful of small changes to make the logging output of `CoordinatorTests` even more deterministic, for easier diffing. Relates elastic#94946
…c#95033) These objects take a route to/from their wire representation via an array, which is unnecessary. It's the same bytes on the wire as a list, so we can just use a list instead. Co-authored-by: Ievgen Degtiarenko <ievgen.degtiarenko@gmail.com>
Fixes an off-by-one bug when seeking to the first byte of the next page.
…stic#94517) This changes the serialization format for queries - when the index version is >=8.8.0, it serializes the actual transport version used into the stream. For BwC with old query formats, it uses the mapped TransportVersion for the index version. This can be modified later if needed to re-interpret the vint used to store TransportVersion to something else, allowing the format to be further modified if necessary.
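The version-conditional format described above can be sketched as follows. This is a hedged illustration, not Elasticsearch's actual code: the vint encoding, the version constant, and the function names are stand-ins; the real format writes the actual TransportVersion into the stream only for new-enough indices, while readers of older formats fall back to a TransportVersion mapped from the index version.

```python
NEW_FORMAT_INDEX_VERSION = 8_080_000  # hypothetical numeric form of index version 8.8.0

def encode_vint(value: int) -> bytes:
    """Variable-length int (LEB128-style), standing in for the wire vint format."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def serialize_query(index_version: int, transport_version: int, payload: bytes) -> bytes:
    """On new-format indices, write the actual transport version into the
    stream; on older indices nothing extra is written, and the reader
    instead derives a mapped version from the index version."""
    if index_version >= NEW_FORMAT_INDEX_VERSION:
        return encode_vint(transport_version) + payload
    return payload
```

Because the vint is a self-contained prefix, the meaning of that slot can later be re-interpreted without breaking older readers, which is the flexibility the change aims for.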
If all the reads before the final one happen via `compareAndExchangeRegister` then the final one might find `firstRegisterRead` to be set still, permitting it to fail. This commit treats calls to `compareAndExchangeRegister` as reads too, avoiding this problem. Closes elastic#94664
This test is supposed to trigger a failure by exposing a spurious value for the register, but sometimes it exposes `expectedMax` which is what we expect at the end of the register checks. With this commit we ensure that we don't inadvertently return a correct value. Closes elastic#94410
This change sets the stability of ent-search APIs to beta and their visibility to public. It also removes the feature flag link, since enabling the module is not considered a feature flag and the module is enabled by default.
Note that we use the encoding as follows: * for values taking [33, 40] bits per value encode using 40 bits per value * for values taking [41, 48] bits per value encode using 48 bits per value * for values taking [49, 56] bits per value encode using 56 bits per value This is an improvement over the encoding used by ForUtils that does not apply any compression for values taking more than 32 bits per value. Note that 40, 48 and 56 bits per value represent exact multiples of bytes (40 bits per value = 5 bytes, 48 bits per value = 6 bytes and 56 bits per value = 7 bytes). As a result we always write values using 3, 2 or 1 byte less than the 8 bytes required for a long value. We also apply compression to gauge metrics under the assumption that compressing values taking more than 32 bits per value works well for floating point values, because of the way floating point values are represented (IEEE 754 format).
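The rounding rule above (33-40 bits to 40, 41-48 to 48, 49-56 to 56) is simply "round up to the next whole-byte multiple". A minimal sketch of the arithmetic, with hypothetical helper names:

```python
def rounded_bits_per_value(bits_per_value: int) -> int:
    """Round a bits-per-value in [33, 56] up to a whole-byte multiple (40, 48 or 56)."""
    assert 33 <= bits_per_value <= 56
    return ((bits_per_value + 7) // 8) * 8

def bytes_saved_vs_long(bits_per_value: int) -> int:
    """Bytes saved per value compared with storing a raw 8-byte long."""
    return 8 - rounded_bits_per_value(bits_per_value) // 8
```

For example, values needing 33 bits are stored in 5 bytes instead of 8, saving 3 bytes per value; even the worst case (56 bits) still saves 1 byte per value over the uncompressed ForUtils path.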
* Remove extra step in manual downsampling docs * create -> view
* Enhanced REST tests for geo and cartesian centroid Coverage increased to cover cases for: * centroid over points * centroid over shapes * centroid over points with filter * centroid over shapes with filter * centroid over points with grouping * centroid over shapes with grouping * centroid over shapes with grouping and filter The last one was not done for points because the purpose of that test was primarily to validate the shape rules where centroids over GEOMETRYCOLLECTION would use only the highest dimensionality geometries for centroid calculation. * Enforce single shard So reduce risk of flakiness in aggregating over multiple documents
…rk (elastic#95048) There's no reason to prefix ops with their size over the network. We can verify the checksum once after reading each op, and simply stream ops when writing.
When parsing role descriptors, we ensure that the FieldPermissions (`"field_security":{ "grant":[ ... ], "except":[ ... ] }`) are valid - that is that any patterns compile correctly, and the "except" is a subset of the "grant". However, the previous implementation would not use the FieldPermissionsCache for this, so it would compile (union, intersect & minimize) automatons every time a role was parsed. This was particularly an issue when parsing roles (from the security index) in the GET /_security/role/ endpoint. If there were a large number of roles with field level security the automaton parsing could have significant impact on the performance of this API.
Pushes the chunking of `GET _nodes/stats` down to avoid creating unboundedly large chunks. With this commit we yield one chunk per shard (if `?level=shards`) or index (if `?level=indices`) and per HTTP client and per transport action. Closes elastic#93985
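The "push chunking down" idea can be sketched as a generator that yields one small chunk per index instead of materializing one unboundedly large response body. This is an illustrative Python sketch, not the actual Java chunked-REST machinery; names are hypothetical.

```python
import json

def chunked_index_stats(indices: dict):
    """Yield the stats response one chunk per index, so the largest chunk
    held in memory at any time is bounded by a single index's stats."""
    yield '{"indices":{'
    for i, (name, stats) in enumerate(indices.items()):
        sep = "," if i else ""
        yield f"{sep}{json.dumps(name)}:{json.dumps(stats)}"
    yield "}}"
```

A consumer (or an HTTP layer) can then write each chunk to the client as it is produced, which is what keeps memory bounded per client and per transport action.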
We have moved away from considering terminate_after a filtered collector when collecting hits, as we already did not when size is set to 0. That means we may shortcut the total hit count when terminate_after is used: we then return a total hit count retrieved from the index statistics, which is not early terminated, even though the actual collection of hits does terminate early. The corresponding test needs to be updated based on the new expectations. Closes elastic#94912
All the length implementations are the same so we can dry this up which might provide a speedup here and there since it gets us down to only two possible implementations of `BytesReference.length()` (releasable and normal ref) which should inline in most places.
This adds a QL utility method that parses an IP address into a BytesRef object.
This extracts a `CIDRUtils#isInRange()` function that takes as arguments an IP given directly as a byte array and a single CIDR string specification. This allows code that already has the IP parsed as bytes (e.g. stored as a BytesRef) to use it directly, avoiding a round-trip conversion to string just to call the existing `isInRange()`.
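The idea of checking a raw-bytes IP against a CIDR without first converting it back to a string can be sketched with the Python standard library. This illustrates the concept only; it is not the Java `CIDRUtils` code, and `is_in_range` is a hypothetical name.

```python
import ipaddress

def is_in_range(ip_bytes: bytes, cidr: str) -> bool:
    """Accepts the IP in 4-byte (IPv4) or 16-byte (IPv6) packed form,
    skipping any detour through a string representation of the address."""
    ip = ipaddress.ip_address(ip_bytes)                # packed bytes accepted directly
    network = ipaddress.ip_network(cidr, strict=False)  # tolerate host bits set
    return ip.version == network.version and ip in network
```

Code that stores addresses as packed bytes (the BytesRef case above) can call this directly with the stored bytes.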
This reverts commit 059bfd4.
…tic#95271) Adds a new include flag definition_status to the GET trained models API. When present, the trained model configuration returned in the response will have a new boolean field fully_defined, indicating whether the full model definition exists.
…ST handlers (elastic#94037) elastic#93607 added the ability to run Elasticsearch in "Serverless" mode, where access to REST endpoints can be restricted so that the full Elasticsearch API is not available (since a lot of it does not make sense in Serverless). By default no endpoints are available, but they can be exposed with `ServerlessScope` annotations. This PR follows up on elastic#93607 by adding PUBLIC and INTERNAL annotations to the REST handlers owned by the Core Infra team. There are several REST endpoints still under discussion. This PR does not label those, so they remain unavailable in Serverless mode.
* It adds the profiling index pattern profiling-* to the fleet server service privileges. * And adds profiling-* to kibana system role privileges. --------- Co-authored-by: Daniel Mitterdorfer <daniel.mitterdorfer@elastic.co>
The pipeline-registry branch was force-pushed from a3ba148 to 6233ed5, then from 6233ed5 to 4d39402.
afoucret pushed a commit that referenced this pull request on Jan 24, 2025: …uginFuncTest builds distribution from branches via archives extractedAssemble [bwcDistVersion: 8.1.3, bwcProject: bugfix2, expectedAssembleTaskName: extractedAssemble, #3] elastic#119261
Still a WIP. Introduces a `PipelineRegistry` (`AnalyticsIngestPipelineRegistry`) that can be used to manage ingest pipelines.