Merge pull request #254 from elastic/main
🤖 ESQL: Merge upstream
elasticsearchmachine authored Sep 27, 2022
2 parents d614e3c + 7729972 commit f44c1b1
Showing 112 changed files with 2,858 additions and 410 deletions.
4 changes: 3 additions & 1 deletion TESTING.asciidoc
@@ -107,18 +107,20 @@ password: `elastic-password`.
- In order to set a different keystore password: `--keystore-password`
- In order to set an Elasticsearch setting, provide a setting with the following prefix: `-Dtests.es.`
- In order to pass a JVM setting, e.g. to disable assertions: `-Dtests.jvm.argline="-da"`
- In order to use HTTPS: `./gradlew run --https`

==== Customizing the test cluster for `./gradlew run`

You may need to customize the cluster configuration for the `./gradlew run` task.
Some settings can be set via the command line, but other options require updating the task itself.
One option is to find the task in the source code and configure it there.
(The task is currently defined in build-tools-internal/src/main/groovy/elasticsearch.run.gradle.)
However, this requires modifying a source-controlled file and is subject to accidental commits.
Alternatively, you can use a Gradle init script, passed with the `-I` flag, to inject custom build logic and configure this task locally.

For example:

To enable HTTPS for use with `./gradlew run`, an extraConfigFile needs to be added to the cluster configuration.
To use a custom certificate for HTTPS with `./gradlew run`, you can do the following.
Create a file (for example ~/custom-run.gradle) with the following contents:
-------------------------------------
rootProject {
27 changes: 27 additions & 0 deletions build-tools-internal/src/main/resources/run.ssl/private-ca.key
@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAvdQ8UlHoUA8SnP4MtP96p9Iv0z2qTPJr+xPYtn5ZSfQQhX0t
pEfGFqnLxeYjLGM8jjaGUZGkDaWEAtFSytm9azmnBp+smrDfoG6vNj13qu051UFi
uEbqSYNDEDU65KvphWOFQUaGU0CyX0WK6+jTiC5JCO/baBgH+hxdACYDa/fb7ydd
cubHj0JvGbfWpg441JReU/QF7w9fQcp25ea9aNCMXmg+NRfSY7LAFFue9H+LGvOI
20gkQlDtI3DJqeTz0rzJaF44colZUAULhoQyb0jyhy0OWX1iYlMaIrAPoEORoOJ8
Z/7PH7loWY8eiiWRu4EHLU06u1vysqwMb+Ij6QIDAQABAoIBAFUq8Stv4z50HMJB
+zqDuyirVVi9vHgMdeTuuRbbpjzXW0hA6ubfauEFKk8uW06RcXxOu0HCiauzvIA1
ISOwwFrowWbn4d1/iL2mm0bHGjceewmSbfPGoVv9H+wYLcWl2b5GceVg+mhEySKU
hWkliy54sbzoPHS9/2o4KoOkCnn4Mpx+xK9J4mPb1nr7PyS7j2swrGESgtjJKleW
r8n8R0HrJvs3xF6CzteLDiDPy92eK3SWkk1xZcAzmJWFhawsTzKemY91+e1bXFsP
eOw2zrL5i/unTRRIWigAQORLxwqcqIkyeXDh8BIZBi2+9DGIlKgDLaZJ1UucR7ce
zaccfa8CgYEAzF5DNQ2LooAWviK9Z5wWY5z9ekaZNIPxwVw8GOp5BZ9O0mDoMgiC
G9TPx5QdZCQexg4iWGQsWdMsyIgFVJmWgiexjMMycSvcCj2SCi7aeTJ0aitdSoyq
jgPEn9uaN08fl3M65usCJyzPh6pgBG8BFyR5jRL5Y4zfR4LHFXZRYxsCgYEA7cmi
9T/ZGo2FX5iL0TelCr4AyoS0JH0HcvVqJ1ofqHz6VVy84y+m/OzJWbocc5gEOm/U
uryP/5Tv7HDYJHvcliqK+NJKtqF8sLK5LenYuMPX4XITapAJJk16yfjBnaq8oWVG
kjdCEHKcNrkx+/2TznnLXcJhJcrm6R8K+8FlAUsCgYB+GbOyapc8P3jI/TqNUbxm
3plw91rVEoz7WGQko5jlJTVHjk/3f1R4w8kpRnUUM01hu5rpm3XaPvklCvjvCI3b
5Y4iYtcfCYcOMouICPz5R26ZjARWWZFra1vJn4D6m7HMi2dO0LdVYMr01OXGFpA/
rVvq9kg3atbikwkwbv8s/QKBgHVOcvkATY9e37xAWkGVbPM2ttcxzljt4V3iGkNd
n56UQT8ZaAm/+WZvPgno2Z5hETzu7IhO+87/X7lKFicxf6oJRNPpkng0hHn7QYWY
BpVn8DlE+LUqZ4kg0gGPmZy5nSMV/lGltw68K7qHdFQ3TdKfnScc/KYTSgUZjmaS
isyvAoGAbZVftomScxIQgHOCVMzPh9G3pUNCr3hakQRXzRaHWI7Z4ULUmlCU6wC+
/2G3/nVNFy8w2TbASK0T0vXbCLJAyLpYicdPkKaZ20jvhzMOdGvPY/CtU5i/BDsX
RgLbHCWVy/1Fd08o1Z39uDbItIqxCLzahbF4UijxtOFmQtbCc1M=
-----END RSA PRIVATE KEY-----
Binary file not shown.
Binary file not shown.
20 changes: 20 additions & 0 deletions build-tools-internal/src/main/resources/run.ssl/public-ca.pem
@@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDSTCCAjGgAwIBAgIUUWUwcrChxm++t6WUAImb+lZyGGYwDQYJKoZIhvcNAQEL
BQAwNDEyMDAGA1UEAxMpRWxhc3RpYyBDZXJ0aWZpY2F0ZSBUb29sIEF1dG9nZW5l
cmF0ZWQgQ0EwHhcNMjIwOTEzMjIzMzUxWhcNNDIwOTEzMjIzMzUxWjA0MTIwMAYD
VQQDEylFbGFzdGljIENlcnRpZmljYXRlIFRvb2wgQXV0b2dlbmVyYXRlZCBDQTCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL3UPFJR6FAPEpz+DLT/eqfS
L9M9qkzya/sT2LZ+WUn0EIV9LaRHxhapy8XmIyxjPI42hlGRpA2lhALRUsrZvWs5
pwafrJqw36BurzY9d6rtOdVBYrhG6kmDQxA1OuSr6YVjhUFGhlNAsl9Fiuvo04gu
SQjv22gYB/ocXQAmA2v32+8nXXLmx49Cbxm31qYOONSUXlP0Be8PX0HKduXmvWjQ
jF5oPjUX0mOywBRbnvR/ixrziNtIJEJQ7SNwyank89K8yWheOHKJWVAFC4aEMm9I
8octDll9YmJTGiKwD6BDkaDifGf+zx+5aFmPHoolkbuBBy1NOrtb8rKsDG/iI+kC
AwEAAaNTMFEwHQYDVR0OBBYEFMQM/59PJi8sWLRHnE9WKbGJ1df1MB8GA1UdIwQY
MBaAFMQM/59PJi8sWLRHnE9WKbGJ1df1MA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZI
hvcNAQELBQADggEBAAmLsK2N/H1E3aBgHiitJog0+YMmyERI4q3Q5p3ZfIxATUUN
uxA0Q25rlwtU6VkxvQwjrSZ01CvKDdCdPV8X8TVu8xm6+3JXXmubVnejGlhIjwzW
Z9fRz4R8l9/EjoMYG2igdcc9y9kqkzsDQ4TLcn3EVkkiAENrO2flKEQOY4SXg7cN
v0I/tyIoyPT0qCr/f47BPIq0kQmfhghd5vPmiiBIQtK303/0y3/AUtLGGZLGIGNF
KAxLPoN4ly7VXQ0RZ5PDy6r0EJt5Ega8LDXPMELKN52ubxkHDE47Laq6kuRRqKxF
yXgEpxZV0K8fXvp4UXoCLonbYSQRky0GgUOvqso=
-----END CERTIFICATE-----
54 changes: 54 additions & 0 deletions build-tools-internal/src/main/resources/run.ssl/readme.md
@@ -0,0 +1,54 @@
This directory contains public and private keys for use with the manual run tasks.

All certs:
* expire September 13, 2042,
* have no required password,
* require the hostname to be localhost or esX (where X = the name in the certificate) for hostname verification,
* were created by the elasticsearch-certutil tool.

## Usage

Use public-ca.pem as the `xpack.security.transport.ssl.certificate_authorities`.
Use public-ca.pem for third-party tools to trust the certificates in use.
Use private-certX.p12 for node X's `xpack.security.[transport|http].ssl.keystore`.

## Certificate Authority (CA):

* private-ca.key : the private key of the signing CA in PEM format. Useful for signing additional certificates.
* public-ca.pem : the public key of the signing CA in PEM format. Useful as the certificate_authorities.

To recreate the CA:
```bash
bin/elasticsearch-certutil ca -pem -days 7305
unzip elastic-stack-ca.zip
mv ca/ca.crt public-ca.pem
mv ca/ca.key private-ca.key
```

## Node Certificates signed by the CA

* private-certX.p12 : the public/private key of the certificate signed by the CA. Useful as the keystore.

To create new certificates signed by CA:
```bash
export i=1 # update this
rm -rf certificate-bundle.zip public-cert$i.pem private-cert$i.key private-cert$i.p12 instance
bin/elasticsearch-certutil cert -ca-key private-ca.key -ca-cert public-ca.pem -days 7305 -pem -dns localhost,es$i -ip 127.0.0.1,::1
unzip certificate-bundle.zip
mv instance/instance.crt public-cert$i.pem
mv instance/instance.key private-cert$i.key
openssl pkcs12 -export -out private-cert$i.p12 -inkey private-cert$i.key -in public-cert$i.pem -passout pass: #convert public/private key to p12
```

Other useful commands:
```bash
openssl rsa -in private-ca.key -check # check private key
openssl pkcs12 -info -in private-cert$i.p12 -nodes -nocerts # read private keys from p12
openssl pkcs12 -info -in private-cert$i.p12 -nodes -nokeys # read public keys from p12
openssl x509 -in public-cert$i.pem -text # decode PEM formatted public key
openssl s_client -showcerts -connect localhost:9200 </dev/null # show cert from URL
```





2 changes: 1 addition & 1 deletion build-tools-internal/version.properties
@@ -1,5 +1,5 @@
elasticsearch = 8.6.0
lucene = 9.4.0-snapshot-923a9f800ae
lucene = 9.4.0-snapshot-f5d0646daa5

bundled_jdk_vendor = openjdk
bundled_jdk = 18.0.2.1+1@db379da656dc47308e138f21b33976fa
Expand Up @@ -1584,6 +1584,11 @@ public List<?> getSettings() {
return settings.getNormalizedCollection();
}

@Internal
Set<String> getSettingKeys() {
return settings.keySet();
}

@Nested
public List<?> getSystemProperties() {
return systemProperties.getNormalizedCollection();
@@ -1675,7 +1680,8 @@ void configureHttpWait(WaitForHttpResource wait) {
if (settings.containsKey("xpack.security.http.ssl.certificate")) {
wait.setCertificateAuthorities(getConfigDir().resolve(settings.get("xpack.security.http.ssl.certificate").toString()).toFile());
}
if (settings.containsKey("xpack.security.http.ssl.keystore.path")) {
if (settings.containsKey("xpack.security.http.ssl.keystore.path")
&& settings.containsKey("xpack.security.http.ssl.certificate_authorities") == false) { // Can not set both trust stores and CA
wait.setTrustStoreFile(getConfigDir().resolve(settings.get("xpack.security.http.ssl.keystore.path").toString()).toFile());
}
if (keystoreSettings.containsKey("xpack.security.http.ssl.keystore.secure_password")) {
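The guard added in this hunk encodes a small precedence rule: the HTTP wait logic only treats the keystore as a truststore when no explicit CA list is configured, since the two trust sources cannot be combined. A minimal Python sketch of just that condition (the function name is hypothetical; the setting keys are the real ones from the hunk):

```python
def should_use_keystore_as_truststore(settings: dict) -> bool:
    """Mirror the guard in configureHttpWait: the keystore only doubles as a
    truststore when no explicit certificate_authorities list is configured."""
    return (
        "xpack.security.http.ssl.keystore.path" in settings
        and "xpack.security.http.ssl.certificate_authorities" not in settings
    )
```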
Expand Up @@ -17,21 +17,27 @@

import java.io.BufferedReader;
import java.io.Closeable;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BooleanSupplier;
import java.util.function.Function;
import java.util.stream.Collectors;

public class RunTask extends DefaultTestClustersTask {

private static final Logger logger = Logging.getLogger(RunTask.class);
public static final String CUSTOM_SETTINGS_PREFIX = "tests.es.";
private static final Logger logger = Logging.getLogger(RunTask.class);
private static final String tlsCertificateAuthority = "public-ca.pem";
private static final String httpsCertificate = "private-cert1.p12";
private static final String transportCertificate = "private-cert2.p12";

private Boolean debug = false;

@@ -41,6 +47,12 @@ public class RunTask extends DefaultTestClustersTask {

private String keystorePassword = "";

private Boolean useHttps = false;

private final Path tlsBasePath = Path.of(
new File(getProject().getProjectDir(), "build-tools-internal/src/main/resources/run.ssl").toURI()
);

@Option(option = "debug-jvm", description = "Enable debugging configuration, to allow attaching a debugger to elasticsearch.")
public void setDebug(boolean enabled) {
this.debug = enabled;
@@ -86,6 +98,17 @@ public String getDataDir() {
return dataDir.toString();
}

@Option(option = "https", description = "Helper option to enable HTTPS")
public void setUseHttps(boolean useHttps) {
this.useHttps = useHttps;
}

@Input
@Optional
public Boolean getUseHttps() {
return useHttps;
}

@Override
public void beforeStart() {
int httpPort = 9200;
@@ -120,6 +143,22 @@ public void beforeStart() {
if (keystorePassword.length() > 0) {
node.keystorePassword(keystorePassword);
}
if (useHttps) {
validateHelperOption("--https", "xpack.security.http.ssl", node);
node.setting("xpack.security.http.ssl.enabled", "true");
node.extraConfigFile("https.keystore", tlsBasePath.resolve(httpsCertificate).toFile());
node.extraConfigFile("https.ca", tlsBasePath.resolve(tlsCertificateAuthority).toFile());
node.setting("xpack.security.http.ssl.keystore.path", "https.keystore");
node.setting("xpack.security.http.ssl.certificate_authorities", "https.ca");
}
if (findConfiguredSettingsByPrefix("xpack.security.transport.ssl", node).isEmpty()) {
node.setting("xpack.security.transport.ssl.enabled", "true");
node.setting("xpack.security.transport.ssl.client_authentication", "required");
node.extraConfigFile("transport.keystore", tlsBasePath.resolve(transportCertificate).toFile());
node.extraConfigFile("transport.ca", tlsBasePath.resolve(tlsCertificateAuthority).toFile());
node.setting("xpack.security.transport.ssl.keystore.path", "transport.keystore");
node.setting("xpack.security.transport.ssl.certificate_authorities", "transport.ca");
}
}
}
if (debug) {
@@ -192,4 +231,23 @@ public void runAndWait() throws IOException {
}
}
}

/**
* Disallow overlap between helper options and explicit configuration
*/
private void validateHelperOption(String option, String prefix, ElasticsearchNode node) {
Set<String> preConfigured = findConfiguredSettingsByPrefix(prefix, node);
if (preConfigured.isEmpty() == false) {
throw new IllegalArgumentException("Can not use " + option + " with " + String.join(",", preConfigured));
}
}

/**
* Find any settings configured with a given prefix
*/
private Set<String> findConfiguredSettingsByPrefix(String prefix, ElasticsearchNode node) {
Set<String> preConfigured = new HashSet<>();
node.getSettingKeys().stream().filter(key -> key.startsWith(prefix)).forEach(preConfigured::add);
return preConfigured;
}
}
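The two helpers above are small enough to model directly. A hedged Python sketch of the intended behavior (note that `findConfiguredSettingsByPrefix` is presumably meant to collect the matching setting keys themselves, which is what this sketch does):

```python
def find_configured_settings_by_prefix(prefix: str, setting_keys: set) -> set:
    """Collect every configured setting key that starts with the prefix."""
    return {key for key in setting_keys if key.startswith(prefix)}


def validate_helper_option(option: str, prefix: str, setting_keys: set) -> None:
    """Reject a helper option such as --https when the user has already
    configured any setting under the same prefix (no overlap allowed)."""
    pre_configured = find_configured_settings_by_prefix(prefix, setting_keys)
    if pre_configured:
        raise ValueError(
            "Can not use " + option + " with " + ",".join(sorted(pre_configured))
        )
```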
5 changes: 5 additions & 0 deletions docs/changelog/89238.yaml
@@ -0,0 +1,5 @@
pr: 89238
summary: "ILM: Get policy support wildcard name"
area: ILM+SLM
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/90038.yaml
@@ -0,0 +1,5 @@
pr: 90038
summary: "Synthetic `_source`: `ignore_malformed` for `ip`"
area: TSDB
type: enhancement
issues: []
32 changes: 31 additions & 1 deletion docs/changelog/90116.yaml
@@ -2,4 +2,34 @@ pr: 90116
summary: "Release time-series (TSDB) functionality"
area: "TSDB"
type: feature
issues: []
issues:
- 74660
highlight:
title: Release time series data stream (TSDS) functionality
body: |-
Elasticsearch offers support for time series data stream (TSDS) indices. A TSDS index is
an index that contains time series metrics data as part of a data stream. Elasticsearch
routes the incoming documents into a TSDS index so
that all the documents for a particular time series are on the same shard, and
then sorts the shard by time series and timestamp. This structure
has a few advantages:
1. Documents from the same time series are next to each other on the shard, and
hence stored next to each other on the disk, so the operating system
pages are much more homogeneous and compress better, yielding massive reduction
in TCO.
2. The analysis of a time series typically involves comparing two consecutive
docs (samples), examining the last doc in a given time window, etc., which is quite
complex when the next doc could be on any shard, and in fact on any index. Sorting
by time series and timestamp allows improved analysis, both in terms of performance
and in terms of our ability to add new aggregations.
Finally, as part of the Index Lifecycle Management of metrics data time series,
Elasticsearch enables a Downsampling action. When an index is downsampled,
Elasticsearch keeps a single document with statistical summaries for each bucket
of time in the time series. Supported aggregations can then be run on the data
stream and include both downsampled indices and raw data indices, without the
user needing to be aware of that. Downsampling of downsampled indices, to more
coarse time resolution, is also supported.
notable: true
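The routing, sorting, and downsampling described in this highlight can be illustrated with a toy model (plain Python, not Elasticsearch internals; the "shard" is just a list and the bucket size is arbitrary):

```python
from collections import defaultdict

# Toy "shard": docs for several time series, arriving in arbitrary order.
docs = [
    {"tsid": "host-a", "ts": 30, "v": 3.0},
    {"tsid": "host-b", "ts": 10, "v": 7.0},
    {"tsid": "host-a", "ts": 10, "v": 1.0},
    {"tsid": "host-b", "ts": 40, "v": 9.0},
    {"tsid": "host-a", "ts": 20, "v": 2.0},
]

# TSDS-style index sort: by time series id, then timestamp, so consecutive
# samples of one series sit next to each other (better compression, and
# adjacent-sample comparisons become local operations).
sorted_docs = sorted(docs, key=lambda d: (d["tsid"], d["ts"]))


def downsample(docs, bucket_size):
    """One summary (min/max/sum/count) per (tsid, time bucket)."""
    buckets = defaultdict(
        lambda: {"min": float("inf"), "max": float("-inf"), "sum": 0.0, "count": 0}
    )
    for d in docs:
        b = buckets[(d["tsid"], d["ts"] // bucket_size * bucket_size)]
        b["min"] = min(b["min"], d["v"])
        b["max"] = max(b["max"], d["v"])
        b["sum"] += d["v"]
        b["count"] += 1
    return dict(buckets)


summaries = downsample(sorted_docs, bucket_size=100)
```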
5 changes: 5 additions & 0 deletions docs/changelog/90200.yaml
@@ -0,0 +1,5 @@
pr: 90200
summary: Add profiling information for knn vector queries
area: Vector Search
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/90290.yaml
@@ -0,0 +1,5 @@
pr: 90290
summary: "Fixed: aggregate_metric_double multi values exception"
area: Aggregations
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/90295.yaml
@@ -0,0 +1,5 @@
pr: 90295
summary: "Add validations for the downsampling ILM action"
area: "ILM+SLM"
type: enhancement
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/90303.yaml
@@ -0,0 +1,6 @@
pr: 90303
summary: Use `IndexOrDocValues` query for IP range queries
area: Search
type: enhancement
issues:
- 83658
6 changes: 6 additions & 0 deletions docs/changelog/90317.yaml
@@ -0,0 +1,6 @@
pr: 90317
summary: "Aggs: fix `auto_date_histogram` > `ip_range`"
area: Aggregations
type: bug
issues:
- 90121
9 changes: 9 additions & 0 deletions docs/changelog/90347.yaml
@@ -0,0 +1,9 @@
pr: 90347
summary: Fix NPE in transform scheduling
area: Transform
type: bug
issues:
- 90356
- 88203
- 90301
- 90255
6 changes: 6 additions & 0 deletions docs/changelog/90365.yaml
@@ -0,0 +1,6 @@
pr: 90365
summary: Update min version for health node reporting to 8.5
area: Health
type: bug
issues:
- 90359
17 changes: 10 additions & 7 deletions docs/reference/indices/forcemerge.asciidoc
@@ -39,13 +39,16 @@ deleted documents. Merging normally happens automatically, but sometimes it is
useful to trigger a merge manually.

// tag::force-merge-read-only-warn[]
WARNING: **Force merge should only be called against an index after you have
finished writing to it.** Force merge can cause very large (>5GB) segments to
be produced, and if you continue to write to such an index then the automatic
merge policy will never consider these segments for future merges until they
mostly consist of deleted documents. This can cause very large segments to
remain in the index which can result in increased disk usage and worse search
performance.
WARNING: **We recommend only force merging a read-only index (meaning the index
is no longer receiving writes).** When documents are updated or deleted, the
old version is not immediately removed, but instead soft-deleted and marked
with a "tombstone". These soft-deleted documents are automatically cleaned up
during regular segment merges. But force merge can cause very large (> 5GB)
segments to be produced, which are not eligible for regular merges. So the
number of soft-deleted documents can then grow rapidly, resulting in higher
disk usage and worse search performance. If you regularly force merge an index
receiving writes, this can also make snapshots more expensive, since the new
documents can't be backed up incrementally.
// end::force-merge-read-only-warn[]
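The failure mode in this warning can be sketched with a toy simulation (the 5GB threshold comes from the text above; real Lucene merge policies are considerably more nuanced, but the point is that oversized segments stop being merge candidates):

```python
MAX_MERGED_SEGMENT_GB = 5  # regular merges won't touch segments above this


def run_regular_merges(segments):
    """Toy merge pass: any segment small enough for the regular merge policy
    gets rewritten, dropping its soft-deleted docs; an oversized (e.g.
    force-merged) segment is skipped, so its deleted docs linger."""
    result = []
    for seg in segments:
        if seg["size_gb"] <= MAX_MERGED_SEGMENT_GB:
            result.append({"size_gb": seg["size_gb"], "deleted_docs": 0})
        else:
            result.append(seg)  # too big: not eligible for regular merges
    return result


# A force merge on an index still receiving writes produced one huge segment
# that keeps accumulating soft-deleted docs, next to a normal small segment.
segments = [
    {"size_gb": 20, "deleted_docs": 1_000_000},  # force-merged, still updated
    {"size_gb": 2, "deleted_docs": 50_000},      # normal segment
]
segments = run_regular_merges(segments)
```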


