From 92e9f220d61d1e0db48b461dc4364252be50023c Mon Sep 17 00:00:00 2001 From: Derek Ho Date: Fri, 15 Dec 2023 11:38:04 -0500 Subject: [PATCH 1/8] Update some admin:admin references Signed-off-by: Derek Ho --- .../install-opensearch/debian.md | 8 ++++---- .../install-opensearch/rpm.md | 12 +++++------ .../install-opensearch/tar.md | 10 ++++++++-- .../install-opensearch/windows.md | 10 ++++++++-- _monitoring-your-cluster/pa/index.md | 2 +- _observing-your-data/log-ingestion.md | 2 +- _observing-your-data/trace/getting-started.md | 2 +- _reporting/rep-cli-env-var.md | 2 +- _search-plugins/sql/sql/index.md | 2 +- .../access-control/cross-cluster-search.md | 20 ++++++++++--------- _security/access-control/impersonation.md | 2 +- _troubleshoot/index.md | 2 +- .../remote-store/index.md | 2 +- _tuning-your-cluster/index.md | 2 +- .../replication-plugin/auto-follow.md | 10 +++++----- .../replication-plugin/getting-started.md | 2 +- _upgrade-to/upgrade-to.md | 4 ++-- 17 files changed, 54 insertions(+), 40 deletions(-) diff --git a/_install-and-configure/install-opensearch/debian.md b/_install-and-configure/install-opensearch/debian.md index 77b0473c71..161a9bc560 100644 --- a/_install-and-configure/install-opensearch/debian.md +++ b/_install-and-configure/install-opensearch/debian.md @@ -40,10 +40,10 @@ This guide assumes that you are comfortable working from the Linux command line 1. From the CLI, install using `dpkg`. ```bash # x64 - sudo dpkg -i opensearch-{{site.opensearch_version}}-linux-x64.deb + sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > dpkg -i opensearch-{{site.opensearch_version}}-linux-x64.deb # arm64 - sudo dpkg -i opensearch-{{site.opensearch_version}}-linux-arm64.deb + sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > dpkg -i opensearch-{{site.opensearch_version}}-linux-arm64.deb ``` 1. After the installation succeeds, enable OpenSearch as a service.
@@ -175,7 +175,7 @@ An OpenSearch node in its default configuration (with demo certificates and user 1. Send requests to the server to verify that OpenSearch is running. Note the use of the `--insecure` flag, which is required because the TLS certificates are self-signed. - Send a request to port 9200: ```bash - curl -X GET https://localhost:9200 -u 'admin:admin' --insecure + curl -X GET https://localhost:9200 -u 'admin:< Admin password >' --insecure ``` {% include copy.html %} @@ -201,7 +201,7 @@ An OpenSearch node in its default configuration (with demo certificates and user ``` - Query the plugins endpoint: ```bash - curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:admin' --insecure + curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:< Admin password >' --insecure ``` {% include copy.html %} diff --git a/_install-and-configure/install-opensearch/rpm.md b/_install-and-configure/install-opensearch/rpm.md index 7880e44d32..ae5c806262 100644 --- a/_install-and-configure/install-opensearch/rpm.md +++ b/_install-and-configure/install-opensearch/rpm.md @@ -46,16 +46,16 @@ This guide assumes that you are comfortable working from the Linux command line 1. From the CLI, you can install the package with `rpm` or `yum`. ```bash # Install the x64 package using yum. - sudo yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm + sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm # Install the x64 package using rpm. - sudo rpm -ivh opensearch-{{site.opensearch_version}}-linux-x64.rpm + sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > rpm -ivh opensearch-{{site.opensearch_version}}-linux-x64.rpm # Install the arm64 package using yum. 
- sudo yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm + sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > yum install opensearch-{{site.opensearch_version}}-linux-arm64.rpm # Install the arm64 package using rpm. - sudo rpm -ivh opensearch-{{site.opensearch_version}}-linux-x64.rpm + sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > rpm -ivh opensearch-{{site.opensearch_version}}-linux-arm64.rpm ``` 1. After the installation succeeds, enable OpenSearch as a service. ```bash @@ -147,7 +147,7 @@ An OpenSearch node in its default configuration (with demo certificates and user 1. Send requests to the server to verify that OpenSearch is running. Note the use of the `--insecure` flag, which is required because the TLS certificates are self-signed. - Send a request to port 9200: ```bash - curl -X GET https://localhost:9200 -u 'admin:admin' --insecure + curl -X GET https://localhost:9200 -u 'admin:< Admin password >' --insecure ``` {% include copy.html %} @@ -173,7 +173,7 @@ An OpenSearch node in its default configuration (with demo certificates and user ``` - Query the plugins endpoint: ```bash - curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:< Admin password >' --insecure + curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:< Admin password >' --insecure ``` {% include copy.html %} diff --git a/_install-and-configure/install-opensearch/tar.md b/_install-and-configure/install-opensearch/tar.md index c6edb51491..42832e51cd 100644 --- a/_install-and-configure/install-opensearch/tar.md +++ b/_install-and-configure/install-opensearch/tar.md @@ -94,6 +94,12 @@ An OpenSearch node configured by the demo security script is not suitable for a ``` {% include copy.html %} +1. Set the initial admin password using the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` environment variable. + ```bash + export OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > + ``` + {% include copy.html %} + 1.
Run the OpenSearch startup script with the security demo configuration. ```bash ./opensearch-tar-install.sh @@ -103,7 +109,7 @@ An OpenSearch node configured by the demo security script is not suitable for a 1. Open another terminal session and send requests to the server to verify that OpenSearch is running. Note the use of the `--insecure` flag, which is required because the TLS certificates are self-signed. - Send a request to port 9200: ```bash - curl -X GET https://localhost:9200 -u 'admin:admin' --insecure + curl -X GET https://localhost:9200 -u 'admin:< Admin password >' --insecure ``` {% include copy.html %} @@ -129,7 +135,7 @@ An OpenSearch node configured by the demo security script is not suitable for a ``` - Query the plugins endpoint: ```bash - curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:admin' --insecure + curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:< Admin password >' --insecure ``` {% include copy.html %} diff --git a/_install-and-configure/install-opensearch/windows.md b/_install-and-configure/install-opensearch/windows.md index b945c0e049..7223333d6a 100644 --- a/_install-and-configure/install-opensearch/windows.md +++ b/_install-and-configure/install-opensearch/windows.md @@ -65,6 +65,12 @@ An OpenSearch node in its default configuration (with demo certificates and user ``` {% include copy.html %} + 1. Set the initial admin password via the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` environment variable. + ```bat + set OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > + ``` + {% include copy.html %} + 1. Run the batch script. ```bat .\opensearch-windows-install.bat @@ -74,7 +80,7 @@ An OpenSearch node in its default configuration (with demo certificates and user 1. Open a new command prompt and send requests to the server to verify that OpenSearch is running. Note the use of the `--insecure` flag, which is required because the TLS certificates are self-signed. 
- Send a request to port 9200: ```bat - curl.exe -X GET https://localhost:9200 -u "admin:admin" --insecure + curl.exe -X GET https://localhost:9200 -u "admin:< Admin password >" --insecure ``` {% include copy.html %} @@ -100,7 +106,7 @@ An OpenSearch node in its default configuration (with demo certificates and user ``` - Query the plugins endpoint: ```bat - curl.exe -X GET https://localhost:9200/_cat/plugins?v -u "admin:admin" --insecure + curl.exe -X GET https://localhost:9200/_cat/plugins?v -u "admin:< Admin password >" --insecure ``` {% include copy.html %} diff --git a/_monitoring-your-cluster/pa/index.md b/_monitoring-your-cluster/pa/index.md index e88831ba4e..4d481a1f70 100644 --- a/_monitoring-your-cluster/pa/index.md +++ b/_monitoring-your-cluster/pa/index.md @@ -245,7 +245,7 @@ curl -XPOST http://localhost:9200/_plugins/_performanceanalyzer/rca/cluster/conf If you encounter the `curl: (52) Empty reply from server` response, run the following command to enable RCA: ```bash -curl -XPOST https://localhost:9200/_plugins/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:admin' -k +curl -XPOST https://localhost:9200/_plugins/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:< Admin password >' -k ``` ### Example API query and response diff --git a/_observing-your-data/log-ingestion.md b/_observing-your-data/log-ingestion.md index 751f538c3c..2644da0594 100644 --- a/_observing-your-data/log-ingestion.md +++ b/_observing-your-data/log-ingestion.md @@ -63,7 +63,7 @@ This should result in a single document being written to the OpenSearch cluster Run the following command to see one of the raw documents in the OpenSearch cluster: ```bash -curl -X GET -u 'admin:admin' -k 'https://localhost:9200/apache_logs/_search?pretty&size=1' +curl -X GET -u 'admin:< Admin password >' -k 'https://localhost:9200/apache_logs/_search?pretty&size=1' ``` The response 
should show the parsed log data: diff --git a/_observing-your-data/trace/getting-started.md b/_observing-your-data/trace/getting-started.md index b146b42a0e..cc2695be48 100644 --- a/_observing-your-data/trace/getting-started.md +++ b/_observing-your-data/trace/getting-started.md @@ -76,7 +76,7 @@ node-0.example.com | [2020-11-19T16:29:55,267][INFO ][o.e.c.m.MetadataMappingSe In a new terminal window, run the following command to see one of the raw documents in the OpenSearch cluster: ```bash -curl -X GET -u 'admin:admin' -k 'https://localhost:9200/otel-v1-apm-span-000001/_search?pretty&size=1' +curl -X GET -u 'admin:< Admin password >' -k 'https://localhost:9200/otel-v1-apm-span-000001/_search?pretty&size=1' ``` Navigate to `http://localhost:5601` in a web browser and choose **Trace Analytics**. You can see the results of your single click in the Jaeger HotROD web interface: the number of traces per API and HTTP method, latency trends, a color-coded map of the service architecture, and a list of trace IDs that you can use to drill down on individual operations. diff --git a/_reporting/rep-cli-env-var.md b/_reporting/rep-cli-env-var.md index a4e079501d..c12a5b5a71 100644 --- a/_reporting/rep-cli-env-var.md +++ b/_reporting/rep-cli-env-var.md @@ -30,7 +30,7 @@ Values from the command line argument have higher priority than the environment The following command requests a report with basic authentication in PNG format: ``` -opensearch-reporting-cli --url https://localhost:5601/app/dashboards#/view/7adfa750-4c81-11e8-b3d7-01146121b73d --format png --auth basic --credentials admin:admin +opensearch-reporting-cli --url https://localhost:5601/app/dashboards#/view/7adfa750-4c81-11e8-b3d7-01146121b73d --format png --auth basic --credentials admin:< Admin password > ``` Upon success, the report will download to the current directory. 
diff --git a/_search-plugins/sql/sql/index.md b/_search-plugins/sql/sql/index.md index 9a466902ff..c2279cbf46 100644 --- a/_search-plugins/sql/sql/index.md +++ b/_search-plugins/sql/sql/index.md @@ -61,7 +61,7 @@ POST _plugins/_sql To run the preceding query in the command line, use the [curl](https://curl.haxx.se/) command: ```bash -curl -XPOST https://localhost:9200/_plugins/_sql -u 'admin:admin' -k -H 'Content-Type: application/json' -d '{"query": "SELECT * FROM my-index* LIMIT 50"}' +curl -XPOST https://localhost:9200/_plugins/_sql -u 'admin:< Admin password >' -k -H 'Content-Type: application/json' -d '{"query": "SELECT * FROM my-index* LIMIT 50"}' ``` {% include copy.html %} diff --git a/_security/access-control/cross-cluster-search.md b/_security/access-control/cross-cluster-search.md index 182803da5d..e1f8aef879 100644 --- a/_security/access-control/cross-cluster-search.md +++ b/_security/access-control/cross-cluster-search.md @@ -77,6 +77,7 @@ services: - discovery.type=single-node - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM + - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >" # The initial admin password used by the demo configuration ulimits: memlock: soft: -1 @@ -97,6 +98,7 @@ services: - discovery.type=single-node - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM + - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >" # The initial admin password used by the demo configuration ulimits: memlock: soft: -1 @@ -120,13 +122,13 @@ networks: After the clusters start, verify the names of each: ```json -curl -XGET -u 'admin:admin' -k 'https://localhost:9200' +curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9200' 
{ "cluster_name" : "opensearch-ccs-cluster1", ... } -curl -XGET -u 'admin:admin' -k 'https://localhost:9250' +curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9250' { "cluster_name" : "opensearch-ccs-cluster2", ... @@ -154,7 +156,7 @@ docker inspect --format='{% raw %}{{range .NetworkSettings.Networks}}{{.IPAddres On the coordinating cluster, add the remote cluster name and the IP address (with port 9300) for each "seed node." In this case, you only have one seed node: ```json -curl -k -XPUT -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9250/_cluster/settings' -d ' +curl -k -XPUT -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9250/_cluster/settings' -d ' { "persistent": { "cluster.remote": { @@ -169,13 +171,13 @@ curl -k -XPUT -H 'Content-Type: application/json' -u 'admin:admin' 'https://loca On the remote cluster, index a document: ```bash -curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9200/books/_doc/1' -d '{"Dracula": "Bram Stoker"}' +curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/books/_doc/1' -d '{"Dracula": "Bram Stoker"}' ``` At this point, cross-cluster search works. You can test it using the `admin` user: ```bash -curl -XGET -k -u 'admin:admin' 'https://localhost:9250/opensearch-ccs-cluster1:books/_search?pretty' +curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9250/opensearch-ccs-cluster1:books/_search?pretty' { ... 
"hits": [{ @@ -192,8 +194,8 @@ curl -XGET -k -u 'admin:admin' 'https://localhost:9250/opensearch-ccs-cluster1:b To continue testing, create a new user on both clusters: ```bash -curl -XPUT -k -u 'admin:admin' 'https://localhost:9200/_plugins/_security/api/internalusers/booksuser' -H 'Content-Type: application/json' -d '{"password":"password"}' -curl -XPUT -k -u 'admin:admin' 'https://localhost:9250/_plugins/_security/api/internalusers/booksuser' -H 'Content-Type: application/json' -d '{"password":"password"}' +curl -XPUT -k -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_security/api/internalusers/booksuser' -H 'Content-Type: application/json' -d '{"password":"password"}' +curl -XPUT -k -u 'admin:< Admin password >' 'https://localhost:9250/_plugins/_security/api/internalusers/booksuser' -H 'Content-Type: application/json' -d '{"password":"password"}' ``` Then run the same search as before with `booksuser`: @@ -218,8 +220,8 @@ curl -XGET -k -u booksuser:password 'https://localhost:9250/opensearch-ccs-clust Note the permissions error. 
On the remote cluster, create a role with the appropriate permissions, and map `booksuser` to that role: ```bash -curl -XPUT -k -u 'admin:admin' -H 'Content-Type: application/json' 'https://localhost:9200/_plugins/_security/api/roles/booksrole' -d '{"index_permissions":[{"index_patterns":["books"],"allowed_actions":["indices:admin/shards/search_shards","indices:data/read/search"]}]}' -curl -XPUT -k -u 'admin:admin' -H 'Content-Type: application/json' 'https://localhost:9200/_plugins/_security/api/rolesmapping/booksrole' -d '{"users" : ["booksuser"]}' +curl -XPUT -k -u 'admin:< Admin password >' -H 'Content-Type: application/json' 'https://localhost:9200/_plugins/_security/api/roles/booksrole' -d '{"index_permissions":[{"index_patterns":["books"],"allowed_actions":["indices:admin/shards/search_shards","indices:data/read/search"]}]}' +curl -XPUT -k -u 'admin:< Admin password >' -H 'Content-Type: application/json' 'https://localhost:9200/_plugins/_security/api/rolesmapping/booksrole' -d '{"users" : ["booksuser"]}' ``` Both clusters must have the user, but only the remote cluster needs the role and mapping; in this case, the coordinating cluster handles authentication (i.e. "Does this request include valid user credentials?"), and the remote cluster handles authorization (i.e. "Can this user access this data?"). diff --git a/_security/access-control/impersonation.md b/_security/access-control/impersonation.md index 3333e9824e..c9d7fbd112 100644 --- a/_security/access-control/impersonation.md +++ b/_security/access-control/impersonation.md @@ -47,5 +47,5 @@ plugins.security.authcz.impersonation_dn: To impersonate another user, submit a request to the system with the HTTP header `opendistro_security_impersonate_as` set to the name of the user to be impersonated. 
A good test is to make a GET request to the `_plugins/_security/authinfo` URI: ```bash -curl -XGET -u 'admin:admin' -k -H "opendistro_security_impersonate_as: user_1" https://localhost:9200/_plugins/_security/authinfo?pretty +curl -XGET -u 'admin:< Admin password >' -k -H "opendistro_security_impersonate_as: user_1" https://localhost:9200/_plugins/_security/authinfo?pretty ``` diff --git a/_troubleshoot/index.md b/_troubleshoot/index.md index f2280d0462..52240de1a0 100644 --- a/_troubleshoot/index.md +++ b/_troubleshoot/index.md @@ -28,7 +28,7 @@ If you run legacy Kibana OSS scripts against OpenSearch Dashboards---for example In this case, your scripts likely include the `"kbn-xsrf: true"` header. Switch it to the `osd-xsrf: true` header: ``` -curl -XPOST -u 'admin:admin' 'https://DASHBOARDS_ENDPOINT/api/saved_objects/_import' -H 'osd-xsrf:true' --form file=@export.ndjson +curl -XPOST -u 'admin:< Admin password >' 'https://DASHBOARDS_ENDPOINT/api/saved_objects/_import' -H 'osd-xsrf:true' --form file=@export.ndjson ``` diff --git a/_tuning-your-cluster/availability-and-recovery/remote-store/index.md b/_tuning-your-cluster/availability-and-recovery/remote-store/index.md index fc6643955a..8850874336 100644 --- a/_tuning-your-cluster/availability-and-recovery/remote-store/index.md +++ b/_tuning-your-cluster/availability-and-recovery/remote-store/index.md @@ -86,7 +86,7 @@ curl -X POST "https://localhost:9200/_remotestore/_restore" -H 'Content-Type: ap **Restore all shards of a given index** ```bash -curl -X POST "https://localhost:9200/_remotestore/_restore?restore_all_shards=true" -ku admin:admin -H 'Content-Type: application/json' -d' +curl -X POST "https://localhost:9200/_remotestore/_restore?restore_all_shards=true" -ku admin:< Admin password > -H 'Content-Type: application/json' -d' { "indices": ["my-index"] } diff --git a/_tuning-your-cluster/index.md b/_tuning-your-cluster/index.md index 67690eb441..a1ca603752 100644 --- a/_tuning-your-cluster/index.md +++ 
b/_tuning-your-cluster/index.md @@ -175,7 +175,7 @@ less /var/log/opensearch/opensearch-cluster.log Perform the following `_cat` query on any node to see all the nodes formed as a cluster: ```bash -curl -XGET https://:9200/_cat/nodes?v -u 'admin:admin' --insecure +curl -XGET https://:9200/_cat/nodes?v -u 'admin:< Admin password >' --insecure ``` ``` diff --git a/_tuning-your-cluster/replication-plugin/auto-follow.md b/_tuning-your-cluster/replication-plugin/auto-follow.md index fb94622727..66390303bc 100644 --- a/_tuning-your-cluster/replication-plugin/auto-follow.md +++ b/_tuning-your-cluster/replication-plugin/auto-follow.md @@ -28,7 +28,7 @@ Replication rules are a collection of patterns that you create against a single Create a replication rule on the follower cluster: ```bash -curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9200/_plugins/_replication/_autofollow?pretty' -d ' +curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/_autofollow?pretty' -d ' { "leader_alias" : "my-connection-alias", "name": "my-replication-rule", @@ -46,13 +46,13 @@ If the Security plugin is disabled, you can leave out the `use_roles` parameter. To test the rule, create a matching index on the leader cluster: ```bash -curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9201/movies-0001?pretty' +curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9201/movies-0001?pretty' ``` And confirm its replica shows up on the follower cluster: ```bash -curl -XGET -u 'admin:admin' -k 'https://localhost:9200/_cat/indices?v' +curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9200/_cat/indices?v' ``` It might take several seconds for the index to appear. 
@@ -67,7 +67,7 @@ yellow open movies-0001 kHOxYYHxRMeszLjTD9rvSQ 1 1 0 To retrieve a list of existing replication rules that are configured on a cluster, send the following request: ```bash -curl -XGET -u 'admin:admin' -k 'https://localhost:9200/_plugins/_replication/autofollow_stats' +curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9200/_plugins/_replication/autofollow_stats' { "num_success_start_replication": 1, @@ -96,7 +96,7 @@ curl -XGET -u 'admin:admin' -k 'https://localhost:9200/_plugins/_replication/aut To delete a replication rule, send the following request to the follower cluster: ```bash -curl -XDELETE -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9200/_plugins/_replication/_autofollow?pretty' -d ' +curl -XDELETE -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/_autofollow?pretty' -d ' { "leader_alias" : "my-conection-alias", "name": "my-replication-rule" diff --git a/_tuning-your-cluster/replication-plugin/getting-started.md b/_tuning-your-cluster/replication-plugin/getting-started.md index 2b7ea1d8e6..9b956a9047 100644 --- a/_tuning-your-cluster/replication-plugin/getting-started.md +++ b/_tuning-your-cluster/replication-plugin/getting-started.md @@ -32,7 +32,7 @@ In addition, verify and add the distinguished names (DNs) of each follower clust First, get the node's DN from each follower cluster: ```bash -curl -XGET -k -u 'admin:admin' 'https://localhost:9200/_opendistro/_security/api/ssl/certs?pretty' +curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_opendistro/_security/api/ssl/certs?pretty' { "transport_certificates_list": [ diff --git a/_upgrade-to/upgrade-to.md b/_upgrade-to/upgrade-to.md index 3316072a07..462937cf5f 100644 --- a/_upgrade-to/upgrade-to.md +++ b/_upgrade-to/upgrade-to.md @@ -87,7 +87,7 @@ If you are migrating an Open Distro for Elasticsearch cluster, we recommend firs # Elasticsearch OSS curl -XGET 
'localhost:9200/_nodes/_all?pretty=true' # Open Distro for Elasticsearch with Security plugin enabled - curl -XGET 'https://localhost:9200/_nodes/_all?pretty=true' -u 'admin:admin' -k + curl -XGET 'https://localhost:9200/_nodes/_all?pretty=true' -u 'admin:< Admin password >' -k ``` Specifically, check the `nodes..version` portion of the response. Also check `_cat/indices?v` for a green status on all indexes. @@ -169,7 +169,7 @@ If you are migrating an Open Distro for Elasticsearch cluster, we recommend firs # Security plugin disabled curl -XGET 'localhost:9200/_nodes/_all?pretty=true' # Security plugin enabled - curl -XGET -k -u 'admin:admin' 'https://localhost:9200/_nodes/_all?pretty=true' + curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_nodes/_all?pretty=true' ``` Specifically, check the `nodes..version` portion of the response. Also check `_cat/indices?v` for a green status on all indexes. From 2c05e394648ed9ff5858b3296762f7d3ee2de431 Mon Sep 17 00:00:00 2001 From: Derek Ho Date: Fri, 15 Dec 2023 12:28:20 -0500 Subject: [PATCH 2/8] Update all references except for helm Signed-off-by: Derek Ho --- _api-reference/tasks.md | 4 +-- _benchmark/quickstart.md | 2 +- _clients/javascript/index.md | 4 +-- .../install-opensearch/docker.md | 12 ++++--- .../appendix/rolling-upgrade-lab.md | 34 +++++++++---------- .../replication-plugin/getting-started.md | 26 +++++++------- quickstart.md | 14 ++++---- 7 files changed, 49 insertions(+), 47 deletions(-) diff --git a/_api-reference/tasks.md b/_api-reference/tasks.md index 19ef373806..8ca34d15f4 100644 --- a/_api-reference/tasks.md +++ b/_api-reference/tasks.md @@ -267,7 +267,7 @@ To associate requests with tasks for better tracking, you can provide a `X-Opaqu Usage: ```bash -curl -i -H "X-Opaque-Id: 111111" "https://localhost:9200/_tasks" -u 'admin:admin' --insecure +curl -i -H "X-Opaque-Id: 111111" "https://localhost:9200/_tasks" -u 'admin:< Admin password >' --insecure ``` {% include copy.html %} @@ 
-326,6 +326,6 @@ content-length: 768 This operation supports the same parameters as the `tasks` operation. The following example shows how you can associate `X-Opaque-Id` with specific tasks: ```bash -curl -i -H "X-Opaque-Id: 123456" "https://localhost:9200/_tasks?nodes=opensearch-node1" -u 'admin:admin' --insecure +curl -i -H "X-Opaque-Id: 123456" "https://localhost:9200/_tasks?nodes=opensearch-node1" -u 'admin:< Admin password >' --insecure ``` {% include copy.html %} diff --git a/_benchmark/quickstart.md b/_benchmark/quickstart.md index 52415cb608..4598837e03 100644 --- a/_benchmark/quickstart.md +++ b/_benchmark/quickstart.md @@ -31,7 +31,7 @@ After installation, you can verify OpenSearch is running by going to `localhost: Use the following command to verify OpenSearch is running with SSL certificate checks disabled: ```bash -curl -k -u admin:admin https://localhost:9200 # the "-k" option skips SSL certificate checks +curl -k -u admin:< Admin password > https://localhost:9200 # the "-k" option skips SSL certificate checks { "name" : "147ddae31bf8.opensearch.org", diff --git a/_clients/javascript/index.md b/_clients/javascript/index.md index 9adcd65be7..9cafb64b6d 100644 --- a/_clients/javascript/index.md +++ b/_clients/javascript/index.md @@ -48,7 +48,7 @@ To connect to the default OpenSearch host, create a client object with the addre var host = "localhost"; var protocol = "https"; var port = 9200; -var auth = "admin:admin"; // For testing only. Don't store credentials in code. +var auth = "admin:< Admin password >"; // For testing only. Don't store credentials in code. var ca_certs_path = "/full/path/to/root-ca.pem"; // Optional client certificates if you don't want to use HTTP basic authentication. @@ -303,7 +303,7 @@ The following sample program creates a client, adds an index with non-default se var host = "localhost"; var protocol = "https"; var port = 9200; -var auth = "admin:admin"; // For testing only. Don't store credentials in code. 
+var auth = "admin:< Admin password >"; // For testing only. Don't store credentials in code. var ca_certs_path = "/full/path/to/root-ca.pem"; // Optional client certificates if you don't want to use HTTP basic authentication. diff --git a/_install-and-configure/install-opensearch/docker.md b/_install-and-configure/install-opensearch/docker.md index 48c9a5f215..bc6c1019f4 100644 --- a/_install-and-configure/install-opensearch/docker.md +++ b/_install-and-configure/install-opensearch/docker.md @@ -85,14 +85,14 @@ To download a specific version of OpenSearch or OpenSearch Dashboards other than Before continuing, you should verify that Docker is working correctly by deploying OpenSearch in a single container. -1. Run the following command: +1. Run the following command, passing in your own admin password: ```bash - # This command maps ports 9200 and 9600, sets the discovery type to "single-node" and requests the newest image of OpenSearch - docker run -d -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" opensearchproject/opensearch:latest + # This command maps ports 9200 and 9600, sets the discovery type to "single-node", sets the initial admin password to the value you provide, and requests the newest image of OpenSearch + docker run -d -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" -e "OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >" opensearchproject/opensearch:latest ``` -1. Send a request to port 9200. The default username and password are `admin`. +1. Send a request to port 9200. The default username is `admin` and the password is the value you set for `OPENSEARCH_INITIAL_ADMIN_PASSWORD`.
```bash - curl https://localhost:9200 -ku 'admin:admin' + curl https://localhost:9200 -ku 'admin:< Admin password >' ``` {% include copy.html %} @@ -166,6 +166,7 @@ services: - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 # Nodes eligible to serve as cluster manager - bootstrap.memory_lock=true # Disable JVM heap memory swapping - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM + - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >" # Set the password of the admin user for the demo configuration ulimits: memlock: soft: -1 # Set memlock to unlimited (no soft or hard limit) @@ -190,6 +191,7 @@ services: - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 - bootstrap.memory_lock=true - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" + - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >" ulimits: memlock: soft: -1 diff --git a/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md b/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md index a32b4d2692..923d9cad30 100644 --- a/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md +++ b/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md @@ -125,7 +125,7 @@ After selecting a host, you can begin the lab: 1. Press `Ctrl+C` to stop following container logs and return to the command prompt. 1. Use cURL to query the OpenSearch REST API. In the following command, `os-node-01` is queried by sending the request to host port `9201`, which is mapped to port `9200` on the container: ```bash - curl -s "https://localhost:9201" -ku admin:admin + curl -s "https://localhost:9201" -ku admin:< Admin password > ``` {% include copy.html %}

Example response

@@ -177,7 +177,7 @@ This section can be broken down into two parts: curl -H "Content-Type: application/x-ndjson" \ -X PUT "https://localhost:9201/ecommerce?pretty" \ --data-binary "@ecommerce-field_mappings.json" \ - -ku admin:admin + -ku admin:< Admin password > ``` {% include copy.html %}

Example response

@@ -193,7 +193,7 @@ This section can be broken down into two parts: curl -H "Content-Type: application/x-ndjson" \ -X PUT "https://localhost:9201/ecommerce/_bulk?pretty" \ --data-binary "@ecommerce.json" \ - -ku admin:admin + -ku admin:< Admin password > ``` {% include copy.html %}

Example response (truncated)

@@ -226,7 +226,7 @@ This section can be broken down into two parts: curl -H 'Content-Type: application/json' \ -X GET "https://localhost:9201/ecommerce/_search?pretty=true&filter_path=hits.total" \ -d'{"query":{"match":{"customer_first_name":"Sonya"}}}' \ - -ku admin:admin + -ku admin:< Admin password > ``` {% include copy.html %}

Example response

@@ -271,7 +271,7 @@ In this section you will be:
    curl -H 'Content-Type: application/json' \
      -X PUT "https://localhost:9201/_snapshot/snapshot-repo?pretty" \
      -d '{"type":"fs","settings":{"location":"/usr/share/opensearch/snapshots"}}' \
-     -ku admin:admin
+     -ku admin:< Admin password >
    ```
    {% include copy.html %}

Example response

@@ -284,7 +284,7 @@ In this section you will be:
    ```bash
    curl -H 'Content-Type: application/json' \
      -X POST "https://localhost:9201/_snapshot/snapshot-repo/_verify?timeout=0s&master_timeout=50s&pretty" \
-     -ku admin:admin
+     -ku admin:< Admin password >
    ```
    {% include copy.html %}

Example response

@@ -315,7 +315,7 @@ Snapshots are backups of a cluster’s indexes and state. See [Snapshots]({{site
    ```bash
    curl -H 'Content-Type: application/json' \
      -X PUT "https://localhost:9201/_snapshot/snapshot-repo/cluster-snapshot-v137?wait_for_completion=true&pretty" \
-     -ku admin:admin
+     -ku admin:< Admin password >
    ```
    {% include copy.html %}

Example response

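Every lab command above repeats the `-ku admin:< Admin password >` credentials inline. A minimal sketch of one way to set them once and reuse them — assuming a shell variable named `OPENSEARCH_PASSWORD`, which this patch does not define, with a hypothetical fallback value:

```shell
# Sketch: build the credential string once and reuse it across cURL calls.
# OPENSEARCH_PASSWORD is an assumed variable name; the fallback value is a
# placeholder, not a recommendation.
OPENSEARCH_PASSWORD="${OPENSEARCH_PASSWORD:-myStrongPassword123!}"
auth="admin:${OPENSEARCH_PASSWORD}"
# Each lab command could then be written as, for example:
#   curl -s -ku "$auth" "https://localhost:9201/_snapshot/snapshot-repo?pretty"
echo "$auth"
```

Keeping the password in a variable also makes it easier to rotate it in one place when rerunning the lab.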
@@ -448,7 +448,7 @@ Some steps included in this section, like disabling shard replication and flushi
    curl -H 'Content-type: application/json' \
      -X PUT "https://localhost:9201/_cluster/settings?pretty" \
      -d'{"persistent":{"cluster.routing.allocation.enable":"primaries"}}' \
-     -ku admin:admin
+     -ku admin:< Admin password >
    ```
    {% include copy.html %}

Example response

@@ -469,7 +469,7 @@ Some steps included in this section, like disabling shard replication and flushi
    ```
 1. Perform a flush operation on the cluster to commit transaction log entries to the Lucene index:
    ```bash
-   curl -X POST "https://localhost:9201/_flush?pretty" -ku admin:admin
+   curl -X POST "https://localhost:9201/_flush?pretty" -ku admin:< Admin password >
    ```
    {% include copy.html %}

Example response

@@ -514,7 +514,7 @@ Some steps included in this section, like disabling shard replication and flushi
 1. **Optional**: Query the cluster to determine which node is acting as the cluster manager. You can run this command at any time during the process to see when a new cluster manager is elected:
    ```bash
    curl -s "https://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" \
-     -ku admin:admin | column -t
+     -ku admin:< Admin password > | column -t
    ```
    {% include copy.html %}

Example response

@@ -528,7 +528,7 @@ Some steps included in this section, like disabling shard replication and flushi
 1. **Optional**: Query the cluster to see how shard allocation changes as nodes are removed and replaced. You can run this command at any time during the process to see how shard statuses change:
    ```bash
    curl -s "https://localhost:9201/_cat/shards" \
-     -ku admin:admin
+     -ku admin:< Admin password >
    ```
    {% include copy.html %}

Example response

@@ -644,7 +644,7 @@ Some steps included in this section, like disabling shard replication and flushi
 1. Confirm that your cluster is running the new version:
    ```bash
    curl -s "https://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" \
-     -ku admin:admin | column -t
+     -ku admin:< Admin password > | column -t
    ```
    {% include copy.html %}

Example response

@@ -700,7 +700,7 @@ Some steps included in this section, like disabling shard replication and flushi
    curl -H 'Content-type: application/json' \
      -X PUT "https://localhost:9201/_cluster/settings?pretty" \
      -d'{"persistent":{"cluster.routing.allocation.enable":"all"}}' \
-     -ku admin:admin
+     -ku admin:< Admin password >
    ```
    {% include copy.html %}

Example response

@@ -735,7 +735,7 @@ For this cluster, post-upgrade validation steps can include verifying the follow
 1. Verify the current running version of your OpenSearch nodes:
    ```bash
    curl -s "https://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" \
-     -ku admin:admin | column -t
+     -ku admin:< Admin password > | column -t
    ```
    {% include copy.html %}

Example response

@@ -781,7 +781,7 @@ For this cluster, post-upgrade validation steps can include verifying the follow
 1. Query the [Cluster health]({{site.url}}{{site.baseurl}}/api-reference/cluster-api/cluster-health/) API endpoint to see information about the health of your cluster. You should see a status of `green`, which indicates that all primary and replica shards are allocated:
    ```bash
-   curl -s "https://localhost:9201/_cluster/health?pretty" -ku admin:admin
+   curl -s "https://localhost:9201/_cluster/health?pretty" -ku admin:< Admin password >
    ```
    {% include copy.html %}

Example response

@@ -808,7 +808,7 @@ For this cluster, post-upgrade validation steps can include verifying the follow
    ```
 1. Query the [CAT shards]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-shards/) API endpoint to see how shards are allocated after the cluster is upgraded:
    ```bash
-   curl -s "https://localhost:9201/_cat/shards" -ku admin:admin
+   curl -s "https://localhost:9201/_cat/shards" -ku admin:< Admin password >
    ```
    {% include copy.html %}

Example response

@@ -860,7 +860,7 @@ You need to query the ecommerce index again in order to confirm that the sample
    curl -H 'Content-Type: application/json' \
      -X GET "https://localhost:9201/ecommerce/_search?pretty=true&filter_path=hits.total" \
      -d'{"query":{"match":{"customer_first_name":"Sonya"}}}' \
-     -ku admin:admin
+     -ku admin:< Admin password >
    ```
    {% include copy.html %}

Example response

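Taken together, the hunks above replace the fixed `admin:admin` pair with a user-supplied password. A hedged sketch of the underlying rule (version parsing is deliberately simplified, `2.12.0` is just an example input, and 2.12 as the cutoff reflects the release in which `OPENSEARCH_INITIAL_ADMIN_PASSWORD` became required for the demo configuration):

```shell
# Sketch: pick demo credentials based on the target OpenSearch version.
# Before 2.12 the demo password was "admin"; from 2.12 onward it is the
# value supplied through OPENSEARCH_INITIAL_ADMIN_PASSWORD.
version="2.12.0"
major="${version%%.*}"
rest="${version#*.}"
minor="${rest%%.*}"
if [ "$major" -gt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -ge 12 ]; }; then
  # Fall back to a hypothetical placeholder if the variable is unset.
  password="${OPENSEARCH_INITIAL_ADMIN_PASSWORD:-myStrongPassword123!}"
else
  password="admin"
fi
echo "admin:${password}"
```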
diff --git a/_tuning-your-cluster/replication-plugin/getting-started.md b/_tuning-your-cluster/replication-plugin/getting-started.md
index 9b956a9047..57842fb038 100644
--- a/_tuning-your-cluster/replication-plugin/getting-started.md
+++ b/_tuning-your-cluster/replication-plugin/getting-started.md
@@ -110,13 +110,13 @@ networks:
 After the clusters start, verify the names of each:
 
 ```bash
-curl -XGET -u 'admin:admin' -k 'https://localhost:9201'
+curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9201'
 {
   "cluster_name" : "leader-cluster",
   ...
 }
 
-curl -XGET -u 'admin:admin' -k 'https://localhost:9200'
+curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9200'
 {
   "cluster_name" : "follower-cluster",
   ...
@@ -148,7 +148,7 @@ Cross-cluster replication follows a "pull" model, so most changes occur on the f
 On the follower cluster, add the IP address (with port 9300) for each seed node. Because this is a single-node cluster, you only have one seed node. Provide a descriptive name for the connection, which you'll use in the request to start replication:
 
 ```bash
-curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9200/_cluster/settings?pretty' -d '
+curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_cluster/settings?pretty' -d '
 {
   "persistent": {
     "cluster": {
@@ -167,13 +167,13 @@ curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://loca
 To get started, create an index called `leader-01` on the leader cluster:
 
 ```bash
-curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9201/leader-01?pretty'
+curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9201/leader-01?pretty'
 ```
 
 Then start replication from the follower cluster. In the request body, provide the connection name and leader index that you want to replicate, along with the security roles you want to use:
 
 ```bash
-curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9200/_plugins/_replication/follower-01/_start?pretty' -d '
+curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_start?pretty' -d '
 {
   "leader_alias": "my-connection-alias",
   "leader_index": "leader-01",
@@ -194,7 +194,7 @@ This command creates an identical read-only index named `follower-01` on the fol
 After replication starts, get the status:
 
 ```bash
-curl -XGET -k -u 'admin:admin' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty'
+curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty'
 
 {
   "status" : "SYNCING",
@@ -217,13 +217,13 @@ The leader and follower checkpoint values begin as negative numbers and reflect 
 To confirm that replication is actually happening, add a document to the leader index:
 
 ```bash
-curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9201/leader-01/_doc/1?pretty' -d '{"The Shining": "Stephen King"}'
+curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9201/leader-01/_doc/1?pretty' -d '{"The Shining": "Stephen King"}'
 ```
 
 Then validate the replicated content on the follower index:
 
 ```bash
-curl -XGET -k -u 'admin:admin' 'https://localhost:9200/follower-01/_search?pretty'
+curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/follower-01/_search?pretty'
 
 {
   ...
@@ -243,13 +243,13 @@ curl -XGET -k -u 'admin:admin' 'https://localhost:9200/follower-01/_search?prett
 You can temporarily pause replication of an index if you need to remediate issues or reduce load on the leader cluster:
 
 ```bash
-curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9200/_plugins/_replication/follower-01/_pause?pretty' -d '{}'
+curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_pause?pretty' -d '{}'
 ```
 
 To confirm that replication is paused, get the status:
 
 ```bash
-curl -XGET -k -u 'admin:admin' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty'
+curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty'
 
 {
   "status" : "PAUSED",
@@ -263,7 +263,7 @@ curl -XGET -k -u 'admin:admin' 'https://localhost:9200/_plugins/_replication/fol
 When you're done making changes, resume replication:
 
 ```bash
-curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9200/_plugins/_replication/follower-01/_resume?pretty' -d '{}'
+curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_resume?pretty' -d '{}'
 ```
 
 When replication resumes, the follower index picks up any changes that were made to the leader index while replication was paused.
@@ -275,7 +275,7 @@ Note that you can't resume replication after it's been paused for more than 12 h
 When you no longer need to replicate an index, terminate replication from the follower cluster:
 
 ```bash
-curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:admin' 'https://localhost:9200/_plugins/_replication/follower-01/_stop?pretty' -d '{}'
+curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_stop?pretty' -d '{}'
 ```
 
 When you stop replication, the follower index un-follows the leader and becomes a standard index that you can write to. You can't restart replication after stopping it.
@@ -283,7 +283,7 @@ When you stop replication, the follower index un-follows the leader and becomes 
 Get the status to confirm that the index is no longer being replicated:
 
 ```bash
-curl -XGET -k -u 'admin:admin' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty'
+curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty'
 
 {
   "status" : "REPLICATION NOT IN PROGRESS"
diff --git a/quickstart.md b/quickstart.md
index 3c24b11592..fa9f437833 100644
--- a/quickstart.md
+++ b/quickstart.md
@@ -52,9 +52,9 @@ You'll need a special file, called a Compose file, that Docker Compose uses to d
    opensearch-node1  "./opensearch-docker…"  opensearch-node1  running  0.0.0.0:9200->9200/tcp, 9300/tcp, 0.0.0.0:9600->9600/tcp, 9650/tcp
    opensearch-node2  "./opensearch-docker…"  opensearch-node2  running  9200/tcp, 9300/tcp, 9600/tcp, 9650/tcp
    ```
-1. Query the OpenSearch REST API to verify that the service is running. You should use `-k` (also written as `--insecure`) to disable host name checking because the default security configuration uses demo certificates. Use `-u` to pass the default username and password (`admin:admin`).
+1. Query the OpenSearch REST API to verify that the service is running. You should use `-k` (also written as `--insecure`) to disable host name checking because the default security configuration uses demo certificates. Use `-u` to pass the default username and password (`admin:< Admin password >`).
    ```bash
-   curl https://localhost:9200 -ku admin:admin
+   curl https://localhost:9200 -ku admin:< Admin password >
    ```
    Sample response:
    ```json
@@ -76,7 +76,7 @@ You'll need a special file, called a Compose file, that Docker Compose uses to d
      "tagline" : "The OpenSearch Project: https://opensearch.org/"
    }
    ```
-1. Explore OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the default password is `admin`.
+1. Explore OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the password is the one you specified in `docker-compose.yml` (`< Admin password >`).
 
 ## Create an index and field mappings using sample data
 
@@ -100,18 +100,18 @@ Create an index and define field mappings using a dataset provided by the OpenSe
    ```
 1. Define the field mappings with the mapping file.
    ```bash
-   curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce" -ku admin:admin --data-binary "@ecommerce-field_mappings.json"
+   curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce" -ku admin:< Admin password > --data-binary "@ecommerce-field_mappings.json"
    ```
 1. Upload the index to the bulk API.
    ```bash
-   curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce/_bulk" -ku admin:admin --data-binary "@ecommerce.json"
+   curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce/_bulk" -ku admin:< Admin password > --data-binary "@ecommerce.json"
    ```
 1. Query the data using the search API. The following command submits a query that will return documents where `customer_first_name` is `Sonya`.
    ```bash
-   curl -H 'Content-Type: application/json' -X GET "https://localhost:9200/ecommerce/_search?pretty=true" -ku admin:admin -d' {"query":{"match":{"customer_first_name":"Sonya"}}}'
+   curl -H 'Content-Type: application/json' -X GET "https://localhost:9200/ecommerce/_search?pretty=true" -ku admin:< Admin password > -d' {"query":{"match":{"customer_first_name":"Sonya"}}}'
    ```
    Queries submitted to the OpenSearch REST API will generally return a flat JSON by default. For a human readable response body, use the query parameter `pretty=true`. For more information about `pretty` and other useful query parameters, see [Common REST parameters]({{site.url}}{{site.baseurl}}/opensearch/common-parameters/).
-1. Access OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the default password is `admin`.
+1. Access OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the default password is `< Admin password >`.
 1. On the top menu bar, go to **Management > Dev Tools**.
 1. In the left pane of the console, enter the following:
    ```json

From d4d1551d025036bf7d2eeb914151d9d2e61473e2 Mon Sep 17 00:00:00 2001
From: Derek Ho
Date: Fri, 15 Dec 2023 14:52:10 -0500
Subject: [PATCH 3/8] Update helm

Signed-off-by: Derek Ho
---
 _install-and-configure/install-dashboards/helm.md | 2 +-
 _install-and-configure/install-opensearch/helm.md | 8 +++++++-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/_install-and-configure/install-dashboards/helm.md b/_install-and-configure/install-dashboards/helm.md
index f9df5b137c..27d53190d1 100644
--- a/_install-and-configure/install-dashboards/helm.md
+++ b/_install-and-configure/install-dashboards/helm.md
@@ -34,7 +34,7 @@ Before you get started, you must first use [Helm to install OpenSearch]({{site.u
 Make sure that you can send requests to your OpenSearch pod:
 
 ```json
-$ curl -XGET https://localhost:9200 -u 'admin:admin' --insecure
+$ curl -XGET https://localhost:9200 -u 'admin:< Admin password >' --insecure
 {
   "name" : "opensearch-cluster-master-1",
   "cluster_name" : "opensearch-cluster",
diff --git a/_install-and-configure/install-opensearch/helm.md b/_install-and-configure/install-opensearch/helm.md
index b1df2da091..29cd13c335 100644
--- a/_install-and-configure/install-opensearch/helm.md
+++ b/_install-and-configure/install-opensearch/helm.md
@@ -91,7 +91,13 @@ You can also build the `opensearch-1.0.0.tgz` file manually:
    {% include copy.html %}
 
    The output shows you the specifications instantiated from the install.
-   To customize the deployment, pass in the values that you want to override with a custom YAML file:
+   To customize the deployment, pass in the values that you want to override with a custom YAML file.
+   Specifically, for the demo configuration to work, you need to pass in an initial admin password:
+
+   ```yml
+   - name: OPENSEARCH_INITIAL_ADMIN_PASSWORD
+     value: < Admin password >
+   ```
 
    ```bash
    helm install --values=customvalues.yaml opensearch-1.0.0.tgz

From 7606020fdfa8e284af7be40370aabcddfa62529e Mon Sep 17 00:00:00 2001
From: Darshit Chanpura
Date: Mon, 8 Jan 2024 15:58:39 -0500
Subject: [PATCH 4/8] Reverts changes made to _install-and-configure folder

Signed-off-by: Darshit Chanpura
---
 _install-and-configure/index.md          |  3 ++
 .../install-dashboards/helm.md           |  2 +-
 .../install-opensearch/debian.md         |  8 ++---
 .../install-opensearch/docker.md         | 12 +++----
 .../install-opensearch/helm.md           |  8 +----
 .../install-opensearch/index.md          |  7 ++++
 .../install-opensearch/rpm.md            | 12 +++----
 .../install-opensearch/tar.md            | 10 ++----
 .../install-opensearch/windows.md        | 10 ++----
 .../appendix/rolling-upgrade-lab.md      | 34 +++++++++----------
 10 files changed, 48 insertions(+), 58 deletions(-)

diff --git a/_install-and-configure/index.md b/_install-and-configure/index.md
index b43d9017e0..4fedacfdf1 100644
--- a/_install-and-configure/index.md
+++ b/_install-and-configure/index.md
@@ -5,6 +5,9 @@ nav_order: 1
 has_children: false
 has_toc: false
 nav_exclude: true
+permalink: /install-and-configure/
+redirect_from:
+  - /install-and-configure/index/
 ---
 
 # Install and upgrade OpenSearch
diff --git a/_install-and-configure/install-dashboards/helm.md b/_install-and-configure/install-dashboards/helm.md
index 27d53190d1..f9df5b137c 100644
--- a/_install-and-configure/install-dashboards/helm.md
+++ b/_install-and-configure/install-dashboards/helm.md
@@ -34,7 +34,7 @@ Before you get started, you must first use [Helm to install OpenSearch]({{site.u
 Make sure that you can send requests to your OpenSearch pod:
 
 ```json
-$ curl -XGET https://localhost:9200 -u 'admin:< Admin password >' --insecure
+$ curl -XGET https://localhost:9200 -u 'admin:admin' --insecure
 {
   "name" : "opensearch-cluster-master-1",
   "cluster_name" : "opensearch-cluster",
diff --git a/_install-and-configure/install-opensearch/debian.md b/_install-and-configure/install-opensearch/debian.md
index 161a9bc560..77b0473c71 100644
--- a/_install-and-configure/install-opensearch/debian.md
+++ b/_install-and-configure/install-opensearch/debian.md
@@ -40,10 +40,10 @@ This guide assumes that you are comfortable working from the Linux command line
 1. From the CLI, install using `dpkg`.
    ```bash
    # x64
-   sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > dpkg -i opensearch-{{site.opensearch_version}}-linux-x64.deb
+   sudo dpkg -i opensearch-{{site.opensearch_version}}-linux-x64.deb
 
    # arm64
-   sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password> dpkg -i opensearch-{{site.opensearch_version}}-linux-arm64.deb
+   sudo dpkg -i opensearch-{{site.opensearch_version}}-linux-arm64.deb
    ```
 1. After the installation succeeds, enable OpenSearch as a service.
@@ -175,7 +175,7 @@ An OpenSearch node in its default configuration (with demo certificates and user
 1. Send requests to the server to verify that OpenSearch is running. Note the use of the `--insecure` flag, which is required because the TLS certificates are self-signed.
    - Send a request to port 9200:
      ```bash
-     curl -X GET https://localhost:9200 -u 'admin:< Admin password >' --insecure
+     curl -X GET https://localhost:9200 -u 'admin:admin' --insecure
      ```
      {% include copy.html %}
@@ -201,7 +201,7 @@ An OpenSearch node in its default configuration (with demo certificates and user
      ```
    - Query the plugins endpoint:
      ```bash
-     curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:< Admin password >' --insecure
+     curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:admin' --insecure
      ```
      {% include copy.html %}
diff --git a/_install-and-configure/install-opensearch/docker.md b/_install-and-configure/install-opensearch/docker.md
index bc6c1019f4..48c9a5f215 100644
--- a/_install-and-configure/install-opensearch/docker.md
+++ b/_install-and-configure/install-opensearch/docker.md
@@ -85,14 +85,14 @@ To download a specific version of OpenSearch or OpenSearch Dashboards other than
 Before continuing, you should verify that Docker is working correctly by deploying OpenSearch in a single container.
 
-1. Run the following command and pass in your own custom admin password:
+1. Run the following command:
    ```bash
-   # This command maps ports 9200 and 9600, sets the discovery type to "single-node", sets the initial admin password according to user input and requests the newest image of OpenSearch
-   docker run -d -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" -e "OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >" opensearchproject/opensearch:latest
+   # This command maps ports 9200 and 9600, sets the discovery type to "single-node" and requests the newest image of OpenSearch
+   docker run -d -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" opensearchproject/opensearch:latest
    ```
-1. Send a request to port 9200. The default username is `admin` and the password is what you set for `OPENSEARCH_INITIAL_ADMIN_PASSWORD`.
+1. Send a request to port 9200. The default username and password are `admin`.
    ```bash
-   curl https://localhost:9200 -ku 'admin:< Admin password >'
+   curl https://localhost:9200 -ku 'admin:admin'
    ```
    {% include copy.html %}
@@ -166,7 +166,6 @@ services:
       - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 # Nodes eligible to serve as cluster manager
       - bootstrap.memory_lock=true # Disable JVM heap memory swapping
       - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
-      - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >" # Set the password of the admin user for the demo configuration
     ulimits:
       memlock:
         soft: -1 # Set memlock to unlimited (no soft or hard limit)
@@ -191,7 +190,6 @@ services:
       - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
       - bootstrap.memory_lock=true
       - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
-      - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >"
     ulimits:
       memlock:
         soft: -1
diff --git a/_install-and-configure/install-opensearch/helm.md b/_install-and-configure/install-opensearch/helm.md
index 29cd13c335..b1df2da091 100644
--- a/_install-and-configure/install-opensearch/helm.md
+++ b/_install-and-configure/install-opensearch/helm.md
@@ -91,13 +91,7 @@ You can also build the `opensearch-1.0.0.tgz` file manually:
    {% include copy.html %}
 
    The output shows you the specifications instantiated from the install.
-   To customize the deployment, pass in the values that you want to override with a custom YAML file.
-   Specifically, for the demo configuration to work, you need to pass in an initial admin password:
-
-   ```yml
-   - name: OPENSEARCH_INITIAL_ADMIN_PASSWORD
-     value: < Admin password >
-   ```
+   To customize the deployment, pass in the values that you want to override with a custom YAML file:
 
    ```bash
    helm install --values=customvalues.yaml opensearch-1.0.0.tgz
diff --git a/_install-and-configure/install-opensearch/index.md b/_install-and-configure/install-opensearch/index.md
index 404d43ecf4..fe94fc2347 100644
--- a/_install-and-configure/install-opensearch/index.md
+++ b/_install-and-configure/install-opensearch/index.md
@@ -77,6 +77,13 @@ vm.max_map_count=262144
 
 Then run `sudo sysctl -p` to reload.
 
+For Windows workloads, you can set `vm.max_map_count` by running the following commands:
+
+```bash
+wsl -d docker-desktop
+sysctl -w vm.max_map_count=262144
+```
+
 The [sample docker-compose.yml]({{site.url}}{{site.baseurl}}/install-and-configure/install-opensearch/docker/#sample-docker-composeyml) file also contains several key settings:
 
 - `bootstrap.memory_lock=true`
diff --git a/_install-and-configure/install-opensearch/rpm.md b/_install-and-configure/install-opensearch/rpm.md
index ae5c806262..7880e44d32 100644
--- a/_install-and-configure/install-opensearch/rpm.md
+++ b/_install-and-configure/install-opensearch/rpm.md
@@ -46,16 +46,16 @@ This guide assumes that you are comfortable working from the Linux command line
 1. From the CLI, you can install the package with `rpm` or `yum`.
    ```bash
    # Install the x64 package using yum.
-   sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm
+   sudo yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm
 
    # Install the x64 package using rpm.
-   sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > rpm -ivh opensearch-{{site.opensearch_version}}-linux-x64.rpm
+   sudo rpm -ivh opensearch-{{site.opensearch_version}}-linux-x64.rpm
 
    # Install the arm64 package using yum.
-   sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm
+   sudo yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm
 
    # Install the arm64 package using rpm.
-   sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password > rpm -ivh opensearch-{{site.opensearch_version}}-linux-x64.rpm
+   sudo rpm -ivh opensearch-{{site.opensearch_version}}-linux-x64.rpm
    ```
 1. After the installation succeeds, enable OpenSearch as a service.
    ```bash
@@ -147,7 +147,7 @@ An OpenSearch node in its default configuration (with demo certificates and user
 1. Send requests to the server to verify that OpenSearch is running. Note the use of the `--insecure` flag, which is required because the TLS certificates are self-signed.
    - Send a request to port 9200:
      ```bash
-     curl -X GET https://localhost:9200 -u 'admin:< Admin password >' --insecure
+     curl -X GET https://localhost:9200 -u 'admin:admin' --insecure
      ```
      {% include copy.html %}
@@ -173,7 +173,7 @@ An OpenSearch node in its default configuration (with demo certificates and user
      ```
    - Query the plugins endpoint:
      ```bash
-     curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:< Admin password >' --insecure
+     curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:admin' --insecure
      ```
      {% include copy.html %}
diff --git a/_install-and-configure/install-opensearch/tar.md b/_install-and-configure/install-opensearch/tar.md
index 42832e51cd..c6edb51491 100644
--- a/_install-and-configure/install-opensearch/tar.md
+++ b/_install-and-configure/install-opensearch/tar.md
@@ -94,12 +94,6 @@ An OpenSearch node configured by the demo security script is not suitable for a
    ```
    {% include copy.html %}
 
-1. Set an initial admin password variable through the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` environment variable.
-   ```bash
-   export OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >
-   ```
-   {% include copy.html %}
-
 1. Run the OpenSearch startup script with the security demo configuration.
    ```bash
    ./opensearch-tar-install.sh
@@ -109,7 +103,7 @@ An OpenSearch node configured by the demo security script is not suitable for a
 1. Open another terminal session and send requests to the server to verify that OpenSearch is running. Note the use of the `--insecure` flag, which is required because the TLS certificates are self-signed.
    - Send a request to port 9200:
      ```bash
-     curl -X GET https://localhost:9200 -u 'admin:< Admin password >' --insecure
+     curl -X GET https://localhost:9200 -u 'admin:admin' --insecure
      ```
      {% include copy.html %}
@@ -135,7 +129,7 @@ An OpenSearch node configured by the demo security script is not suitable for a
      ```
    - Query the plugins endpoint:
      ```bash
-     curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:< Admin password >' --insecure
+     curl -X GET https://localhost:9200/_cat/plugins?v -u 'admin:admin' --insecure
      ```
      {% include copy.html %}
diff --git a/_install-and-configure/install-opensearch/windows.md b/_install-and-configure/install-opensearch/windows.md
index 7223333d6a..b945c0e049 100644
--- a/_install-and-configure/install-opensearch/windows.md
+++ b/_install-and-configure/install-opensearch/windows.md
@@ -65,12 +65,6 @@ An OpenSearch node in its default configuration (with demo certificates and user
       ```
       {% include copy.html %}
 
-   1. Set the initial admin password via the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` environment variable.
-      ```bat
-      set OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >
-      ```
-      {% include copy.html %}
-
    1. Run the batch script.
       ```bat
       .\opensearch-windows-install.bat
@@ -80,7 +74,7 @@ An OpenSearch node in its default configuration (with demo certificates and user
 1. Open a new command prompt and send requests to the server to verify that OpenSearch is running. Note the use of the `--insecure` flag, which is required because the TLS certificates are self-signed.
    - Send a request to port 9200:
      ```bat
-     curl.exe -X GET https://localhost:9200 -u "admin:< Admin password >" --insecure
+     curl.exe -X GET https://localhost:9200 -u "admin:admin" --insecure
      ```
      {% include copy.html %}
@@ -106,7 +100,7 @@ An OpenSearch node in its default configuration (with demo certificates and user
      ```
    - Query the plugins endpoint:
      ```bat
-     curl.exe -X GET https://localhost:9200/_cat/plugins?v -u "admin:< Admin password >" --insecure
+     curl.exe -X GET https://localhost:9200/_cat/plugins?v -u "admin:admin" --insecure
      ```
      {% include copy.html %}
diff --git a/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md b/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md
index 923d9cad30..a32b4d2692 100644
--- a/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md
+++ b/_install-and-configure/upgrade-opensearch/appendix/rolling-upgrade-lab.md
@@ -125,7 +125,7 @@ After selecting a host, you can begin the lab:
 1. Press `Ctrl+C` to stop following container logs and return to the command prompt.
 1. Use cURL to query the OpenSearch REST API. In the following command, `os-node-01` is queried by sending the request to host port `9201`, which is mapped to port `9200` on the container:
    ```bash
-   curl -s "https://localhost:9201" -ku admin:< Admin password >
+   curl -s "https://localhost:9201" -ku admin:admin
    ```
    {% include copy.html %}

Example response

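The revert hunks restore `admin:admin` in the install pages, while the user-supplied password remains elsewhere. Where a custom password is used, the demo installer rejects weak values; the following is only a rough illustration of that kind of pre-flight check, not the actual validator, which applies its own stricter rules:

```shell
# Sketch: rough pre-flight check before exporting a candidate value as
# OPENSEARCH_INITIAL_ADMIN_PASSWORD. The value below is a hypothetical
# placeholder; the real demo configuration performs its own validation.
p='myStrongPassword123!'
ok=yes
[ "${#p}" -ge 8 ] || ok=no                 # minimum length
case "$p" in *[0-9]*) ;; *) ok=no ;; esac  # at least one digit
case "$p" in *[A-Z]*) ;; *) ok=no ;; esac  # at least one uppercase letter
case "$p" in *[a-z]*) ;; *) ok=no ;; esac  # at least one lowercase letter
echo "$ok"
```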
@@ -177,7 +177,7 @@ This section can be broken down into two parts:
    curl -H "Content-Type: application/x-ndjson" \
      -X PUT "https://localhost:9201/ecommerce?pretty" \
      --data-binary "@ecommerce-field_mappings.json" \
-     -ku admin:< Admin password >
+     -ku admin:admin
    ```
    {% include copy.html %}

Example response

@@ -193,7 +193,7 @@ This section can be broken down into two parts:
    curl -H "Content-Type: application/x-ndjson" \
      -X PUT "https://localhost:9201/ecommerce/_bulk?pretty" \
      --data-binary "@ecommerce.json" \
-     -ku admin:< Admin password >
+     -ku admin:admin
    ```
    {% include copy.html %}

Example response (truncated)

@@ -226,7 +226,7 @@ This section can be broken down into two parts:
    curl -H 'Content-Type: application/json' \
      -X GET "https://localhost:9201/ecommerce/_search?pretty=true&filter_path=hits.total" \
      -d'{"query":{"match":{"customer_first_name":"Sonya"}}}' \
-     -ku admin:< Admin password >
+     -ku admin:admin
    ```
    {% include copy.html %}

Example response

@@ -271,7 +271,7 @@ In this section you will be: curl -H 'Content-Type: application/json' \ -X PUT "https://localhost:9201/_snapshot/snapshot-repo?pretty" \ -d '{"type":"fs","settings":{"location":"/usr/share/opensearch/snapshots"}}' \ - -ku admin:< Admin password > + -ku admin:admin ``` {% include copy.html %}

Example response

@@ -284,7 +284,7 @@ In this section you will be: ```bash curl -H 'Content-Type: application/json' \ -X POST "https://localhost:9201/_snapshot/snapshot-repo/_verify?timeout=0s&master_timeout=50s&pretty" \ - -ku admin:< Admin password > + -ku admin:admin ``` {% include copy.html %}

Example response

@@ -315,7 +315,7 @@ Snapshots are backups of a cluster’s indexes and state. See [Snapshots]({{site ```bash curl -H 'Content-Type: application/json' \ -X PUT "https://localhost:9201/_snapshot/snapshot-repo/cluster-snapshot-v137?wait_for_completion=true&pretty" \ - -ku admin:< Admin password > + -ku admin:admin ``` {% include copy.html %}

Example response

@@ -448,7 +448,7 @@ Some steps included in this section, like disabling shard replication and flushi curl -H 'Content-type: application/json' \ -X PUT "https://localhost:9201/_cluster/settings?pretty" \ -d'{"persistent":{"cluster.routing.allocation.enable":"primaries"}}' \ - -ku admin:< Admin password > + -ku admin:admin ``` {% include copy.html %}
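When scripting this step, it can help to stage the settings body in a file so the JSON can be validated locally before it is sent. A sketch (the file path is illustrative):

```shell
# Stage the allocation setting toggled above in a file.
cat > /tmp/allocation.json <<'EOF'
{"persistent": {"cluster.routing.allocation.enable": "primaries"}}
EOF
# Validate the JSON and echo the value back before sending it.
python3 -c 'import json; print(json.load(open("/tmp/allocation.json"))["persistent"]["cluster.routing.allocation.enable"])'
# The live request would then reference the file (not run here):
#   curl -sk -H 'Content-type: application/json' -X PUT \
#     "https://localhost:9201/_cluster/settings?pretty" \
#     -d @/tmp/allocation.json -ku admin:admin
```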

Example response

@@ -469,7 +469,7 @@ Some steps included in this section, like disabling shard replication and flushi ``` 1. Perform a flush operation on the cluster to commit transaction log entries to the Lucene index: ```bash - curl -X POST "https://localhost:9201/_flush?pretty" -ku admin:< Admin password > + curl -X POST "https://localhost:9201/_flush?pretty" -ku admin:admin ``` {% include copy.html %}

Example response

@@ -514,7 +514,7 @@ Some steps included in this section, like disabling shard replication and flushi 1. **Optional**: Query the cluster to determine which node is acting as the cluster manager. You can run this command at any time during the process to see when a new cluster manager is elected: ```bash curl -s "https://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" \ - -ku admin:< Admin password > | column -t + -ku admin:admin | column -t ``` {% include copy.html %}

Example response

@@ -528,7 +528,7 @@ Some steps included in this section, like disabling shard replication and flushi 1. **Optional**: Query the cluster to see how shard allocation changes as nodes are removed and replaced. You can run this command at any time during the process to see how shard statuses change: ```bash curl -s "https://localhost:9201/_cat/shards" \ - -ku admin:< Admin password > + -ku admin:admin ``` {% include copy.html %}

Example response

@@ -644,7 +644,7 @@ Some steps included in this section, like disabling shard replication and flushi 1. Confirm that your cluster is running the new version: ```bash curl -s "https://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" \ - -ku admin:< Admin password > | column -t + -ku admin:admin | column -t ``` {% include copy.html %}
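The version check above can also be scripted: version is the second column of the `_cat/nodes` output requested here, so `awk` can tally how many nodes report the target version. The node names and version numbers below are illustrative stand-ins:

```shell
# Stand-in _cat/nodes output: name, version, node.role, master.
cat > /tmp/nodes.txt <<'EOF'
os-node-01  2.12.0  dimr  -
os-node-02  2.12.0  dimr  *
os-node-03  1.3.7   dimr  -
EOF
# Count nodes already reporting the (assumed) target version.
awk '$2 == "2.12.0" { n++ } END { print n+0 }' /tmp/nodes.txt
```

When the count equals the cluster size, every node has been upgraded.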

Example response

@@ -700,7 +700,7 @@ Some steps included in this section, like disabling shard replication and flushi curl -H 'Content-type: application/json' \ -X PUT "https://localhost:9201/_cluster/settings?pretty" \ -d'{"persistent":{"cluster.routing.allocation.enable":"all"}}' \ - -ku admin:< Admin password > + -ku admin:admin ``` {% include copy.html %}

Example response

@@ -735,7 +735,7 @@ For this cluster, post-upgrade validation steps can include verifying the follow 1. Verify the current running version of your OpenSearch nodes: ```bash curl -s "https://localhost:9201/_cat/nodes?v&h=name,version,node.role,master" \ - -ku admin:< Admin password > | column -t + -ku admin:admin | column -t ``` {% include copy.html %}

Example response

@@ -781,7 +781,7 @@ For this cluster, post-upgrade validation steps can include verifying the follow 1. Query the [Cluster health]({{site.url}}{{site.baseurl}}/api-reference/cluster-api/cluster-health/) API endpoint to see information about the health of your cluster. You should see a status of `green`, which indicates that all primary and replica shards are allocated: ```bash - curl -s "https://localhost:9201/_cluster/health?pretty" -ku admin:< Admin password > + curl -s "https://localhost:9201/_cluster/health?pretty" -ku admin:admin ``` {% include copy.html %}

Example response

@@ -808,7 +808,7 @@ For this cluster, post-upgrade validation steps can include verifying the follow ``` 1. Query the [CAT shards]({{site.url}}{{site.baseurl}}/api-reference/cat/cat-shards/) API endpoint to see how shards are allocated after the cluster is upgraded: ```bash - curl -s "https://localhost:9201/_cat/shards" -ku admin:< Admin password > + curl -s "https://localhost:9201/_cat/shards" -ku admin:admin ``` {% include copy.html %}

Example response

@@ -860,7 +860,7 @@ You need to query the ecommerce index again in order to confirm that the sample curl -H 'Content-Type: application/json' \ -X GET "https://localhost:9201/ecommerce/_search?pretty=true&filter_path=hits.total" \ -d'{"query":{"match":{"customer_first_name":"Sonya"}}}' \ - -ku admin:< Admin password > + -ku admin:admin ``` {% include copy.html %}

Example response

From cdeb90ffae8be7a3ae2f9de267abc08879dd44cd Mon Sep 17 00:00:00 2001 From: Derek Ho Date: Fri, 12 Jan 2024 11:58:54 -0500 Subject: [PATCH 5/8] Apply suggestions from code review Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Derek Ho --- _about/quickstart.md | 14 +++++----- _api-reference/tasks.md | 4 +-- _benchmark/quickstart.md | 2 +- _clients/javascript/index.md | 4 +-- _monitoring-your-cluster/pa/index.md | 2 +- _observing-your-data/log-ingestion.md | 2 +- _observing-your-data/trace/getting-started.md | 2 +- _reporting/rep-cli-env-var.md | 2 +- _search-plugins/sql/sql/index.md | 2 +- .../access-control/cross-cluster-search.md | 22 +++++++-------- _security/access-control/impersonation.md | 2 +- _troubleshoot/index.md | 2 +- .../remote-store/index.md | 2 +- _tuning-your-cluster/index.md | 2 +- .../replication-plugin/auto-follow.md | 10 +++---- .../replication-plugin/getting-started.md | 28 +++++++++---------- _upgrade-to/upgrade-to.md | 4 +-- 17 files changed, 53 insertions(+), 53 deletions(-) diff --git a/_about/quickstart.md b/_about/quickstart.md index 35f8fbb448..46a8351426 100644 --- a/_about/quickstart.md +++ b/_about/quickstart.md @@ -52,9 +52,9 @@ You'll need a special file, called a Compose file, that Docker Compose uses to d opensearch-node1 "./opensearch-docker…" opensearch-node1 running 0.0.0.0:9200->9200/tcp, 9300/tcp, 0.0.0.0:9600->9600/tcp, 9650/tcp opensearch-node2 "./opensearch-docker…" opensearch-node2 running 9200/tcp, 9300/tcp, 9600/tcp, 9650/tcp ``` -1. Query the OpenSearch REST API to verify that the service is running. You should use `-k` (also written as `--insecure`) to disable host name checking because the default security configuration uses demo certificates. Use `-u` to pass the default username and password (`admin:< Admin password >`). +1. Query the OpenSearch REST API to verify that the service is running. 
You should use `-k` (also written as `--insecure`) to disable host name checking because the default security configuration uses demo certificates. Use `-u` to pass the default username and password (`admin:<custom-admin-password>`). ```bash - curl https://localhost:9200 -ku admin:< Admin password > + curl https://localhost:9200 -ku admin:<custom-admin-password> ``` Sample response: ```json @@ -76,7 +76,7 @@ You'll need a special file, called a Compose file, that Docker Compose uses to d "tagline" : "The OpenSearch Project: https://opensearch.org/" } ``` -1. Explore OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the password is what you specified in the `docker-compose.yml`: `< Admin password >`. +1. Explore OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the password is the one set in your `docker-compose.yml` in the `OPENSEARCH_INITIAL_ADMIN_PASSWORD=<custom-admin-password>` setting. ## Create an index and field mappings using sample data @@ -100,18 +100,18 @@ Create an index and define field mappings using a dataset provided by the OpenSe ``` 1. Define the field mappings with the mapping file. ```bash - curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce" -ku admin:< Admin password > --data-binary "@ecommerce-field_mappings.json" + curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce" -ku admin:<custom-admin-password> --data-binary "@ecommerce-field_mappings.json" ``` 1. Upload the index to the bulk API. ```bash - curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce/_bulk" -ku admin:< Admin password > --data-binary "@ecommerce.json" + curl -H "Content-Type: application/x-ndjson" -X PUT "https://localhost:9200/ecommerce/_bulk" -ku admin:<custom-admin-password> --data-binary "@ecommerce.json" ``` 1. Query the data using the search API.
The following command submits a query that will return documents where `customer_first_name` is `Sonya`. ```bash - curl -H 'Content-Type: application/json' -X GET "https://localhost:9200/ecommerce/_search?pretty=true" -ku admin:< Admin password > -d' {"query":{"match":{"customer_first_name":"Sonya"}}}' + curl -H 'Content-Type: application/json' -X GET "https://localhost:9200/ecommerce/_search?pretty=true" -ku admin:<custom-admin-password> -d' {"query":{"match":{"customer_first_name":"Sonya"}}}' ``` Queries submitted to the OpenSearch REST API will generally return a flat JSON by default. For a human readable response body, use the query parameter `pretty=true`. For more information about `pretty` and other useful query parameters, see [Common REST parameters]({{site.url}}{{site.baseurl}}/opensearch/common-parameters/). -1. Access OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the default password is `< Admin password >`. +1. Access OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the password is the one set in your `docker-compose.yml` in the `OPENSEARCH_INITIAL_ADMIN_PASSWORD=<custom-admin-password>` setting. 1. On the top menu bar, go to **Management > Dev Tools**. 1.
In the left pane of the console, enter the following: ```json diff --git a/_api-reference/tasks.md b/_api-reference/tasks.md index 8ca34d15f4..5c3a41fd34 100644 --- a/_api-reference/tasks.md +++ b/_api-reference/tasks.md @@ -267,7 +267,7 @@ To associate requests with tasks for better tracking, you can provide a `X-Opaqu Usage: ```bash -curl -i -H "X-Opaque-Id: 111111" "https://localhost:9200/_tasks" -u 'admin:< Admin password >' --insecure +curl -i -H "X-Opaque-Id: 111111" "https://localhost:9200/_tasks" -u 'admin:<custom-admin-password>' --insecure ``` {% include copy.html %} @@ -326,6 +326,6 @@ content-length: 768 This operation supports the same parameters as the `tasks` operation. The following example shows how you can associate `X-Opaque-Id` with specific tasks: ```bash -curl -i -H "X-Opaque-Id: 123456" "https://localhost:9200/_tasks?nodes=opensearch-node1" -u 'admin:< Admin password >' --insecure +curl -i -H "X-Opaque-Id: 123456" "https://localhost:9200/_tasks?nodes=opensearch-node1" -u 'admin:<custom-admin-password>' --insecure ``` {% include copy.html %} diff --git a/_benchmark/quickstart.md b/_benchmark/quickstart.md index 4598837e03..0c23f74953 100644 --- a/_benchmark/quickstart.md +++ b/_benchmark/quickstart.md @@ -31,7 +31,7 @@ After installation, you can verify OpenSearch is running by going to `localhost: Use the following command to verify OpenSearch is running with SSL certificate checks disabled: ```bash -curl -k -u admin:< Admin password > https://localhost:9200 # the "-k" option skips SSL certificate checks +curl -k -u admin:<custom-admin-password> https://localhost:9200 # the "-k" option skips SSL certificate checks { "name" : "147ddae31bf8.opensearch.org", diff --git a/_clients/javascript/index.md b/_clients/javascript/index.md index c10e2018d0..f8bf9a9619 100644 --- a/_clients/javascript/index.md +++ b/_clients/javascript/index.md @@ -48,7 +48,7 @@ To connect to the default OpenSearch host, create a client object with the addre var host = "localhost"; var protocol = "https"; var port = 9200; -var auth =
"admin:< Admin password >"; // For testing only. Don't store credentials in code. +var auth = "admin:<custom-admin-password>"; // For testing only. Don't store credentials in code. var ca_certs_path = "/full/path/to/root-ca.pem"; // Optional client certificates if you don't want to use HTTP basic authentication. @@ -360,7 +360,7 @@ The following sample program creates a client, adds an index with non-default se var host = "localhost"; var protocol = "https"; var port = 9200; -var auth = "admin:< Admin password >"; // For testing only. Don't store credentials in code. +var auth = "admin:<custom-admin-password>"; // For testing only. Don't store credentials in code. var ca_certs_path = "/full/path/to/root-ca.pem"; // Optional client certificates if you don't want to use HTTP basic authentication. diff --git a/_monitoring-your-cluster/pa/index.md b/_monitoring-your-cluster/pa/index.md index 4d481a1f70..bb4f9c6c30 100644 --- a/_monitoring-your-cluster/pa/index.md +++ b/_monitoring-your-cluster/pa/index.md @@ -245,7 +245,7 @@ curl -XPOST http://localhost:9200/_plugins/_performanceanalyzer/rca/cluster/conf If you encounter the `curl: (52) Empty reply from server` response, run the following command to enable RCA: ```bash -curl -XPOST https://localhost:9200/_plugins/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:< Admin password >' -k +curl -XPOST https://localhost:9200/_plugins/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:<custom-admin-password>' -k ``` ### Example API query and response diff --git a/_observing-your-data/log-ingestion.md b/_observing-your-data/log-ingestion.md index 2644da0594..61f427d30e 100644 --- a/_observing-your-data/log-ingestion.md +++ b/_observing-your-data/log-ingestion.md @@ -63,7 +63,7 @@ This should result in a single document being written to the OpenSearch cluster Run the following command to see one of the raw documents in the OpenSearch cluster: ```bash -curl -X GET -u 'admin:<
Admin password >' -k 'https://localhost:9200/apache_logs/_search?pretty&size=1' +curl -X GET -u 'admin:<custom-admin-password>' -k 'https://localhost:9200/apache_logs/_search?pretty&size=1' ``` The response should show the parsed log data: diff --git a/_observing-your-data/trace/getting-started.md b/_observing-your-data/trace/getting-started.md index cc2695be48..d1bffb7050 100644 --- a/_observing-your-data/trace/getting-started.md +++ b/_observing-your-data/trace/getting-started.md @@ -76,7 +76,7 @@ node-0.example.com | [2020-11-19T16:29:55,267][INFO ][o.e.c.m.MetadataMappingSe In a new terminal window, run the following command to see one of the raw documents in the OpenSearch cluster: ```bash -curl -X GET -u 'admin:< Admin password >' -k 'https://localhost:9200/otel-v1-apm-span-000001/_search?pretty&size=1' +curl -X GET -u 'admin:<custom-admin-password>' -k 'https://localhost:9200/otel-v1-apm-span-000001/_search?pretty&size=1' ``` Navigate to `http://localhost:5601` in a web browser and choose **Trace Analytics**. You can see the results of your single click in the Jaeger HotROD web interface: the number of traces per API and HTTP method, latency trends, a color-coded map of the service architecture, and a list of trace IDs that you can use to drill down on individual operations.
diff --git a/_reporting/rep-cli-env-var.md b/_reporting/rep-cli-env-var.md index c12a5b5a71..0c80c81ca5 100644 --- a/_reporting/rep-cli-env-var.md +++ b/_reporting/rep-cli-env-var.md @@ -30,7 +30,7 @@ Values from the command line argument have higher priority than the environment The following command requests a report with basic authentication in PNG format: ``` -opensearch-reporting-cli --url https://localhost:5601/app/dashboards#/view/7adfa750-4c81-11e8-b3d7-01146121b73d --format png --auth basic --credentials admin:< Admin password > +opensearch-reporting-cli --url https://localhost:5601/app/dashboards#/view/7adfa750-4c81-11e8-b3d7-01146121b73d --format png --auth basic --credentials admin:<custom-admin-password> ``` Upon success, the report will download to the current directory. diff --git a/_search-plugins/sql/sql/index.md b/_search-plugins/sql/sql/index.md index c2279cbf46..7035b6d664 100644 --- a/_search-plugins/sql/sql/index.md +++ b/_search-plugins/sql/sql/index.md @@ -61,7 +61,7 @@ POST _plugins/_sql To run the preceding query in the command line, use the [curl](https://curl.haxx.se/) command: ```bash -curl -XPOST https://localhost:9200/_plugins/_sql -u 'admin:< Admin password >' -k -H 'Content-Type: application/json' -d '{"query": "SELECT * FROM my-index* LIMIT 50"}' +curl -XPOST https://localhost:9200/_plugins/_sql -u 'admin:<custom-admin-password>' -k -H 'Content-Type: application/json' -d '{"query": "SELECT * FROM my-index* LIMIT 50"}' ``` {% include copy.html %} diff --git a/_security/access-control/cross-cluster-search.md b/_security/access-control/cross-cluster-search.md index e1f8aef879..5eb4aae99e 100644 --- a/_security/access-control/cross-cluster-search.md +++ b/_security/access-control/cross-cluster-search.md @@ -77,7 +77,7 @@ services: - discovery.type=single-node - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM -
"OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >" # The initial admin password used by the demo configuration + - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=<custom-admin-password>" # The initial admin password used by the demo configuration ulimits: memlock: soft: -1 @@ -98,7 +98,7 @@ services: - discovery.type=single-node - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM - - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=< Admin password >" # The initial admin password used by the demo configuration + - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=<custom-admin-password>" # The initial admin password used by the demo configuration ulimits: memlock: soft: -1 @@ -122,13 +122,13 @@ networks: After the clusters start, verify the names of each: ```json -curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9200' +curl -XGET -u 'admin:<custom-admin-password>' -k 'https://localhost:9200' { "cluster_name" : "opensearch-ccs-cluster1", ... } -curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9250' +curl -XGET -u 'admin:<custom-admin-password>' -k 'https://localhost:9250' { "cluster_name" : "opensearch-ccs-cluster2", ... @@ -156,7 +156,7 @@ docker inspect --format='{% raw %}{{range .NetworkSettings.Networks}}{{.IPAddres On the coordinating cluster, add the remote cluster name and the IP address (with port 9300) for each "seed node."
In this case, you only have one seed node: ```json -curl -k -XPUT -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9250/_cluster/settings' -d ' +curl -k -XPUT -H 'Content-Type: application/json' -u 'admin:<custom-admin-password>' 'https://localhost:9250/_cluster/settings' -d ' { "persistent": { "cluster.remote": { @@ -171,13 +171,13 @@ curl -k -XPUT -H 'Content-Type: application/json' -u 'admin:< Admin password >' On the remote cluster, index a document: ```bash -curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/books/_doc/1' -d '{"Dracula": "Bram Stoker"}' +curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:<custom-admin-password>' 'https://localhost:9200/books/_doc/1' -d '{"Dracula": "Bram Stoker"}' ``` At this point, cross-cluster search works. You can test it using the `admin` user: ```bash -curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9250/opensearch-ccs-cluster1:books/_search?pretty' +curl -XGET -k -u 'admin:<custom-admin-password>' 'https://localhost:9250/opensearch-ccs-cluster1:books/_search?pretty' { ...
"hits": [{ @@ -194,8 +194,8 @@ curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9250/opensearch-c To continue testing, create a new user on both clusters: ```bash -curl -XPUT -k -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_security/api/internalusers/booksuser' -H 'Content-Type: application/json' -d '{"password":"password"}' -curl -XPUT -k -u 'admin:< Admin password >' 'https://localhost:9250/_plugins/_security/api/internalusers/booksuser' -H 'Content-Type: application/json' -d '{"password":"password"}' +curl -XPUT -k -u 'admin:<custom-admin-password>' 'https://localhost:9200/_plugins/_security/api/internalusers/booksuser' -H 'Content-Type: application/json' -d '{"password":"password"}' +curl -XPUT -k -u 'admin:<custom-admin-password>' 'https://localhost:9250/_plugins/_security/api/internalusers/booksuser' -H 'Content-Type: application/json' -d '{"password":"password"}' ``` Then run the same search as before with `booksuser`: @@ -220,8 +220,8 @@ curl -XGET -k -u booksuser:password 'https://localhost:9250/opensearch-ccs-clust Note the permissions error.
On the remote cluster, create a role with the appropriate permissions, and map `booksuser` to that role: ```bash -curl -XPUT -k -u 'admin:< Admin password >' -H 'Content-Type: application/json' 'https://localhost:9200/_plugins/_security/api/roles/booksrole' -d '{"index_permissions":[{"index_patterns":["books"],"allowed_actions":["indices:admin/shards/search_shards","indices:data/read/search"]}]}' -curl -XPUT -k -u 'admin:< Admin password >' -H 'Content-Type: application/json' 'https://localhost:9200/_plugins/_security/api/rolesmapping/booksrole' -d '{"users" : ["booksuser"]}' +curl -XPUT -k -u 'admin:<custom-admin-password>' -H 'Content-Type: application/json' 'https://localhost:9200/_plugins/_security/api/roles/booksrole' -d '{"index_permissions":[{"index_patterns":["books"],"allowed_actions":["indices:admin/shards/search_shards","indices:data/read/search"]}]}' +curl -XPUT -k -u 'admin:<custom-admin-password>' -H 'Content-Type: application/json' 'https://localhost:9200/_plugins/_security/api/rolesmapping/booksrole' -d '{"users" : ["booksuser"]}' ``` Both clusters must have the user, but only the remote cluster needs the role and mapping; in this case, the coordinating cluster handles authentication (i.e. "Does this request include valid user credentials?"), and the remote cluster handles authorization (i.e. "Can this user access this data?"). diff --git a/_security/access-control/impersonation.md b/_security/access-control/impersonation.md index dd9481d623..4bf7ab689d 100644 --- a/_security/access-control/impersonation.md +++ b/_security/access-control/impersonation.md @@ -47,5 +47,5 @@ plugins.security.authcz.impersonation_dn: To impersonate another user, submit a request to the system with the HTTP header `opendistro_security_impersonate_as` set to the name of the user to be impersonated.
A good test is to make a GET request to the `_plugins/_security/authinfo` URI: ```bash -curl -XGET -u 'admin:< Admin password >' -k -H "opendistro_security_impersonate_as: user_1" https://localhost:9200/_plugins/_security/authinfo?pretty +curl -XGET -u 'admin:<custom-admin-password>' -k -H "opendistro_security_impersonate_as: user_1" https://localhost:9200/_plugins/_security/authinfo?pretty ``` diff --git a/_troubleshoot/index.md b/_troubleshoot/index.md index 27a30ba1f3..22c7a1018f 100644 --- a/_troubleshoot/index.md +++ b/_troubleshoot/index.md @@ -30,7 +30,7 @@ If you run legacy Kibana OSS scripts against OpenSearch Dashboards---for example In this case, your scripts likely include the `"kbn-xsrf: true"` header. Switch it to the `osd-xsrf: true` header: ``` -curl -XPOST -u 'admin:< Admin password >' 'https://DASHBOARDS_ENDPOINT/api/saved_objects/_import' -H 'osd-xsrf:true' --form file=@export.ndjson +curl -XPOST -u 'admin:<custom-admin-password>' 'https://DASHBOARDS_ENDPOINT/api/saved_objects/_import' -H 'osd-xsrf:true' --form file=@export.ndjson ``` diff --git a/_tuning-your-cluster/availability-and-recovery/remote-store/index.md b/_tuning-your-cluster/availability-and-recovery/remote-store/index.md index 8850874336..5fd19f5cc2 100644 --- a/_tuning-your-cluster/availability-and-recovery/remote-store/index.md +++ b/_tuning-your-cluster/availability-and-recovery/remote-store/index.md @@ -86,7 +86,7 @@ curl -X POST "https://localhost:9200/_remotestore/_restore" -H 'Content-Type: ap **Restore all shards of a given index** ```bash -curl -X POST "https://localhost:9200/_remotestore/_restore?restore_all_shards=true" -ku admin:< Admin password > -H 'Content-Type: application/json' -d' +curl -X POST "https://localhost:9200/_remotestore/_restore?restore_all_shards=true" -ku admin:<custom-admin-password> -H 'Content-Type: application/json' -d' { "indices": ["my-index"] } diff --git a/_tuning-your-cluster/index.md b/_tuning-your-cluster/index.md index 1608bbb66c..4a2a027d2b 100644 --- a/_tuning-your-cluster/index.md +++
b/_tuning-your-cluster/index.md @@ -177,7 +177,7 @@ less /var/log/opensearch/opensearch-cluster.log Perform the following `_cat` query on any node to see all the nodes formed as a cluster: ```bash -curl -XGET https://:9200/_cat/nodes?v -u 'admin:< Admin password >' --insecure +curl -XGET https://:9200/_cat/nodes?v -u 'admin:<custom-admin-password>' --insecure ``` ``` diff --git a/_tuning-your-cluster/replication-plugin/auto-follow.md b/_tuning-your-cluster/replication-plugin/auto-follow.md index 66390303bc..828b835387 100644 --- a/_tuning-your-cluster/replication-plugin/auto-follow.md +++ b/_tuning-your-cluster/replication-plugin/auto-follow.md @@ -28,7 +28,7 @@ Replication rules are a collection of patterns that you create against a single Create a replication rule on the follower cluster: ```bash -curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/_autofollow?pretty' -d ' +curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:<custom-admin-password>' 'https://localhost:9200/_plugins/_replication/_autofollow?pretty' -d ' { "leader_alias" : "my-connection-alias", "name": "my-replication-rule", @@ -46,13 +46,13 @@ If the Security plugin is disabled, you can leave out the `use_roles` parameter. To test the rule, create a matching index on the leader cluster: ```bash -curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9201/movies-0001?pretty' +curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:<custom-admin-password>' 'https://localhost:9201/movies-0001?pretty' ``` And confirm its replica shows up on the follower cluster: ```bash -curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9200/_cat/indices?v' +curl -XGET -u 'admin:<custom-admin-password>' -k 'https://localhost:9200/_cat/indices?v' ``` It might take several seconds for the index to appear.
@@ -67,7 +67,7 @@ yellow open movies-0001 kHOxYYHxRMeszLjTD9rvSQ 1 1 0 To retrieve a list of existing replication rules that are configured on a cluster, send the following request: ```bash -curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9200/_plugins/_replication/autofollow_stats' +curl -XGET -u 'admin:<custom-admin-password>' -k 'https://localhost:9200/_plugins/_replication/autofollow_stats' { "num_success_start_replication": 1, @@ -96,7 +96,7 @@ curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9200/_plugins/_re To delete a replication rule, send the following request to the follower cluster: ```bash -curl -XDELETE -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/_autofollow?pretty' -d ' +curl -XDELETE -k -H 'Content-Type: application/json' -u 'admin:<custom-admin-password>' 'https://localhost:9200/_plugins/_replication/_autofollow?pretty' -d ' { "leader_alias" : "my-conection-alias", "name": "my-replication-rule" diff --git a/_tuning-your-cluster/replication-plugin/getting-started.md b/_tuning-your-cluster/replication-plugin/getting-started.md index 57842fb038..8c8fb12d4f 100644 --- a/_tuning-your-cluster/replication-plugin/getting-started.md +++ b/_tuning-your-cluster/replication-plugin/getting-started.md @@ -32,7 +32,7 @@ In addition, verify and add the distinguished names (DNs) of each follower clust First, get the node's DN from each follower cluster: ```bash -curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_opendistro/_security/api/ssl/certs?pretty' +curl -XGET -k -u 'admin:<custom-admin-password>' 'https://localhost:9200/_opendistro/_security/api/ssl/certs?pretty' { "transport_certificates_list": [ @@ -110,13 +110,13 @@ networks: After the clusters start, verify the names of each: ```bash -curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9201' +curl -XGET -u 'admin:<custom-admin-password>' -k 'https://localhost:9201' { "cluster_name" : "leader-cluster", ...
} -curl -XGET -u 'admin:< Admin password >' -k 'https://localhost:9200' +curl -XGET -u 'admin:<custom-admin-password>' -k 'https://localhost:9200' { "cluster_name" : "follower-cluster", ... @@ -148,7 +148,7 @@ Cross-cluster replication follows a "pull" model, so most changes occur on the f On the follower cluster, add the IP address (with port 9300) for each seed node. Because this is a single-node cluster, you only have one seed node. Provide a descriptive name for the connection, which you'll use in the request to start replication: ```bash -curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_cluster/settings?pretty' -d ' +curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:<custom-admin-password>' 'https://localhost:9200/_cluster/settings?pretty' -d ' { "persistent": { "cluster": { @@ -167,13 +167,13 @@ curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' To get started, create an index called `leader-01` on the leader cluster: ```bash -curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9201/leader-01?pretty' +curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:<custom-admin-password>' 'https://localhost:9201/leader-01?pretty' ``` Then start replication from the follower cluster.
In the request body, provide the connection name and leader index that you want to replicate, along with the security roles you want to use: ```bash -curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_start?pretty' -d ' +curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:<custom-admin-password>' 'https://localhost:9200/_plugins/_replication/follower-01/_start?pretty' -d ' { "leader_alias": "my-connection-alias", "leader_index": "leader-01", @@ -194,7 +194,7 @@ This command creates an identical read-only index named `follower-01` on the fol After replication starts, get the status: ```bash -curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty' +curl -XGET -k -u 'admin:<custom-admin-password>' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty' { "status" : "SYNCING", @@ -217,13 +217,13 @@ The leader and follower checkpoint values begin as negative numbers and reflect To confirm that replication is actually happening, add a document to the leader index: ```bash -curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9201/leader-01/_doc/1?pretty' -d '{"The Shining": "Stephen King"}' +curl -XPUT -k -H 'Content-Type: application/json' -u 'admin:<custom-admin-password>' 'https://localhost:9201/leader-01/_doc/1?pretty' -d '{"The Shining": "Stephen King"}' ``` Then validate the replicated content on the follower index: ```bash -curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/follower-01/_search?pretty' +curl -XGET -k -u 'admin:<custom-admin-password>' 'https://localhost:9200/follower-01/_search?pretty' { ...
@@ -243,13 +243,13 @@ curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/follower-01/
 You can temporarily pause replication of an index if you need to remediate issues or reduce load on the leader cluster:

 ```bash
-curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_pause?pretty' -d '{}'
+curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:' 'https://localhost:9200/_plugins/_replication/follower-01/_pause?pretty' -d '{}'
 ```

 To confirm that replication is paused, get the status:

 ```bash
-curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty'
+curl -XGET -k -u 'admin:' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty'

 {
   "status" : "PAUSED",
@@ -263,7 +263,7 @@ curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_re
 When you're done making changes, resume replication:

 ```bash
-curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_resume?pretty' -d '{}'
+curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:' 'https://localhost:9200/_plugins/_replication/follower-01/_resume?pretty' -d '{}'
 ```

 When replication resumes, the follower index picks up any changes that were made to the leader index while replication was paused.
@@ -275,7 +275,7 @@ Note that you can't resume replication after it's been paused for more than 12 h
 When you no longer need to replicate an index, terminate replication from the follower cluster:

 ```bash
-curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_stop?pretty' -d '{}'
+curl -XPOST -k -H 'Content-Type: application/json' -u 'admin:' 'https://localhost:9200/_plugins/_replication/follower-01/_stop?pretty' -d '{}'
 ```

 When you stop replication, the follower index un-follows the leader and becomes a standard index that you can write to. You can't restart replication after stopping it.
@@ -283,7 +283,7 @@ When you stop replication, the follower index un-follows the leader and becomes
 Get the status to confirm that the index is no longer being replicated:

 ```bash
-curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty'
+curl -XGET -k -u 'admin:' 'https://localhost:9200/_plugins/_replication/follower-01/_status?pretty'

 {
   "status" : "REPLICATION NOT IN PROGRESS"

diff --git a/_upgrade-to/upgrade-to.md b/_upgrade-to/upgrade-to.md
index 462937cf5f..340055b214 100644
--- a/_upgrade-to/upgrade-to.md
+++ b/_upgrade-to/upgrade-to.md
@@ -87,7 +87,7 @@ If you are migrating an Open Distro for Elasticsearch cluster, we recommend firs
    # Elasticsearch OSS
    curl -XGET 'localhost:9200/_nodes/_all?pretty=true'
    # Open Distro for Elasticsearch with Security plugin enabled
-   curl -XGET 'https://localhost:9200/_nodes/_all?pretty=true' -u 'admin:< Admin password >' -k
+   curl -XGET 'https://localhost:9200/_nodes/_all?pretty=true' -u 'admin:' -k
    ```

    Specifically, check the `nodes..version` portion of the response. Also check `_cat/indices?v` for a green status on all indexes.
@@ -169,7 +169,7 @@ If you are migrating an Open Distro for Elasticsearch cluster, we recommend firs
    # Security plugin disabled
    curl -XGET 'localhost:9200/_nodes/_all?pretty=true'
    # Security plugin enabled
-   curl -XGET -k -u 'admin:< Admin password >' 'https://localhost:9200/_nodes/_all?pretty=true'
+   curl -XGET -k -u 'admin:' 'https://localhost:9200/_nodes/_all?pretty=true'
    ```

    Specifically, check the `nodes..version` portion of the response. Also check `_cat/indices?v` for a green status on all indexes.

From 2165d52ee7732bd3e5f66d1dc7ba460a103460e1 Mon Sep 17 00:00:00 2001
From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
Date: Tue, 23 Jan 2024 10:08:38 -0600
Subject: [PATCH 6/8] Update _about/quickstart.md

Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
---
 _about/quickstart.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_about/quickstart.md b/_about/quickstart.md
index 46a8351426..b6586b43b1 100644
--- a/_about/quickstart.md
+++ b/_about/quickstart.md
@@ -52,7 +52,7 @@ You'll need a special file, called a Compose file, that Docker Compose uses to d
    opensearch-node1    "./opensearch-docker…"    opensearch-node1    running    0.0.0.0:9200->9200/tcp, 9300/tcp, 0.0.0.0:9600->9600/tcp, 9650/tcp
    opensearch-node2    "./opensearch-docker…"    opensearch-node2    running    9200/tcp, 9300/tcp, 9600/tcp, 9650/tcp
    ```
-1. Query the OpenSearch REST API to verify that the service is running. You should use `-k` (also written as `--insecure`) to disable host name checking because the default security configuration uses demo certificates. Use `-u` to pass the default username and password (`admin:`).
+1. Query the OpenSearch REST API to verify that the service is running. You should use `-k` (also written as `--insecure`) to disable hostname checking because the default security configuration uses demo certificates. Use `-u` to pass the default username and password (`admin:`).
   ```bash
   curl https://localhost:9200 -ku admin:
   ```

From be5762d2d1f46661c7c5edc889fa40ce5052808e Mon Sep 17 00:00:00 2001
From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
Date: Tue, 23 Jan 2024 11:42:14 -0600
Subject: [PATCH 7/8] Apply suggestions from code review

Co-authored-by: Nathan Bower
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
---
 _about/quickstart.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_about/quickstart.md b/_about/quickstart.md
index b6586b43b1..e624fff7ad 100644
--- a/_about/quickstart.md
+++ b/_about/quickstart.md
@@ -76,7 +76,7 @@ You'll need a special file, called a Compose file, that Docker Compose uses to d
      "tagline" : "The OpenSearch Project: https://opensearch.org/"
    }
    ```
-1. Explore OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the password set in your`docker-compose.yml` in the `OPENSEARCH_INITIAL_ADMIN_PASSWORD=` setting.
+1. Explore OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the default password is set in your `docker-compose.yml` file in the `OPENSEARCH_INITIAL_ADMIN_PASSWORD=` setting.

 ## Create an index and field mappings using sample data

@@ -111,7 +111,7 @@ Create an index and define field mappings using a dataset provided by the OpenSe
    curl -H 'Content-Type: application/json' -X GET "https://localhost:9200/ecommerce/_search?pretty=true" -ku admin: -d' {"query":{"match":{"customer_first_name":"Sonya"}}}'
    ```
    Queries submitted to the OpenSearch REST API will generally return a flat JSON by default. For a human readable response body, use the query parameter `pretty=true`.
For more information about `pretty` and other useful query parameters, see [Common REST parameters]({{site.url}}{{site.baseurl}}/opensearch/common-parameters/).
-1. Access OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the default password set in your `opensearch.yml` configuration under Explore OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the password set in your`docker-compose.yml` in the `OPENSEARCH_INITIAL_ADMIN_PASSWORD=` setting.
+1. Access OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the default password is set in your `opensearch.yml` configuration under Explore OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the password is set in your `docker-compose.yml` file in the `OPENSEARCH_INITIAL_ADMIN_PASSWORD=` setting.
 1. On the top menu bar, go to **Management > Dev Tools**.
 1. In the left pane of the console, enter the following:

    ```json

From e0f1c21dcf27e5bfe42f49ee0c5f94028512c211 Mon Sep 17 00:00:00 2001
From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
Date: Tue, 23 Jan 2024 12:25:18 -0600
Subject: [PATCH 8/8] Update quickstart.md

Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
---
 _about/quickstart.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_about/quickstart.md b/_about/quickstart.md
index e624fff7ad..5c7da2950e 100644
--- a/_about/quickstart.md
+++ b/_about/quickstart.md
@@ -111,7 +111,7 @@ Create an index and define field mappings using a dataset provided by the OpenSe
    curl -H 'Content-Type: application/json' -X GET "https://localhost:9200/ecommerce/_search?pretty=true" -ku admin: -d' {"query":{"match":{"customer_first_name":"Sonya"}}}'
    ```
    Queries submitted to the OpenSearch REST API will generally return a flat JSON by default. For a human readable response body, use the query parameter `pretty=true`. For more information about `pretty` and other useful query parameters, see [Common REST parameters]({{site.url}}{{site.baseurl}}/opensearch/common-parameters/).
-1. Access OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the default password is set in your `opensearch.yml` configuration under Explore OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the password is set in your `docker-compose.yml` file in the `OPENSEARCH_INITIAL_ADMIN_PASSWORD=` setting.
+1. Access OpenSearch Dashboards by opening `http://localhost:5601/` in a web browser on the same host that is running your OpenSearch cluster. The default username is `admin` and the password is set in your `docker-compose.yml` file in the `OPENSEARCH_INITIAL_ADMIN_PASSWORD=` setting.
 1. On the top menu bar, go to **Management > Dev Tools**.
 1. In the left pane of the console, enter the following:

    ```json
@@ -162,4 +162,4 @@ OpenSearch will fail to start if your host's `vm.max_map_count` is too low. Revi
 opensearch-node1  | ERROR: [1] bootstrap checks failed
 opensearch-node1  | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
 opensearch-node1  | ERROR: OpenSearch did not exit normally - check the logs at /usr/share/opensearch/logs/opensearch-cluster.log
-```
\ No newline at end of file
+```
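A recurring theme across these patches is replacing the hard-coded `admin:admin` credentials with a password supplied at install time. As a hedged sketch outside the patch series itself, the same curl calls can take the password from the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` environment variable that the installers already use; the helper function names below are illustrative only, not OpenSearch APIs:

```shell
#!/bin/sh
# Hypothetical helpers mirroring the curl calls in the docs above, so the
# admin password is read from the environment rather than typed inline.

# Placeholder default for illustration; a real cluster uses the password
# you set via OPENSEARCH_INITIAL_ADMIN_PASSWORD at install time.
: "${OPENSEARCH_INITIAL_ADMIN_PASSWORD:=placeholder}"

# Build the replication status endpoint for a follower index (e.g. follower-01).
replication_status_url() {
  printf 'https://localhost:9200/_plugins/_replication/%s/_status?pretty' "$1"
}

# Build the value for curl's -u flag without hard-coding the password.
auth_arg() {
  printf 'admin:%s' "$OPENSEARCH_INITIAL_ADMIN_PASSWORD"
}

# Against a running follower cluster you would then call:
#   curl -XGET -k -u "$(auth_arg)" "$(replication_status_url follower-01)"
replication_status_url follower-01
```

Note that `-k` is still needed with the demo certificates, since they are self-signed; only the credential handling changes here.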