Install Elasticsearch 8.x on the Ubuntu operating system.
Installation
Install several packages that will be used later in this article.
$ sudo apt-get install curl gnupg apt-transport-https unzip jq
Import the Elasticsearch public key using curl.
$ curl https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
Define the Elasticsearch repository.
$ echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
Update package index.
$ sudo apt-get update
Hit:1 http://pl.archive.ubuntu.com/ubuntu jammy InRelease
Get:2 http://pl.archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Get:3 https://artifacts.elastic.co/packages/8.x/apt stable InRelease [10.4 kB]
Get:4 https://artifacts.elastic.co/packages/8.x/apt stable/main amd64 Packages [55.3 kB]
Get:5 http://pl.archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB]
Get:6 http://pl.archive.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Fetched 402 kB in 1s (708 kB/s)
Reading package lists... Done
Display package information.
$ apt info elasticsearch
Package: elasticsearch
Version: 8.8.2
Priority: optional
Section: web
Source: elasticsearch
Maintainer: Elasticsearch Team <info@elastic.co>
Installed-Size: 1,236 MB
Depends: bash (>= 4.1), lsb-base (>= 4), libc6, adduser, coreutils (>= 8.4)
Conflicts: elasticsearch-oss
Homepage: https://www.elastic.co/
License: Elastic-License
Download-Size: 597 MB
APT-Sources: https://artifacts.elastic.co/packages/8.x/apt stable/main amd64 Packages
Description: Distributed RESTful search engine built for the cloud
 Reference documentation can be found at
 https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
 and the 'Elasticsearch: The Definitive Guide' book can be found at
 https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html

N: There are 29 additional records. Please use the '-a' switch to see them.
List available Elasticsearch versions.
$ apt list --all-versions elasticsearch
Listing... Done
elasticsearch/stable 8.8.2 amd64
elasticsearch/stable 8.8.1 amd64
elasticsearch/stable 8.8.0 amd64
elasticsearch/stable 8.7.1 amd64
elasticsearch/stable 8.7.0 amd64
elasticsearch/stable 8.6.2 amd64
elasticsearch/stable 8.6.1 amd64
elasticsearch/stable 8.6.0 amd64
elasticsearch/stable 8.5.3 amd64
elasticsearch/stable 8.5.2 amd64
elasticsearch/stable 8.5.1 amd64
elasticsearch/stable 8.5.0 amd64
elasticsearch/stable 8.4.3 amd64
elasticsearch/stable 8.4.2 amd64
elasticsearch/stable 8.4.1 amd64
elasticsearch/stable 8.4.0 amd64
elasticsearch/stable 8.3.3 amd64
elasticsearch/stable 8.3.2 amd64
elasticsearch/stable 8.3.1 amd64
elasticsearch/stable 8.3.0 amd64
elasticsearch/stable 8.2.3 amd64
elasticsearch/stable 8.2.2 amd64
elasticsearch/stable 8.2.1 amd64
elasticsearch/stable 8.2.0 amd64
elasticsearch/stable 8.1.3 amd64
elasticsearch/stable 8.1.2 amd64
elasticsearch/stable 8.1.1 amd64
elasticsearch/stable 8.1.0 amd64
elasticsearch/stable 8.0.1 amd64
elasticsearch/stable 8.0.0 amd64
Install the latest Elasticsearch package.
$ sudo apt install elasticsearch
The service is disabled by default.
$ systemctl status elasticsearch.service
○ elasticsearch.service - Elasticsearch
     Loaded: loaded (/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: https://www.elastic.co
Do not start it yet, as it needs to be configured first.
Define default superuser password
Set the bootstrap superuser password on every node.
$ sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add bootstrap.password
Enter value for bootstrap.password: ************
Later, you will use the elastic user with the password defined above.
Generate transport CA and certificate
Generate CA.
$ sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]:
Enter password for elastic-stack-ca.p12 : ************
Move it to the configuration directory.
$ sudo mv /usr/share/elasticsearch/elastic-stack-ca.p12 /etc/elasticsearch/certs/
Update file permissions.
$ sudo chown :elasticsearch /etc/elasticsearch/certs/elastic-stack-ca.p12
$ sudo chmod 640 /etc/elasticsearch/certs/elastic-stack-ca.p12
Generate transport certificate.
$ sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca /etc/elasticsearch/certs/elastic-stack-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
      on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
      instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
      the details of each instance in a YAML file
    * An instance is any piece of the Elastic Stack that requires an SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.
    * All certificates generated by this tool will be signed by a certificate authority (CA)
      unless the --self-signed command line option is specified.
      The tool can automatically generate a new CA for you, or you can provide your own with
      the --ca or --ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be a zip file containing individual certificate/key files

Enter password for CA (/etc/elasticsearch/certs/elastic-stack-ca.p12) : ************
Please enter the desired output file [elastic-certificates.p12]:
Enter password for elastic-certificates.p12 : ************

Certificates written to /usr/share/elasticsearch/elastic-certificates.p12

This file should be properly secured as it contains the private key for your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy this '.p12' file to
the relevant configuration directory and then follow the SSL configuration instructions
in the product guide.

For client applications, you may only need to copy the CA certificate and configure the
client to trust this certificate.
Move it to the configuration directory.
$ sudo mv /usr/share/elasticsearch/elastic-certificates.p12 /etc/elasticsearch/certs/
Update file permissions.
$ sudo chown :elasticsearch /etc/elasticsearch/certs/elastic-certificates.p12
$ sudo chmod 640 /etc/elasticsearch/certs/elastic-certificates.p12
Copy these two files to other nodes.
Store the certificate password on every node.
$ sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
Setting xpack.security.transport.ssl.keystore.secure_password already exists. Overwrite? [y/N] y
Enter value for xpack.security.transport.ssl.keystore.secure_password: ************
Note that this certificate does not contain a domain name or IP address.
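You can double-check what a generated PKCS#12 bundle contains with plain openssl. The sketch below builds a throwaway bundle so it can run anywhere; on a node, point the final command at /etc/elasticsearch/certs/elastic-certificates.p12 (with its real password) to review the subject and confirm the absence of SAN entries.

```shell
set -e
workdir=$(mktemp -d)
# Throwaway key and self-signed certificate (stand-ins for the certutil output).
openssl req -x509 -newkey rsa:2048 -days 1 -nodes \
  -keyout "$workdir/key.pem" -out "$workdir/cert.pem" -subj "/CN=demo-ca" 2>/dev/null
# Bundle them into a PKCS#12 file, as elasticsearch-certutil does.
openssl pkcs12 -export -inkey "$workdir/key.pem" -in "$workdir/cert.pem" \
  -out "$workdir/demo.p12" -passout pass:secret
# Print certificate details from the bundle; -nokeys skips the private key.
openssl pkcs12 -in "$workdir/demo.p12" -passin pass:secret -nodes -nokeys | grep subject
rm -r "$workdir"
```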
Generate http CA and certificates
Generate CA.
$ sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]: elastic-http-ca.p12
Enter password for elastic-http-ca.p12 : ************
Move it to the configuration directory.
$ sudo mv /usr/share/elasticsearch/elastic-http-ca.p12 /etc/elasticsearch/certs/
Update file permissions.
$ sudo chown :elasticsearch /etc/elasticsearch/certs/elastic-http-ca.p12
$ sudo chmod 640 /etc/elasticsearch/certs/elastic-http-ca.p12
Copy this file to other nodes.
Generate a certificate on each node using the generated CA.
$ sudo /usr/share/elasticsearch/bin/elasticsearch-certutil http
## Elasticsearch HTTP Certificate Utility

The 'http' command guides you through the process of generating certificates
for use on the HTTP (Rest) interface for Elasticsearch.

This tool will ask you a number of questions in order to generate the right
set of files for your needs.

## Do you wish to generate a Certificate Signing Request (CSR)?

A CSR is used when you want your certificate to be created by an existing
Certificate Authority (CA) that you do not control (that is, you don't have
access to the keys for that CA).

If you are in a corporate environment with a central security team, then you
may have an existing Corporate CA that can generate your certificate for you.
Infrastructure within your organisation may already be configured to trust this
CA, so it may be easier for clients to connect to Elasticsearch if you use a
CSR and send that request to the team that controls your CA.

If you choose not to generate a CSR, this tool will generate a new certificate
for you. That certificate will be signed by a CA under your control. This is a
quick and easy way to secure your cluster with TLS, but you will need to
configure all your clients to trust that custom CA.

Generate a CSR? [y/N]

## Do you have an existing Certificate Authority (CA) key-pair that you wish to use to sign your certificate?

If you have an existing CA certificate and key, then you can use that CA to
sign your new http certificate. This allows you to use the same CA across
multiple Elasticsearch clusters which can make it easier to configure clients,
and may be easier for you to manage.

If you do not have an existing CA, one will be generated for you.

Use an existing CA? [y/N] y

## What is the path to your CA?

Please enter the full pathname to the Certificate Authority that you wish to
use for signing your new http certificate. This can be in PKCS#12 (.p12), JKS
(.jks) or PEM (.crt, .key, .pem) format.
CA Path: /etc/elasticsearch/certs/elastic-http-ca.p12
Reading a PKCS12 keystore requires a password.
It is possible for the keystore's password to be blank,
in which case you can simply press <ENTER> at the prompt
Password for elastic-http-ca.p12: ************

## How long should your certificates be valid?

Every certificate has an expiry date. When the expiry date is reached clients
will stop trusting your certificate and TLS connections will fail.

Best practice suggests that you should either:
(a) set this to a short duration (90 - 120 days) and have automatic processes
to generate a new certificate before the old one expires, or
(b) set it to a longer duration (3 - 5 years) and then perform a manual update
a few months before it expires.

You may enter the validity period in years (e.g. 3Y), months (e.g. 18M), or days (e.g. 90D)

For how long should your certificate be valid? [5y]

## Do you wish to generate one certificate per node?

If you have multiple nodes in your cluster, then you may choose to generate a
separate certificate for each of these nodes. Each certificate will have its
own private key, and will be issued for a specific hostname or IP address.

Alternatively, you may wish to generate a single certificate that is valid
across all the hostnames or addresses in your cluster.

If all of your nodes will be accessed through a single domain
(e.g. node01.es.example.com, node02.es.example.com, etc) then you may find it
simpler to generate one certificate with a wildcard hostname (*.es.example.com)
and use that across all of your nodes.

However, if you do not have a common domain name, and you expect to add
additional nodes to your cluster in the future, then you should generate a
certificate per node so that you can more easily generate new certificates
when you provision new nodes.

Generate a certificate per node? [y/N] y

## What is the name of node #1?

This name will be used as part of the certificate file name, and as a
descriptive name within the certificate.

You can use any descriptive name that you like, but we recommend using the name
of the Elasticsearch node.

node #1 name: elastic-1

## Which hostnames will be used to connect to elastic-1?

These hostnames will be added as "DNS" names in the "Subject Alternative Name"
(SAN) field in your certificate.

You should list every hostname and variant that people will use to connect to
your cluster over http.
Do not list IP addresses here, you will be asked to enter them later.

If you wish to use a wildcard certificate (for example *.es.example.com) you
can enter that here.

Enter all the hostnames that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.

You did not enter any hostnames.
Clients are likely to encounter TLS hostname verification errors if they
connect to your cluster using a DNS name.

Is this correct [Y/n]

## Which IP addresses will be used to connect to elastic-1?

If your clients will ever connect to your nodes by numeric IP address, then you
can list these as valid IP "Subject Alternative Name" (SAN) fields in your
certificate.

If you do not have fixed IP addresses, or not wish to support direct IP access
to your cluster then you can just press <ENTER> to skip this step.

Enter all the IP addresses that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.

192.168.8.153

You entered the following IP addresses.

 - 192.168.8.153

Is this correct [Y/n]

## Other certificate options

The generated certificate will have the following additional configuration
values. These values have been selected based on a combination of the
information you have provided above and secure defaults. You should not need to
change these values unless you have specific requirements.

Key Name: elastic-1
Subject DN: CN=elastic-1
Key Size: 2048

Do you wish to change any of these options? [y/N]

Generate additional certificates? [Y/n] n

## What password do you want for your private key(s)?

Your private key(s) will be stored in a PKCS#12 keystore file named "http.p12".

This type of keystore is always password protected, but it is possible to use a
blank password.

If you wish to use a blank password, simply press <ENTER> at the prompt below.
Provide a password for the "http.p12" file:  [<ENTER> for none]
Repeat password to confirm:

## Where should we save the generated files?

A number of files will be generated including your private key(s),
public certificate(s), and sample configuration options for Elastic Stack products.

These files will be included in a single zip archive.

What filename should be used for the output zip file? [/usr/share/elasticsearch/elasticsearch-ssl-http.zip]

Zip file written to /usr/share/elasticsearch/elasticsearch-ssl-http.zip
Extract and move the certificate.
$ sudo unzip /usr/share/elasticsearch/elasticsearch-ssl-http.zip elasticsearch/http.p12 -d /etc/
Archive: /usr/share/elasticsearch/elasticsearch-ssl-http.zip inflating: /etc/elasticsearch/http.p12
$ sudo mv /etc/elasticsearch/http.p12 /etc/elasticsearch/certs/
Update certificate permissions.
$ sudo chown :elasticsearch /etc/elasticsearch/certs/http.p12
$ sudo chmod 640 /etc/elasticsearch/certs/http.p12
Remove ZIP file.
$ sudo rm /usr/share/elasticsearch/elasticsearch-ssl-http.zip
Store the http certificate password on every node.
$ sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
Setting xpack.security.http.ssl.keystore.secure_password already exists. Overwrite? [y/N] y
Enter value for xpack.security.http.ssl.keystore.secure_password: ************
Configure Elasticsearch service
Elasticsearch automatically sets the JVM heap size, so I will skip this part.
Inspect initial configuration.
$ sudo cat /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 22-07-2023 16:55:29
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["jammy"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
Edit /etc/elasticsearch/elasticsearch.yml on each node to alter initial settings like the listen address, cluster name, and node name.
node:
  name: elastic-1
  roles: [ master, data ]
network:
  host: 192.168.8.153
cluster:
  name: elasticsearch-cluster
  initial_master_nodes:
    - elastic-1
    - elastic-2
    - elastic-3
discovery:
  seed_hosts:
    - 192.168.8.153
    - 192.168.8.159
    - 192.168.8.163
bootstrap:
  memory_lock: "true"
path:
  data: /var/lib/elasticsearch
  logs: /var/log/elasticsearch
xpack:
  security:
    autoconfiguration:
      enabled: false
    enrollment:
      enabled: false
    transport:
      ssl:
        enabled: true
        verification_mode: certificate
        client_authentication: required
        keystore:
          path: certs/elastic-certificates.p12
    http:
      ssl:
        enabled: true
        keystore:
          path: certs/http.p12
Use the master and data roles on the first three nodes, and the data role only on the remaining nodes.
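For a data-only node, the node section of the configuration above shrinks to something like this (the node name is just an example):

```yaml
node:
  name: elastic-4
  roles: [ data ]
```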
Start service
Start the service on every node.
$ sudo systemctl enable --now elasticsearch.service
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /lib/systemd/system/elasticsearch.service.
Store the http CA certificate locally and update the system CA list.
$ sudo openssl pkcs12 -in /etc/elasticsearch/certs/elastic-http-ca.p12 -nodes -passin pass:************ | sed -n -e "/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p" | sudo tee /etc/ssl/certs/elastic-cluster.pem
$ sudo update-ca-certificates --fresh
Clearing symlinks in /etc/ssl/certs... done.
Updating certificates in /etc/ssl/certs...
rehash: warning: skipping ca-certificates.crt, it does not contain exactly one certificate or CRL
137 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d... done.
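The sed filter in the command above simply keeps the certificate blocks and drops everything else openssl prints, including the private key. You can observe the effect locally with throwaway key material (not your cluster CA):

```shell
set -e
tmp=$(mktemp -d)
# Create a throwaway self-signed certificate (stands in for the http CA).
openssl req -x509 -newkey rsa:2048 -days 1 -nodes \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -subj "/CN=demo" 2>/dev/null
# "openssl pkcs12 -nodes" would print headers, the key, and the certificate;
# simulate that stream and keep only the certificate block, as above.
cat "$tmp/key.pem" "$tmp/cert.pem" |
  sed -n -e "/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p" \
  > "$tmp/only-cert.pem"
# The result contains the certificate but no private key.
grep -c "BEGIN CERTIFICATE" "$tmp/only-cert.pem"
rm -r "$tmp"
```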
Verify that the service has started.
$ curl -u elastic:*********** https://192.168.8.153:9200
{
  "name" : "elastic-1",
  "cluster_name" : "elasticsearch-cluster",
  "cluster_uuid" : "K884_vJjQ72580M3uMGsAg",
  "version" : {
    "number" : "8.8.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "98e1271edf932a480e4262a471281f1ee295ce6b",
    "build_date" : "2023-06-26T05:16:16.196344851Z",
    "build_snapshot" : false,
    "lucene_version" : "9.6.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Inspect the cluster nodes.
$ curl -u elastic:*********** https://192.168.8.153:9200/_cat/nodes?v
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.8.166           49          96  10    0.35    0.38     0.16 d         -      elastic-5
192.168.8.159           16          95   3    0.33    0.43     0.28 dm        -      elastic-2
192.168.8.163           33          96  13    0.47    0.49     0.20 dm        *      elastic-3
192.168.8.153           16          95  14    0.40    0.51     0.23 dm        -      elastic-1
192.168.8.165           29          96  13    0.37    0.44     0.19 d         -      elastic-4
Create first user
I will skip file-based authentication, as it requires performing operations on each node separately.
Use native authentication to create the first user.
$ curl -X POST -u elastic:*********** "https://192.168.8.153:9200/_security/user/milosz?pretty" \
       -H 'Content-Type: application/json' \
       -d '{ "password" : "***********", "roles" : [ "superuser" ] }'
{ "created" : true }
Perform sample API requests.
$ curl -u milosz:*********** https://192.168.8.159:9200/_cluster/health?pretty
{
  "cluster_name" : "elasticsearch-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 4,
  "active_shards" : 8,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
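For scripted health checks, jq makes the response easy to consume. A self-contained sketch, using a trimmed copy of the health document above in place of the live curl call:

```shell
# Trimmed sample of the cluster health document shown above.
health='{"cluster_name":"elasticsearch-cluster","status":"green","number_of_nodes":5}'
# On a live cluster, pipe curl output instead:
#   curl -s -u milosz:*** https://192.168.8.153:9200/_cluster/health | jq -r '.status'
echo "$health" | jq -r '.status'
```

This prints green when the cluster is healthy, which is convenient for monitoring scripts.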
$ curl -s -u milosz:*********** https://192.168.8.163:9200/_security/role?pretty | jq --raw-output 'keys[] as $k | "\($k)"'
apm_system
apm_user
beats_admin
beats_system
data_frame_transforms_admin
data_frame_transforms_user
editor
enrich_user
ingest_admin
kibana_admin
kibana_system
kibana_user
logstash_admin
logstash_system
machine_learning_admin
machine_learning_user
monitoring_user
remote_monitoring_agent
remote_monitoring_collector
reporting_user
rollup_admin
rollup_user
snapshot_user
superuser
transform_admin
transform_user
transport_client
viewer
watcher_admin
watcher_user
Configure metrics on each node
Install metricbeat.
$ sudo apt install metricbeat
Alter the password for the remote_monitoring_user user.
$ curl -X POST -u milosz:*********** "https://192.168.8.153:9200/_security/user/remote_monitoring_user/_password?pretty" \
       -H 'Content-Type: application/json' \
       -d '{ "password" : "***********" }'
Enable the elasticsearch-xpack module.
$ sudo metricbeat modules enable elasticsearch-xpack
Enabled elasticsearch-xpack
Configure module.
$ sudo tee /etc/metricbeat/modules.d/elasticsearch-xpack.yml <<EOF
# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/8.8/metricbeat-module-elasticsearch.html

- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["https://192.168.8.153:9200"]
  username: "remote_monitoring_user"
  password: "***********"
EOF
Edit the /etc/metricbeat/metricbeat.yml configuration file to alter the metrics output.
...
output.elasticsearch:
  hosts: ["https://192.168.8.153:9200"] ## Monitoring cluster
  protocol: "https"
  username: "remote_monitoring_user"
  password: "***********"
...
Enable metricbeat.
$ sudo systemctl enable --now metricbeat
Synchronizing state of metricbeat.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable metricbeat
Created symlink /etc/systemd/system/multi-user.target.wants/metricbeat.service → /lib/systemd/system/metricbeat.service.
Kibana
Install kibana on any node.
$ sudo apt install kibana
Alter the password for the kibana_system user.
$ curl -X POST -u milosz:*********** "https://192.168.8.153:9200/_security/user/kibana_system/_password?pretty" \
       -H 'Content-Type: application/json' \
       -d '{ "password" : "***********" }'
Inspect the default Kibana configuration file.
$ cat /etc/kibana/kibana.yml
# For more configuration options see the configuration guide for Kibana in # https://www.elastic.co/guide/index.html # =================== System: Kibana Server =================== # Kibana is served by a back end server. This setting specifies the port to use. #server.port: 5601 # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values. # The default is 'localhost', which usually means remote machines will not be able to connect. # To allow connections from remote users, set this parameter to a non-loopback address. #server.host: "localhost" # Enables you to specify a path to mount Kibana at if you are running behind a proxy. # Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath # from requests it receives, and to prevent a deprecation warning at startup. # This setting cannot end in a slash. #server.basePath: "" # Specifies whether Kibana should rewrite requests that are prefixed with # `server.basePath` or require that they are rewritten by your reverse proxy. # Defaults to `false`. #server.rewriteBasePath: false # Specifies the public URL at which Kibana is available for end users. If # `server.basePath` is configured this URL should end with the same basePath. #server.publicBaseUrl: "" # The maximum payload size in bytes for incoming server requests. #server.maxPayload: 1048576 # The Kibana server's name. This is used for display purposes. #server.name: "your-hostname" # =================== System: Kibana Server (Optional) =================== # Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively. # These settings enable SSL for outgoing requests from the Kibana server to the browser. #server.ssl.enabled: false #server.ssl.certificate: /path/to/your/server.crt #server.ssl.key: /path/to/your/server.key # =================== System: Elasticsearch =================== # The URLs of the Elasticsearch instances to use for all your queries. 
#elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster's `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000
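The packaged /etc/kibana/kibana.yml shown above is almost entirely comments. To see only the settings that are actually in effect, a small grep sketch (assuming the default file location) can help:

```shell
# Print only the active (non-comment, non-empty) lines of kibana.yml.
grep -vE '^[[:space:]]*(#|$)' /etc/kibana/kibana.yml \
  || echo "no active settings found"
```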
Create minimal configuration.
$ sudo tee /etc/kibana/kibana.yml <<EOF
server.host: "192.168.8.153"
elasticsearch.hosts: ["https://192.168.8.153:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "***********"
elasticsearch.ssl.certificateAuthorities: [ "/etc/ssl/certs/elastic-cluster.pem" ]
elasticsearch.ssl.verificationMode: full
monitoring.ui.ccs.enabled: false
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
pid.file: /run/kibana/kibana.pid
EOF
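Before starting Kibana, it can be worth confirming that Elasticsearch is reachable with the credentials and CA bundle used in the configuration above. A hedged sketch — `changeme` is a placeholder for the real kibana_system password:

```shell
# Check TLS connectivity to Elasticsearch using the same CA bundle as kibana.yml.
# "changeme" is a placeholder password - substitute the real kibana_system password.
curl --fail --silent \
     --cacert /etc/ssl/certs/elastic-cluster.pem \
     -u kibana_system:changeme \
     https://192.168.8.153:9200/ \
  || echo "Elasticsearch is not reachable yet"
```

A successful call returns the cluster banner JSON; anything else prints the fallback message instead of aborting.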
Start service.
$ sudo systemctl enable --now kibana
Created symlink /etc/systemd/system/multi-user.target.wants/kibana.service → /lib/systemd/system/kibana.service.
Wait a minute for the setup process to complete, then open the configured address http://192.168.8.153:5601.
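Once the service is up, the Kibana status API can confirm readiness without opening a browser. A small check, assuming the same address as above and the jq package installed at the beginning of this article:

```shell
# Query the Kibana status API; jq extracts the overall health level,
# which should read "available" once startup has finished.
curl -s http://192.168.8.153:5601/api/status \
  | jq -r '.status.overall.level' \
  || echo "Kibana is not responding yet"
```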

Additional notes
This article takes some shortcuts to cover as much ground as possible while staying relatively short. For example, Metricbeat and Kibana would benefit from additional localhost and 127.0.0.1 entries inside the certificates. See the Elasticsearch Guide for more details.
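To check which names a certificate already covers, its Subject Alternative Name extension can be printed with openssl. The certificate path below is a hypothetical example — adjust it to your deployment:

```shell
# Print the Subject Alternative Name entries of a certificate.
# /etc/elasticsearch/certs/node.crt is an example path, not from this setup.
openssl x509 -in /etc/elasticsearch/certs/node.crt -noout -ext subjectAltName \
  || echo "certificate not found at the example path"
```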