Get AWS session
```shell
eval $(maws li "Team 10")
```

```
retrieved credentials writing to profile 273854932432_Mesosphere-PowerUser
```
Connect to cluster
```shell
export CLUSTER_URL="http://mpereira-elasticl-1d1egavuxe297-997777397.eu-central-1.elb.amazonaws.com/"
dcos cluster setup --insecure --username=bootstrapuser --password=deleteme "${CLUSTER_URL}"
```
Install the Enterprise DC/OS CLI if you don’t have it (for dcos security commands)
```shell
dcos package install --yes dcos-enterprise-cli
```
```
Extracting "dcos-core-cli"...
Warning: The attached cluster is running DC/OS 1.13 but this CLI only supports DC/OS 1.12.
certified-services
Installing CLI subcommand for package [dcos-enterprise-cli] version [1.12-patch.0]
New commands available: dcos backup, dcos security, dcos license
```
Create keypair
```shell
dcos security org service-accounts keypair elastic.private.pem elastic.public.pem
```
Create service account
The service account name configured for your service (service.service_account in config.json) must match the service account name specified as this command’s last parameter.
```shell
service_account_name="elastic"
```
```shell
dcos security org service-accounts create -p elastic.public.pem -d "Elastic service account" "${service_account_name}"
```
Create secret
The second argument to this command should also match service.service_account in config.json.
```shell
secret_name="elastic-secret"
```
Additionally, when working with a strict-mode DC/OS cluster, one must add the --strict parameter to the dcos security secrets create-sa-secret command.
```shell
dcos security secrets create-sa-secret elastic.private.pem "${service_account_name}" "${secret_name}"
```
Add service account to superusers group
```shell
dcos security org groups add_user superusers "${service_account_name}"
```
From here on out, commands will depend on whether the Elastic service is configured with transport encryption enabled. If enabled, the Elasticsearch cluster will encrypt both communication between nodes and HTTP client requests. More details in the Elasticsearch documentation.
Choose a name for your Elastic service
```shell
service_name="/production/elastic"
```
Option 1) Without TLS
TODO
Option 2) With TLS
Install Elastic
```shell
dcos package install elastic --yes --options=<(echo "{
  \"service\": {
    \"name\": \"${service_name}\",
    \"service_account\": \"${service_account_name}\",
    \"service_account_secret\": \"${secret_name}\",
    \"virtual_network_enabled\": true,
    \"security\": {\"transport_encryption\": {\"enabled\": true}}
  },
  \"elasticsearch\": {\"xpack_security_enabled\": true}
}")
```
Wait for the install to COMPLETE
```shell
dcos elastic --name "${service_name}" plan status deploy
```
Check out service endpoints
```shell
dcos elastic --name="${service_name}" endpoints master-http
dcos elastic --name="${service_name}" endpoints data-http
dcos elastic --name="${service_name}" endpoints coordinator-http
```
Get master-0-node task ID and coordinator VIP endpoint
```shell
master_0_task_id="$(dcos elastic --name="${service_name}" pod info master-0 | jq -r '.[0].info.taskId.value')"
coordinator_vip="$(dcos elastic --name="${service_name}" endpoints coordinator-http | jq -r '.vip')"
```
Test access: cluster health
```shell
dcos task exec "${master_0_task_id}" \
  /opt/mesosphere/bin/curl -si \
  -H 'Content-type: application/json' \
  "https://${coordinator_vip}/_cluster/health?pretty"
```
Get license
Elasticsearch 6.3.2 installs with the basic license by default.
```shell
dcos task exec "${master_0_task_id}" \
  /opt/mesosphere/bin/curl -si \
  -H 'Content-type: application/json' \
  "https://${coordinator_vip}/_xpack/license?pretty"
```
Start trial license
Elasticsearch requires that a non-basic license is active for security features to be used. We’ll be setting up passwords for users, so we’ll start a trial license.
```shell
dcos task exec "${master_0_task_id}" \
  /opt/mesosphere/bin/curl -si \
  -XPOST \
  -H 'Content-type: application/json' \
  "https://${coordinator_vip}/_xpack/license/start_trial?acknowledge=true&pretty"
```
Get master-0-node endpoint
```shell
master_0_endpoint="$(dcos elastic --name="${service_name}" endpoints master-http | jq -r '.dns[0]')"
```
Set up passwords
This will invoke the elasticsearch-setup-passwords command in the task running the Elasticsearch master-0-node process.
```shell
dcos task exec "${master_0_task_id}" bash -c "
set -x
export JAVA_HOME=\$(ls -d \${MESOS_SANDBOX}/jdk*/jre/)
ELASTICSEARCH_PATH=\$(ls -d \${MESOS_SANDBOX}/elasticsearch-*/)
\${ELASTICSEARCH_PATH}/bin/elasticsearch-setup-passwords auto --batch --verbose --url https://${master_0_endpoint}
" | tee -a elasticsearch_setup_passwords_output.txt
```
```shell
elastic_password=$(grep 'PASSWORD elastic' elasticsearch_setup_passwords_output.txt | awk -F' = ' '{print $2}' | tail -n1)
kibana_password=$(grep 'PASSWORD kibana' elasticsearch_setup_passwords_output.txt | awk -F' = ' '{print $2}' | tail -n1)
```
Now that the Elasticsearch cluster is configured with authentication, health-check requests will need to have credentials. The command below configures the service to use credentials for health-checks.
```shell
dcos elastic --name "${service_name}" update start --options=<(echo "{
  \"elasticsearch\": {\"health_user_password\": \"${elastic_password}\"}
}")
```
Wait for the update to COMPLETE
```shell
dcos elastic --name "${service_name}" update status
```
Get master-0-node task ID again
After the update completes, a new master-0-node task will be running. Let’s get its task ID.
```shell
master_0_task_id="$(dcos elastic --name="${service_name}" pod info master-0 | jq -r '.[0].info.taskId.value')"
```
Test access with credentials
```shell
dcos task exec "${master_0_task_id}" \
  /opt/mesosphere/bin/curl -si \
  -u "elastic:${elastic_password}" \
  -H 'Content-type: application/json' \
  "https://${coordinator_vip}/_cluster/health?pretty"
```
Choose a name for your Kibana service
```shell
kibana_service_name="/production/kibana"
```
Install Kibana
```shell
dcos package install kibana --yes --options=<(echo "{
  \"service\": {
    \"name\": \"${kibana_service_name}\"
  },
  \"kibana\": {
    \"password\": \"${kibana_password}\",
    \"elasticsearch_tls\": true,
    \"elasticsearch_url\": \"https://${coordinator_vip}\",
    \"elasticsearch_xpack_security_enabled\": true
  }
}")
```
Wait for the Kibana server to be available
The number of healthy tasks should be 1.
```shell
dcos marathon app show "${kibana_service_name}" | jq '.tasksHealthy'
```
Add EdgeLB package repositories
```shell
dcos package repo add edgelb https://downloads.mesosphere.com/edgelb/v1.3.0/assets/stub-universe-edgelb.json
dcos package repo add edgelb-pool https://downloads.mesosphere.com/edgelb-pool/v1.3.0/assets/stub-universe-edgelb-pool.json
```
Install EdgeLB
```shell
dcos package install --yes edgelb
```
Wait for it to be available
The ping command below should eventually output a pong.
```shell
dcos edgelb --name edgelb ping
```
Install Kibana EdgeLB pool
This command will create an EdgeLB pool task running on one of your DC/OS cluster’s public agents, which allows one to access Kibana from outside the cluster network, given that the selected port on that agent machine is open. In this example we’ll expose Kibana through HTTP, on port 80. It will be accessible on http://$public_agent_ip_or_url:80.
Note that in this example:
- the pool name is kibana
- the frontend’s default backend is kibana-backend
- the backend name is kibana-backend
It is not a requirement that these match anything related to the actual Kibana service, so one could name them differently.
If a different pool name is given, dcos edgelb commands will require that one passes it as an argument. We’ll see an example in a bit.
The pool fields that actually map to the actual Kibana service are under haproxy.backends:
- rewriteHttp.path.fromPath should match the Kibana Marathon app service path
- services.endpoint.portName should match the Kibana Marathon app port name
- services.marathon.serviceID should match the Kibana service name
Let’s get the remaining configuration parameters that will map the EdgeLB pool to the actual Kibana service. We’ll use them in the pool configuration.
```shell
# Port on the public agent through which Kibana will be exposed.
kibana_proxy_port=80
kibana_service_path="/service/${kibana_service_name}"
kibana_port_name="$(dcos marathon app show "${kibana_service_name}" | jq -r '.portDefinitions[0].name')"
```
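The pool config below interpolates several shell variables, and an empty one would silently produce an invalid pool. A quick check (a sketch) before creating the pool:

```shell
# A sketch: list any unset/empty variables the pool config interpolates.
missing=""
for var in kibana_proxy_port kibana_service_path kibana_port_name kibana_service_name; do
  eval "val=\${${var}}"
  [ -n "${val}" ] || missing="${missing} ${var}"
done
[ -z "${missing}" ] || echo "unset variables:${missing}" >&2
```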
```shell
dcos edgelb create <(echo "{
  \"apiVersion\": \"V2\",
  \"role\": \"slave_public\",
  \"name\": \"kibana\",
  \"count\": 1,
  \"haproxy\": {
    \"stats\": {
      \"bindPort\": 9090
    },
    \"frontends\": [
      {
        \"bindPort\": ${kibana_proxy_port},
        \"linkBackend\": {
          \"defaultBackend\": \"kibana-backend\"
        },
        \"protocol\": \"HTTP\"
      }
    ],
    \"backends\": [
      {
        \"name\": \"kibana-backend\",
        \"protocol\": \"HTTP\",
        \"rewriteHttp\": {
          \"path\": {
            \"fromPath\": \"${kibana_service_path}\",
            \"toPath\": \"/\"
          }
        },
        \"services\": [
          {
            \"marathon\": {
              \"serviceID\": \"${kibana_service_name}\"
            },
            \"endpoint\": {
              \"portName\": \"${kibana_port_name}\"
            }
          }
        ]
      }
    ]
  }
}")
```
Wait for Kibana EdgeLB pool to be available
This will take a few seconds.
```shell
dcos edgelb status kibana
```
Check Kibana EdgeLB pool
```shell
dcos edgelb show kibana
```
At this point, the EdgeLB proxy is all set up for us to access
Kibana.
Get public agent IP address
This step requires that you have SSH access to the DC/OS cluster nodes. Make sure you do before proceeding. Here we’re using the kibana pool name in the dcos edgelb status command. If you named the pool something different, make sure to use it.
```shell
agent_private_ip="$(dcos edgelb status kibana --json | jq -r '.[0].status.containerStatus.networkInfos[0].ipAddresses[0].ipAddress')"
agent_public_ip="$(dcos node ssh --option StrictHostKeyChecking=no --option LogLevel=quiet --master-proxy --private-ip="${agent_private_ip}" "curl -s ifconfig.co")"
```
Authenticate with Kibana
Now that we have the public agent IP address where the EdgeLB Kibana pool task is running, we’re able to access Kibana. First, access http://$public_agent_ip_or_address/login to authenticate with the Kibana server. Use the credentials created in the “Set up passwords” step.
```shell
kibana_url="http://${agent_public_ip}"
```
```shell
kibana_login_url="${kibana_url}/login"
```
```shell
command -v xdg-open && xdg-open "${kibana_login_url}" || open "${kibana_login_url}"
```
Access Kibana
After authenticating, Kibana should be available at http://$public_agent_ip_or_url/service/kibana/app/kibana.
```shell
kibana_authenticated_url="${kibana_url}/service/${kibana_service_name}/app/kibana"
```
```shell
command -v xdg-open && xdg-open "${kibana_authenticated_url}" || open "${kibana_authenticated_url}"
```