In this Kubernetes tutorial, you will learn how to set up an EFK stack on Kubernetes with the X-Pack security features enabled, for log streaming, log analysis, and log monitoring.
When running multiple applications and services on a Kubernetes cluster, it makes sense to stream all of your cluster logs to one centralized logging infrastructure for easy log analysis. This helps you quickly sort through and analyse the heavy volume of log data produced by your Pods. By enabling the X-Pack security features, you can create and manage users, roles, views, and more. This makes it possible to grant permissions to view, edit, or create dashboards for only a subset of application logs (indices).
One popular centralized logging solution is the Elasticsearch, Fluentd, and Kibana (EFK) stack.
Elastic released some security features for free as part of the default distribution (Basic license) starting in Elastic Stack 6.8 and 7.1. This feature set includes the ability to encrypt network traffic using SSL, create and manage users, define roles that protect index- and cluster-level access, and fully secure Kibana.
Before we can begin with this guide, ensure you have the following things available to you:
Once you have these components set up, you are ready to begin with this guide. Let's go!
Let's begin with creating the necessary namespaces for each application.
Elasticsearch namespace:
FluentD namespace:
Kibana namespace:
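As a sketch, the three namespace manifests could look like this (shown here as a single multi-document file; one file per namespace works just as well):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: elasticsearch
---
apiVersion: v1
kind: Namespace
metadata:
  name: fluentd
---
apiVersion: v1
kind: Namespace
metadata:
  name: kibana
```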
Once we have created the yaml files, we can deploy the yaml files to the cluster:
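Assuming the manifests live in the current directory (the filenames are our own choice), deploying them could look like:

```shell
kubectl apply -f elasticsearch-namespace.yaml
kubectl apply -f fluentd-namespace.yaml
kubectl apply -f kibana-namespace.yaml
```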
The following output should appear:
We can validate if the namespaces are successfully created by running the following command:
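Listing the namespaces is a single command:

```shell
kubectl get namespaces
```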
The following output should appear:
First we need to deploy Elasticsearch. Elasticsearch is the core component of the stack; Fluentd and Kibana cannot work without it.
You can find more information about Elasticsearch by clicking this link: https://www.elastic.co/what-is/elasticsearch
Let's first start with creating the RBAC resources. We will give the Elasticsearch ServiceAccount sufficient permissions to explore the cluster and search for other Elasticsearch nodes.
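A minimal ServiceAccount manifest could look like this (the labels are our own convention):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch
  namespace: elasticsearch
  labels:
    app: elasticsearch
```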
We have our ServiceAccount; now we need to create the ClusterRole and bind it to the elasticsearch ServiceAccount.
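A sketch of the ClusterRole; the read-only access to endpoints, services, and namespaces is an assumption based on what Elasticsearch discovery typically needs:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
rules:
  - apiGroups: [""]
    resources: ["endpoints", "services", "namespaces"]
    verbs: ["get", "list", "watch"]
```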
Next, bind it to the ServiceAccount.
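The ClusterRoleBinding ties the role above to the elasticsearch ServiceAccount:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elasticsearch
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: elasticsearch
subjects:
  - kind: ServiceAccount
    name: elasticsearch
    namespace: elasticsearch
```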
For the next step, we need a Service resource in the cluster. We will create a headless Service named elasticsearch in the namespace elasticsearch. When we associate our Elasticsearch StatefulSet with this Service, the Service will return DNS A records (pod-name.service-name.namespace.svc.cluster.local) that point to the Elasticsearch Pods with the app: elasticsearch label. We will later configure these DNS records in our StatefulSet, so Elasticsearch will search for these nodes.
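A headless Service is one with clusterIP set to None; the ports below are the standard Elasticsearch REST and inter-node ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: elasticsearch
  labels:
    app: elasticsearch
spec:
  clusterIP: None   # headless: DNS returns the Pod IPs directly
  selector:
    app: elasticsearch
  ports:
    - name: rest
      port: 9200
    - name: inter-node
      port: 9300
```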
Let's deploy our yaml files to the cluster:
Now let's see if the elasticsearch Service is deployed successfully:
The following output should appear:
We have defined some environment variables in our StatefulSet resource. Some of the variables come from a ConfigMap and some from a Secret.
The Secret contains the password of the Elasticsearch admin user.
Run the following command to create a yaml file for the elasticsearch admin password:
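With Bitnami's Sealed Secrets, this could look as follows (the secret name, key, and filename are our own choices):

```shell
kubectl create secret generic elasticsearch-credentials \
  --namespace elasticsearch \
  --from-literal=ELASTIC_PASSWORD='<your-strong-password>' \
  --dry-run=client -o yaml \
  | kubeseal --format yaml > elasticsearch-credentials-sealedsecret.yaml
```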
If you don't have a SealedSecret controller, you can make a Secret resource by running the following command:
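The plain-Secret variant is the same command without the kubeseal step (again, names are our own):

```shell
kubectl create secret generic elasticsearch-credentials \
  --namespace elasticsearch \
  --from-literal=ELASTIC_PASSWORD='<your-strong-password>' \
  --dry-run=client -o yaml > elasticsearch-credentials-secret.yaml
```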
The command above will create the yaml file that needs to be deployed to the cluster.
The ConfigMap contains an elasticsearch.yml block with extra Elasticsearch configuration. We add our Service DNS records to discovery.seed_hosts, so Elasticsearch will search these hosts for additional nodes.
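A sketch of such a ConfigMap, assuming a three-replica StatefulSet named es-cluster (the Pod names in the seed hosts follow the pod-name.service-name.namespace.svc.cluster.local pattern):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-config
  namespace: elasticsearch
data:
  elasticsearch.yml: |
    cluster.name: "k8s-logs"
    network.host: 0.0.0.0
    discovery.seed_hosts:
      - es-cluster-0.elasticsearch.elasticsearch.svc.cluster.local
      - es-cluster-1.elasticsearch.elasticsearch.svc.cluster.local
      - es-cluster-2.elasticsearch.elasticsearch.svc.cluster.local
```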
This block will be mounted in the Pod at the /usr/share/elasticsearch/config/elasticsearch.yml location.
This volume mount is also declared in statefulset.yaml:
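As an excerpt (the volume name is our own; volumeMounts lives under the container spec, volumes under the Pod spec):

```yaml
# container spec
volumeMounts:
  - name: elasticsearch-config
    mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
    subPath: elasticsearch.yml
# pod spec
volumes:
  - name: elasticsearch-config
    configMap:
      name: elasticsearch-config
```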
Deploy all your yaml files and make sure that Elasticsearch is running without any problems. If Elasticsearch is not running properly, you can tail the container logs or describe the Pod/Statefulset.
The following output should appear:
Let's see if the elasticsearch StatefulSet is deployed successfully:
The following output should appear:
Elasticsearch will fail to start if the security feature is enabled without the security configuration in place!
Before we can enable the security feature, we need to generate certificates for the Elasticsearch nodes, so that the nodes can communicate securely with each other.
Run the following commands in the elasticsearch container.
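One way to do this, sketched here with the elasticsearch-certutil tool that ships with Elasticsearch (paths and names are our own; run the first commands inside the container, the last one from your local machine):

```shell
# inside the elasticsearch container: create a CA, then a node certificate in PEM format
bin/elasticsearch-certutil ca --pem --out /tmp/ca.zip
unzip /tmp/ca.zip -d /tmp
bin/elasticsearch-certutil cert --pem \
  --ca-cert /tmp/ca/ca.crt --ca-key /tmp/ca/ca.key \
  --out /tmp/certs.zip

# from your local machine: copy the archive out of the Pod
kubectl cp elasticsearch/es-cluster-0:/tmp/certs.zip ./certs.zip
```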
Once we have generated our certificate and copied it from the container to our local machine, we will create a SealedSecret from the PEM file. We will mount this PEM file to the container later.
If you don't have a SealedSecret controller, you can make a Secret resource by running the following command.
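Assuming you unpacked the certificate files locally, the Secret could be built like this (the secret name and filenames are our own choices):

```shell
kubectl create secret generic elasticsearch-certs \
  --namespace elasticsearch \
  --from-file=ca.crt --from-file=instance.crt --from-file=instance.key \
  --dry-run=client -o yaml > elasticsearch-certs-secret.yaml
```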
The command above will create the yaml file that needs to be deployed to the cluster.
Once you have successfully created and deployed your certificate to the cluster, you can enable the security features.
Add the following configuration to the elasticsearch.yml configuration in the ConfigMap.yaml file:
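These are the standard X-Pack settings; the certificate paths assume the Secret is mounted at /usr/share/elasticsearch/config/certs, which is our own choice:

```yaml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/certs/instance.key
xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/certs/instance.crt
xpack.security.transport.ssl.certificate_authorities: ["/usr/share/elasticsearch/config/certs/ca.crt"]
```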
Setting xpack.security.enabled to true enables the X-Pack security features, but this setting alone is not enough. We also need to mount and configure our newly generated certificates.
Mount the secret that contains the certificates to the StatefulSet:
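As an excerpt (assuming the Secret is named elasticsearch-certs, as in the earlier step):

```yaml
# container spec
volumeMounts:
  - name: elasticsearch-certs
    mountPath: /usr/share/elasticsearch/config/certs
    readOnly: true
# pod spec
volumes:
  - name: elasticsearch-certs
    secret:
      secretName: elasticsearch-certs
```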
Save and replace the ConfigMap and StatefulSet, then wait until all Pods have been terminated and started again.
If the Pods are not restarted automatically, scale the StatefulSet down and back up:
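Assuming the StatefulSet is named es-cluster and runs three replicas:

```shell
kubectl scale statefulset es-cluster --namespace elasticsearch --replicas=0
kubectl scale statefulset es-cluster --namespace elasticsearch --replicas=3
```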
Tail the logs and make sure that Elasticsearch is healthy. If it is not running properly, inspect the container logs or describe the Pod/StatefulSet.
Your ConfigMap.yaml and Statefulset.yaml files should look like this.
If you would like to add a readinessProbe, add the following to your Statefulset.yaml:
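A simple option is a TCP check on the inter-node port, since an HTTP check against port 9200 would need credentials once security is enabled (the timings below are our own suggestion):

```yaml
readinessProbe:
  tcpSocket:
    port: 9300
  initialDelaySeconds: 60
  periodSeconds: 10
```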
Now it is time to send container logs to Elasticsearch. We already created our fluentd namespace.
You can find more information about FluentD by clicking on the following link: https://www.fluentd.org/
Let's start again by creating the RBAC resources. We will give the Fluentd ServiceAccount sufficient permissions to explore the cluster and tail container logs.
Next, create the ClusterRole for the fluentd ServiceAccount.
Now bind it to the ServiceAccount.
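Put together, the Fluentd RBAC resources could look like this; the read access to pods and namespaces is what the Fluentd Kubernetes metadata filter typically needs:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: fluentd
```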
The FLUENT_ELASTICSEARCH_PASSWORD is the same password we defined in the Elasticsearch configuration (ELASTIC_PASSWORD).
In the ConfigMap.yaml file we define the environment variables, as well as some custom Fluentd configuration. This configuration will be mounted on the Fluentd container.
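A hypothetical minimal pipeline, tailing container logs and shipping them to the elasticsearch Service (all names and paths are our own assumptions; a real setup usually adds the Kubernetes metadata filter and TLS options):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: fluentd
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      <parse>
        @type json
      </parse>
    </source>
    <match **>
      @type elasticsearch
      host elasticsearch.elasticsearch.svc.cluster.local
      port 9200
      user elastic
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
    </match>
```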
Deploy all your yaml files and make sure that Fluentd is running without any problems. If Fluentd is not running properly, you can tail the container logs or describe the Pod/DaemonSet.
Now we have storage (Elasticsearch) and a data stream (Fluentd). Next, we need Kibana to view, edit, and explore the data.
The configuration of Kibana is almost the same as Elasticsearch and Fluentd. We define some environment variables to create our Kibana configuration.
Mainly these configurations are:
Kibana Username and Password
When we launch Kibana, the first thing it asks for is the username and password of the Kibana user. The username is kibana_system. The password can be retrieved from within the Elasticsearch container.
Exec to one of the Elasticsearch containers and run the following command:
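One way to do this, assuming Elasticsearch 7.x with the setup-passwords tool (the Pod name is our own assumption):

```shell
kubectl exec -it es-cluster-0 --namespace elasticsearch -- \
  bin/elasticsearch-setup-passwords auto --batch
```

This prints generated passwords for the built-in users, including the Kibana system user.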
Create and deploy a SealedSecret with the given password:
If you don't have a SealedSecret controller, you can make a Secret resource by running the following command.
The command above will create the yaml file that needs to be deployed to the cluster.
Now it is time to create the Service resource. Kibana will be accessible on port 5601, and the Service will use the app: kibana label to select its target Pods.
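A minimal sketch of such a Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kibana
  labels:
    app: kibana
spec:
  selector:
    app: kibana
  ports:
    - port: 5601
      targetPort: 5601
```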
Make sure that ELASTICSEARCH_PASSWORD is defined as an environment variable and reads from the right Secret.
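As an excerpt of the container spec (the Secret name and key are our own choices; ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD are the environment variables the official Kibana image maps to its Elasticsearch credentials):

```yaml
env:
  - name: ELASTICSEARCH_USERNAME
    value: kibana_system
  - name: ELASTICSEARCH_PASSWORD
    valueFrom:
      secretKeyRef:
        name: kibana-credentials
        key: password
```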
We also need to create a new Secret containing the same PEM certificate we generated in the first step, to mount on the container later.
If you don't have a SealedSecret controller, you can make a Secret resource by running the following command.
The command above will create the yaml file that needs to be deployed to the cluster.
Deploy all configurations in the repository and tail the logs of Kibana.
And the setup is done!
You can now add your indexes, configure users, configure roles, ... and monitor your logs!
In this Kubernetes tutorial we’ve demonstrated how to set up and configure Elasticsearch, Fluentd, and Kibana (EFK Stack) on a Kubernetes cluster.
Make your developers' lives easier by exposing container logs in one centralized logging infrastructure!
If you enjoyed this tutorial and want more info about these topics in the future, make sure to follow us on LinkedIn!