Securing Elasticsearch and Kibana with Search Guard for free

Note: This is a guest post by Jochen Kressin, the CTO of floragunn GmbH, the makers of Search Guard, an open-source X-Pack Security alternative.

In this article, we show you how to secure Elasticsearch and Kibana for free using the Community edition of Search Guard. We start with a vanilla Elasticsearch and Kibana setup, install and configure Search Guard for Elasticsearch, and use the Search Guard Kibana plugin to add session management capabilities to Kibana.

Prerequisites

As a prerequisite, install the latest versions of Elasticsearch and Kibana. At the time of writing, this is 5.4.0.

Installing Elasticsearch

https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html

Installing Kibana

https://www.elastic.co/guide/en/kibana/current/setup.html

Installing Search Guard

Search Guard is an Elasticsearch plugin and can therefore be installed using the standard Elasticsearch plugin installation procedure.

First, figure out which version of Search Guard you need for your Elasticsearch installation. Due to compatibility checks in Elasticsearch, the version of Search Guard must match the version of Elasticsearch exactly. So if you run Elasticsearch 5.4.0, you need to install Search Guard 5.4.0. Other versions will not work. If in doubt, you can always refer to our version matrix, which lists all available versions of Search Guard. In this tutorial, we use Elasticsearch and Kibana 5.4.0.

Then, stop all running nodes, change to the Elasticsearch installation directory, and install Search Guard:

bin/elasticsearch-plugin install -b com.floragunn:search-guard-5:5.4.0-12

The plugin will be downloaded from Maven and installed automatically. Once the installation is complete, you will find a new folder plugins/search-guard-5 in your Elasticsearch installation directory.
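You can confirm the installation with the standard Elasticsearch plugin listing command (a sketch; run it from the Elasticsearch installation directory):

```shell
# List all installed Elasticsearch plugins; a successful install
# shows search-guard-5 in the output
bin/elasticsearch-plugin list
```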

Configuring TLS

Search Guard makes heavy use of TLS to encrypt and secure the complete traffic in your cluster. In order for TLS to work, you need to generate and install TLS certificates on all nodes. This makes sure that no one can sniff or tamper with your data, and that only trusted nodes can join the cluster.

There are several ways of generating the necessary certificates, but for this quickstart tutorial we will use the demo installation script that ships with Search Guard. For other ways of generating certificates, please refer to the section “Configuring Search Guard SSL” of the official Search Guard documentation.

In order to execute the demo installation script, change to the directory

<ES installation directory>/plugins/search-guard-5/tools

and execute the script

install_demo_configuration.sh

Depending on which platform you are on, you may need to set execution permissions first. The script will generate the TLS certificates and add the corresponding configuration settings to the elasticsearch.yml file. Once this is finished, you can start your nodes again.
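On Linux or macOS, the steps above can be sketched as follows (assuming ES_HOME points at your Elasticsearch installation directory):

```shell
# Change into the Search Guard tools directory
cd "$ES_HOME/plugins/search-guard-5/tools"

# The script may not be executable after download, so set the permission first
chmod +x install_demo_configuration.sh

# Generate the demo TLS certificates and append the corresponding
# settings to config/elasticsearch.yml
./install_demo_configuration.sh
```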

Initializing Search Guard

After your nodes are up, you need to initialize Search Guard. In this tutorial, we will use the sample configuration files that ship with Search Guard. They already contain the correct users, roles and permissions for the Kibana integration.

The configuration files are located in this directory:

<ES installation directory>/plugins/search-guard-5/sgconfig

In order to upload this configuration to Search Guard, you can either use the sgadmin command line tool, or the REST management API. When using the sgadmin command line tool, you need to specify various parameters, such as the path to your certificates or the name of your cluster. You can read all about sgadmin in the official documentation.

For our quick start tutorial, we will use the demo sgadmin script, which already contains all required parameters. Change to:

<ES installation directory>/plugins/search-guard-5/tools

and execute the script

sgadmin_demo.sh

As with the installation script, you might need to set execution permissions first. After the script has been executed, Search Guard is initialized and your cluster is fully secured by TLS.

You can verify that everything works as expected by visiting the following URL with a browser:

https://localhost:9200/_searchguard/authinfo

Since we are using self-signed certificates, ignore the browser warning and accept the root certificate that Search Guard is using; the exact warning message varies from browser to browser. Note that because TLS is now enabled on the REST layer, you have to use https when talking to Elasticsearch. Connecting over unsecured http will no longer work.

When prompted for a username and password, use admin/admin. This user has full access to the cluster and all indices. You should see information about the currently logged-in user in JSON format.
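If you prefer the command line, the same check can be done with curl (the -k flag skips certificate verification, which is acceptable here only because the demo setup uses self-signed certificates):

```shell
# Query the Search Guard authinfo endpoint over HTTPS as the demo admin user.
# -k: accept the self-signed demo certificate
# -u: send Basic Auth credentials
curl -k -u admin:admin https://localhost:9200/_searchguard/authinfo
```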

Congratulations, Search Guard is now installed and configured properly.

If you want to learn more about the structure and contents of the configuration files, refer to the Search Guard configuration chapter of the documentation. After you make changes to the configuration, simply execute sgadmin_demo.sh again for them to take effect.

Adding Kibana to the mix

As with Elasticsearch, Kibana does not provide any security or session management out of the box. As the first step, we will configure Kibana so that it is able to talk to our Search Guard secured cluster. In the second step, we will install the Search Guard Kibana plugin which adds session management capabilities.

Configuring Kibana

For Kibana to work with a Search Guard secured cluster, we need to make some adjustments in the kibana.yml configuration file. First, change the default Elasticsearch URL from http to https:

elasticsearch.url: "https://localhost:9200"

And since we are using self-signed certificates in this tutorial, we need to tell Kibana to accept them explicitly by disabling certificate verification:

elasticsearch.ssl.verificationMode: none

This is, of course, not recommended for production. In production you would install and configure your root CA in Kibana instead, but that is out of scope for this tutorial.

Under the hood, Kibana uses a service user to manage the Kibana index, check the cluster health, and so on. This service user has to be present in Search Guard, and it needs the correct permissions to execute all required actions. The demo configuration already contains such a service user; its username and password are both “kibanaserver”. To tell Kibana to use these credentials when talking to Elasticsearch, add the following lines to kibana.yml:

elasticsearch.username: "kibanaserver"
elasticsearch.password: "kibanaserver"

These are the minimal required changes. You can now start Kibana again and verify that it connects to Elasticsearch correctly.
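Putting it together, the kibana.yml additions for this tutorial look like this (self-signed demo certificates assumed; do not disable certificate verification in production):

```yaml
# Talk to Elasticsearch over HTTPS, since Search Guard now enforces TLS
elasticsearch.url: "https://localhost:9200"

# Demo setup only: accept the self-signed certificates
elasticsearch.ssl.verificationMode: none

# Credentials of the Kibana service user from the Search Guard demo configuration
elasticsearch.username: "kibanaserver"
elasticsearch.password: "kibanaserver"
```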

Installing the Kibana plugin

If you now try to access Kibana, a Basic Authentication dialog will pop up. This works, but we would like a proper login dialog and the ability to log out again without closing the browser. This is where the Kibana plugin comes into play.

As with Search Guard, you need to install the plugin that matches your Kibana version exactly. In our case, this is version 5.4.0-2. You can find all available versions on the GitHub release page:

https://github.com/floragunncom/search-guard-kibana-plugin/releases

Stop Kibana and download the zip file of the plugin to a directory on your machine:

https://github.com/floragunncom/search-guard-kibana-plugin/releases/download/v5.4.0/searchguard-kibana-5.4.0-2.zip

After that, install it like any other Kibana plugin: Change to the Kibana installation directory and execute:

bin/kibana-plugin install file:///path/to/searchguard-kibana-5.4.0-2.zip
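The download and installation steps together can be sketched as follows (run from the Kibana installation directory; /tmp is an arbitrary download location, and the version in the file name must match your Kibana version):

```shell
# Download the plugin release that matches Kibana 5.4.0
# -L: follow GitHub's redirect; -o: save to a local file
curl -L -o /tmp/searchguard-kibana-5.4.0-2.zip \
  https://github.com/floragunncom/search-guard-kibana-plugin/releases/download/v5.4.0/searchguard-kibana-5.4.0-2.zip

# Install from the local file; Kibana's optimizer runs afterwards
bin/kibana-plugin install file:///tmp/searchguard-kibana-5.4.0-2.zip
```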

This will install the plugin, and run the Kibana optimizer afterwards. This may take a couple of minutes. After that, restart Kibana and access it via

http://localhost:5601

You will be redirected to the login page and prompted for a username and password.

Use the demo Kibana user to log in; the username and password are both “kibanaro”. This user is mapped to the role sg_kibana and has access to all indices.

That’s it. Both Elasticsearch and Kibana are secured with Search Guard.

Where to go next?

Deep-dive into Search Guard by reading the official documentation:

https://github.com/floragunncom/search-guard-docs

Learn more about integrating Kibana and Search Guard:

https://github.com/floragunncom/search-guard-docs/blob/master/kibana.md

Add Kibana multi-tenancy to separate dashboards and visualizations:

https://github.com/floragunncom/search-guard-docs/blob/master/multitenancy.md

Ask questions on the Search Guard Google Group:

https://groups.google.com/forum/#!forum/search-guard

Read about our fair, flexible and affordable enterprise license model:

https://floragunn.com/searchguard/searchguard-license-support/

Follow us on Twitter to stay up-to-date with new releases and features:

https://twitter.com/searchguard

 
