
How to Ship Kibana Server Logs to Elasticsearch

When dealing with log centralization in your organization you have to start somewhere. Oftentimes people start by collecting logs for the most crucial pieces of software, and frequently they ship them to their own in-house Elasticsearch-based solution (aka the ELK stack) or to one of the SaaS solutions available on the market, like our Logsene. What we regularly see in our logging consulting practice and with our Logsene users is that it’s just a matter of time before everyone in the organization realizes how useful it is to have centralized logs and starts sending logs from every crucial software/IT — and business — component in the organization to the log centralization system.

Despite Kibana being frequently used for log analysis and reporting, it is one of those pieces whose own logs are often left behind. Kibana has not been a simple set of static JavaScript files since version 4; it is a Node.js application, and as such it produces its own logs, too. They can provide insight when something is not right with Kibana, so why not put them in the same place as all the other logs? Let’s see how to do that.

For the rest of the post I’ll be using Kibana 5.1.1 along with Elasticsearch 5.1.1 and Filebeat 5.1.1.

Default Kibana Log Structure

So what do Kibana logs look like? With the default setup the logs look as follows:

  log   [20:53:02.732] [info][status][plugin:kibana@5.1.1] Status changed from uninitialized to green - Ready

  log   [20:53:02.782] [info][status][plugin:elasticsearch@5.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch

  log   [20:53:02.801] [info][status][plugin:console@5.1.1] Status changed from uninitialized to green - Ready

  log   [20:53:03.006] [info][status][plugin:timelion@5.1.1] Status changed from uninitialized to green - Ready

  log   [20:53:03.010] [info][listening] Server running at http://localhost:5601

  log   [20:53:03.011] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow

  log   [20:53:08.028] [info][status][plugin:elasticsearch@5.1.1] Status changed from yellow to yellow - No existing Kibana index found

  log   [20:53:08.089] [info][status][plugin:elasticsearch@5.1.1] Status changed from yellow to green - Kibana index ready

  log   [20:53:08.090] [info][status][ui settings] Status changed from yellow to green - Ready

They are in plain text format, so to send them to Elasticsearch we could use a pipeline similar to the following one:

[Figure: pipeline for shipping plain-text Kibana logs, with Logstash sitting between the log shipper and Elasticsearch]
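Concretely, the Logstash piece of that pipeline could look roughly like the minimal sketch below. The Beats port, grok pattern, and output settings are illustrative assumptions rather than a configuration from the original setup; Filebeat would point its output.logstash at the same port.

input {
  beats {
    # Filebeat ships the raw plain-text Kibana log lines to this port
    port => 5044
  }
}

filter {
  grok {
    # pull out the time, log level, and the rest of the line from entries like:
    #   log   [20:53:02.732] [info][status][...] Status changed from ...
    match => { "message" => "^\s*log\s+\[%{TIME:time}\]\s+\[%{WORD:level}\]%{GREEDYDATA:log_message}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}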

With that approach we need Logstash in the middle to parse the plain-text logs and give them structure. Keep in mind that Logstash has a heavy memory footprint and isn’t the fastest log shipper around. There are several lighter and faster Logstash alternatives to consider, depending on where you want your data to be parsed. For example, you could use a log shipper that is itself able to parse data, like Logagent or rsyslog. If we stick with Filebeat and change the Kibana logging format to JSON, we can throw away Logstash and simplify our pipeline:

[Figure: the simplified pipeline, with Filebeat shipping JSON Kibana logs directly to Elasticsearch]

Luckily, we can make a slight change in the Kibana configuration and not worry about non-JSON log files anymore.

Writing Kibana Logs as JSON to a File

You may have noticed that, by default, the logs are displayed on standard output in plain text format. What’s more, they are not saved to a file. This is not something that we like: we would like to have the logs saved to a file, so we can either parse them or send them directly to a destination of our choice.

To do that we need to uncomment the logging.dest property in the config/kibana.yml configuration file and set it to the destination file for our logs. Let’s assume that we want to put the logs in the /var/log/kibana/kibana.log file, so the relevant configuration should look as follows:

logging.dest: /var/log/kibana/kibana.log
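If that directory does not exist yet, create it first and make sure the user running Kibana can write to it. For example, assuming Kibana runs as a dedicated kibana user (adjust the user and group to your setup):

# create the log directory and hand it over to the kibana user
sudo mkdir -p /var/log/kibana
sudo chown kibana:kibana /var/log/kibana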

Once the change is made and Kibana is restarted, we will see that instead of writing to the console, it writes the logs to the specified file. What’s more, the data in the log file is no longer in plain text format, but in JSON:

{"type":"log","@timestamp":"2017-01-13T21:46:07Z","tags":["status","plugin:[email protected]","info"],"pid":83295,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}

{"type":"log","@timestamp":"2017-01-13T21:46:07Z","tags":["status","plugin:[email protected]","info"],"pid":83295,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}

{"type":"log","@timestamp":"2017-01-13T21:46:08Z","tags":["status","plugin:[email protected]","info"],"pid":83295,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}

{"type":"log","@timestamp":"2017-01-13T21:46:08Z","tags":["status","plugin:[email protected]","info"],"pid":83295,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

{"type":"log","@timestamp":"2017-01-13T21:46:08Z","tags":["status","plugin:[email protected]","info"],"pid":83295,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}

{"type":"log","@timestamp":"2017-01-13T21:46:08Z","tags":["listening","info"],"pid":83295,"message":"Server running at http://localhost:5601"}

{"type":"log","@timestamp":"2017-01-13T21:46:08Z","tags":["status","ui settings","info"],"pid":83295,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}

Way better for log shipping than the console and plain-text output, right? Well, not really if you want to keep eyeballing these logs in a terminal, but if you are running Kibana, the chances are you want to inspect logs via Kibana anyway. So now we have the logs going to a file, in JSON format. There is nothing else left to do but send them to Elasticsearch.
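That said, if you do still want to eyeball the JSON logs in a terminal from time to time, piping them through a JSON pretty-printer such as jq (assuming you have it installed) makes them readable again:

tail -f /var/log/kibana/kibana.log | jq .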

Sending JSON Formatted Kibana Logs to Elasticsearch

To send logs that are already structured as JSON and stored in a file, we just need Filebeat with an appropriate configuration specifying the input file and the Elasticsearch output. For example, I’m using the following configuration, stored in the filebeat-json.yml file:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/kibana/*.log

output.elasticsearch:
  hosts: ["localhost:9200"]

We simply take any file with the .log extension in the /var/log/kibana/ directory (our directory for Kibana logs) and send it to the Elasticsearch instance running locally. Once we run Filebeat using the following command we should see the data in Kibana:

./filebeat -c filebeat-json.yml
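One thing to be aware of: with this minimal configuration, Filebeat ships each JSON line as a single string in the message field. If you would rather have Filebeat decode the JSON into separate fields before indexing, Filebeat 5.x supports json options on the prospector; a sketch of what that could look like (the json.* options are an addition for illustration, not part of the configuration above):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/kibana/*.log
  # decode each line as JSON and put the parsed fields at the top level of the event
  json.keys_under_root: true
  # add an error key to the event if a line cannot be parsed as JSON
  json.add_error_key: true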

If we now go to Kibana and use the filebeat-* index pattern, we’ll see the data in the Discover tab:

[Figure: Kibana Discover tab showing the shipped Kibana logs under the filebeat-* index pattern]

Sending Kibana Logs to Logsene

If you don’t want to host your own Elasticsearch instance, you can send your Kibana logs to one of the SaaS services that understand the Elasticsearch API, for example our Logsene. This is super simple. Just create a free account if you don’t have one already and note your Logsene app token, which you can find in the Logsene UI. We will also modify our Filebeat configuration slightly and use the following one:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/kibana/*.log

output.elasticsearch:
  hosts: ["https://logsene-receiver.sematext.com:443"]
  protocol: https
  index: "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
  template.enabled: false

The key part of the above configuration is the output section. We point Filebeat to https://logsene-receiver.sematext.com:443 and set the protocol property to https because we want to use HTTPS, so that no one can sniff our traffic and see our logs. We also specify the index property, which should be set to the token of your Logsene app. Finally, we disable template sending by setting the template.enabled property to false. After starting Filebeat you will see the data in Logsene:

[Figure: the Kibana logs arriving in Logsene]

Filebeat Alternative

Of course, Filebeat is not the only option for sending Kibana logs to Logsene or your own Elasticsearch. For example, you could also use Logagent, an open-source, lightweight log shipper. Doing that is very simple, even simpler than with Filebeat. We can just run the following command and our logs will be delivered to the Logsene app identified by the token that we provide:

cat kibana.log | logagent -i aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
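Instead of cat-ing the file once, you can also let Logagent follow the log file continuously; something along these lines should work, though the exact flags depend on your Logagent version, so treat this as a sketch:

# keep watching the Kibana log files and ship new entries as they appear
logagent --index aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee --glob '/var/log/kibana/*.log'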

You can also configure Logagent to work as a service.

As you can see, shipping Kibana logs, whether they are structured or unstructured, is fairly simple. However, the process could be even simpler! Typically, the most complex part of an ELK stack is the “E” — Elasticsearch. Thus, if you don’t feel like dealing with securing Elasticsearch, Elasticsearch tuning, scaling, and other forms of maintenance, you may want to consider ELK as a service, such as our Logsene. Why? By using Logsene you’ll get a secure, fully managed log management infrastructure with an Elasticsearch API and built-in Kibana — without having to invest in and deal with the infrastructure or become an Elasticsearch expert.

Moreover, with Sematext Cloud you can correlate your logs and metrics with a single tool enabling you to identify, diagnose, and fix issues in your environment without context-switching between multiple tools.


