How to Ship Kibana Server Logs to Elasticsearch

When dealing with log centralization in your organization you have to start somewhere. People often start by collecting logs for the most crucial pieces of software, and frequently choose to ship them to an in-house Elasticsearch-based solution (aka the ELK stack) or to one of the SaaS solutions available on the market, like our Logsene. What we regularly see in our logging consulting practice and with our Logsene users is that it’s just a matter of time before everyone in the organization realizes how useful centralized logs are and starts sending logs from every crucial software, IT, and business component to the log centralization system.

Despite Kibana being frequently used for log analysis and reporting, Kibana is one of those pieces whose own logs are often left behind. Since version 4, Kibana is no longer a simple set of static JavaScript files; it is a Node.js application and, as such, it produces its own logs, too. They can provide insight when something is not right with Kibana, so why not put them in the same place as all the other logs? Let’s see how to do that.

For the rest of the post I’ll be using Kibana 5.1.1 along with Elasticsearch 5.1.1 and Filebeat 5.1.1.

Default Kibana Log Structure

So what do Kibana logs look like? With the default setup the logs look as follows:

  log   [20:53:02.732] [info][status][plugin:kibana@5.1.1] Status changed from uninitialized to green - Ready
  log   [20:53:02.782] [info][status][plugin:elasticsearch@5.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [20:53:02.801] [info][status][plugin:console@5.1.1] Status changed from uninitialized to green - Ready
  log   [20:53:03.006] [info][status][plugin:timelion@5.1.1] Status changed from uninitialized to green - Ready
  log   [20:53:03.010] [info][listening] Server running at http://localhost:5601
  log   [20:53:03.011] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow
  log   [20:53:08.028] [info][status][plugin:elasticsearch@5.1.1] Status changed from yellow to yellow - No existing Kibana index found
  log   [20:53:08.089] [info][status][plugin:elasticsearch@5.1.1] Status changed from yellow to green - Kibana index ready
  log   [20:53:08.090] [info][status][ui settings] Status changed from yellow to green - Ready

They are in plain text format, so to send them to Elasticsearch we could use a pipeline similar to the following one:

[Figure: pipeline for plain-text Kibana logs: Filebeat reads the log file, Logstash parses it, Elasticsearch stores it]

With that approach we need Logstash in the middle to parse the plain-text logs and give them structure. Keep in mind that Logstash has a heavy memory footprint and isn’t the fastest log shipper around. There are several lighter and faster Logstash alternatives to consider, depending on where you want your data to be parsed. For example, you could use a log shipper that is itself able to parse data, like Logagent or rsyslog. If we stick with Filebeat and change the Kibana logging format to JSON, we can throw away Logstash and simplify our pipeline:

[Figure: simplified pipeline for JSON Kibana logs: Filebeat reads the log file and sends it straight to Elasticsearch]

Luckily, we can make a slight change in the Kibana configuration and not worry about non-JSON log files anymore.

Writing Kibana Logs as JSON to a File

You may have noticed that, by default, the logs are written to standard output in plain text format. What’s more, they are not saved to a file. This is not something that we like – we would like to have the logs saved to a file, so we can either parse them or send them directly to a destination of our choice.

To do that we need to uncomment the logging.dest property in the config/kibana.yml configuration file and set the destination file for our logs. Let’s assume that we will put the logs in the /var/log/kibana/kibana.log file, so our configuration should look as follows:

logging.dest: /var/log/kibana/kibana.log
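While we’re in kibana.yml: Kibana 5.x exposes a few more logging knobs alongside logging.dest. Here is a minimal sketch, assuming the option names from the stock 5.x kibana.yml (only logging.dest is actually needed for what we’re doing here):

logging.dest: /var/log/kibana/kibana.log  # write logs to this file instead of stdout
logging.silent: false                     # true would suppress all logging output
logging.quiet: false                      # true would suppress everything except error messages
logging.verbose: false                    # true would log all events, including verbose debug info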

Once the change is done and we start Kibana we will see that instead of writing to the console, we have the logs in the specified file. What’s more, the data that is in the log file is no longer in plain text format, but in JSON:

{"type":"log","@timestamp":"2017-01-13T21:46:07Z","tags":["status","plugin:[email protected]","info"],"pid":83295,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}

{"type":"log","@timestamp":"2017-01-13T21:46:07Z","tags":["status","plugin:[email protected]","info"],"pid":83295,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}

{"type":"log","@timestamp":"2017-01-13T21:46:08Z","tags":["status","plugin:[email protected]","info"],"pid":83295,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}

{"type":"log","@timestamp":"2017-01-13T21:46:08Z","tags":["status","plugin:[email protected]","info"],"pid":83295,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

{"type":"log","@timestamp":"2017-01-13T21:46:08Z","tags":["status","plugin:[email protected]","info"],"pid":83295,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}

{"type":"log","@timestamp":"2017-01-13T21:46:08Z","tags":["listening","info"],"pid":83295,"message":"Server running at http://localhost:5601"}

{"type":"log","@timestamp":"2017-01-13T21:46:08Z","tags":["status","ui settings","info"],"pid":83295,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}

Way better for log shipping compared to the console and plain-text output, right? Well, not really if you want to keep eyeballing these logs in a terminal, but if you run Kibana, chances are you want to inspect logs with Kibana, too. So now we have the logs going to a file, and in JSON format. There is nothing left to do but send them to Elasticsearch.

Sending JSON Formatted Kibana Logs to Elasticsearch

To send logs that are already JSON structured and sitting in a file, we just need Filebeat with an appropriate configuration. We need to specify the input file and the Elasticsearch output. For example, I’m using the following configuration, which I stored in a file called filebeat-json.yml:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/kibana/*.log
output.elasticsearch:
  hosts: ["localhost:9200"]

We just take any file that ends with the log extension in the /var/log/kibana/ directory (our directory for Kibana logs) and send its lines to the Elasticsearch instance running locally. Once we run Filebeat using the following command, we should see the data in Kibana:

./filebeat -c filebeat-json.yml

If we now go to Kibana and use the filebeat-* index pattern, we’ll see the data in the Discover tab:

[Figure: Kibana Discover tab showing the shipped logs under the filebeat-* index pattern]
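One caveat worth mentioning: with the configuration above, Filebeat ships each JSON line as a raw string in the message field. If you would rather have the JSON keys decoded into separate event fields at shipping time, Filebeat 5.x offers per-prospector json decoding options. Here is a sketch of how that could look; the option names are taken from the Filebeat 5.x documentation as I recall it, so verify them against your version:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/kibana/*.log
  # decode each line as JSON and lift its keys to the top level of the event
  json.keys_under_root: true
  # if a line fails to decode, add an error key instead of silently shipping it as-is
  json.add_error_key: true
output.elasticsearch:
  hosts: ["localhost:9200"]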

Sending Kibana Logs to Logsene

If you don’t want to host your own Elasticsearch instance, you can send your Kibana logs to one of the SaaS services that understand the Elasticsearch API, for example our Logsene. This is super simple. Just create a free account if you don’t have one already and note your Logsene app token. We will also modify our Filebeat configuration slightly and use the following configuration:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/kibana/*.log
output.elasticsearch:
  hosts: ["logsene-receiver.sematext.com:443"]
  protocol: https
  index: "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
  template.enabled: false

The key point in the above configuration is the output section. We point Filebeat to logsene-receiver.sematext.com:443 and set the protocol property to https because we want to use HTTPS, so that no one can sniff our traffic and see our logs. We also specify the index property, which should be set to the token of your Logsene app. Finally, we disable template sending by setting the template.enabled property to false.
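Running Filebeat with this configuration works exactly the same as before. Assuming we saved it as filebeat-logsene.yml (a file name I’m making up for this example):

./filebeat -c filebeat-logsene.yml

After starting Filebeat you will see the data in Logsene: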

[Figure: Kibana logs arriving in Logsene]

Filebeat Alternative

Of course, Filebeat is not the only option for sending Kibana logs to Logsene or to your own Elasticsearch. For example, you could also use Logagent, an open-source, lightweight log shipper. Doing that is very simple, even simpler than with Filebeat. We can just run the following command and our logs will be delivered to the Logsene app identified by the token that we provide:

cat kibana.log | logagent -i aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee

You can also configure Logagent to work as a service.
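Note that the cat invocation above ships the file once and exits, which is handy for a quick test. For continuous shipping you can let Logagent tail the log files itself. A sketch, assuming the -g glob option from the Logagent documentation:

# tail Kibana logs continuously and ship them to the Logsene app identified by the token
logagent -i aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee -g '/var/log/kibana/*.log'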

As you can see, shipping Kibana logs, whether they are structured or unstructured, is fairly simple. However, the process could be even simpler! Typically, the most complex part of an ELK stack is the “E”: Elasticsearch. Thus, if you don’t feel like dealing with securing, tuning, and scaling Elasticsearch, and other forms of maintenance, you may want to consider ELK as a service, such as our Logsene. Why? By using Logsene you’ll get a secure, fully managed log management infrastructure with an Elasticsearch API and built-in Kibana, without having to invest in and deal with the infrastructure or become an Elasticsearch expert.

Moreover, with Sematext Cloud you can correlate your logs and metrics with a single tool, enabling you to identify, diagnose, and fix issues in your environment without context-switching between multiple tools.


