Node.js server monitoring, part 2

Last time we mentioned two fundamental principles for monitoring any object:

1. The monitor should collect as much important information as possible, so that the health of the monitored object can be evaluated accurately.
2. The monitor should have little to no effect on the activity of the object.

Sure, these two principles work against each other in most cases, but with Node.js they can be reconciled quite nicely, because Node.js is built on an event-driven model rather than the traditional thread-based approach. This model allows many listeners to be registered for one event and processed almost independently of one another. To avoid even a small effect on the production server, the monitor was split into two parts: the first is a JavaScript module-plugin that listens to all server events and accumulates the necessary information, and the second is a Linux shell script that periodically queries the monitor-plugin over REST to collect, process, and send the information to the Monitis main server.
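To illustrate the idea, here is a minimal sketch of such an event-based plugin; the names and structure are hypothetical simplifications, not the actual Monitis module. Because it only registers extra listeners, the server's own request handling is left untouched:

var stats = { requests: 0, success: 0, totalTime: 0 };

function attach(server) {
    server.on('request', function (req, res) {  // observe every incoming request
        var start = Date.now();
        stats.requests++;
        res.on('finish', function () {          // fires once the response is fully sent
            stats.totalTime += Date.now() - start;
            if (res.statusCode >= 200 && res.statusCode < 300) {
                stats.success++;                // count 2xx responses
            }
        });
    });
}

exports.attach = attach;
exports.stats = stats;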

Normally, it is only necessary to add a couple of lines to your existing Node.js server code to activate the monitor-plugin:

var monitor = require('monitor'); // insert the monitor module-plugin
...
var server = ... // the definition of the current Node.js server
...
monitor.Monitor(server); // add the server to the monitor

Now the monitor will begin collecting and measuring data. The monitor-plugin has a simple embedded HTTP server that returns the accumulated data on request. In the current implementation, requests must match the following pattern:

http://127.0.0.1:10010/node_monitor?action=getdata&access_code=

where
10010 – the listen port of the monitor-plugin (configurable)
‘node_monitor’ – the pathname keyword
‘action=getdata’ – the command to fetch the collected data
‘access_code’ – a specially generated access code that changes for every session

Note that the monitor-plugin server (in the current implementation) listens on localhost only. This, together with the per-session access code, provides reasonable security while monitoring. More detailed information, along with the implementation code, can be found in the GitHub repository.
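The collector side then simply requests this URL on each polling cycle. The real collector is a Linux shell script, but for illustration the same request can be issued from Node itself; accessCode below is only a placeholder for the per-session code:

var http = require('http');

var accessCode = process.env.MONITOR_ACCESS_CODE || ''; // placeholder, generated per session

var url = 'http://127.0.0.1:10010/node_monitor' +
          '?action=getdata&access_code=' + accessCode;

http.get(url, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; }); // accumulate the response
    res.on('end', function () {
        console.log('monitor data:', body);              // hand off for processing/sending
    });
}).on('error', function (err) {
    console.error('monitor request failed:', err.message);
});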

Server monitoring metrics

There is a largely standard set of metrics that can be used to monitor the underlying health of any server.

* CPU Usage describes the level of utilisation of the system CPU(s) and is usually broken down into four states.
o IO Wait – the proportion of CPU cycles spent waiting for IO (disk or network) events. A large IO Wait proportion can indicate that your disks are causing a performance bottleneck.
o System – the proportion of CPU cycles spent performing kernel-level processing. Generally only a small proportion of CPU cycles is spent on system tasks, so spikes here could indicate a problem.
o User – the proportion of CPU cycles spent performing user-instigated processing. This is where the bulk of your CPU cycles should be consumed; it includes activities such as web serving, application execution, and every other process not owned by the kernel.
o Idle – the spare CPU capacity you have: all the cycles where the CPU is, quite literally, doing nothing.
* Load Average is a metric that indicates the level of load a server is under at a given point in time; here it is evaluated as the number of requests per second. (For reading the system load average itself, see the sketch after this list.)
* RAM usage on the server is usually broken down into the following parts.
o Free – the amount of unallocated RAM available. Linux systems tend to keep this as low as possible and do not free up physical RAM until it is requested by another process.
o Inactive – RAM that is in use for buffers and page caching but has not been used recently, so it will be reclaimed first for use by a running process.
o Active – RAM that has been used recently and will not be reclaimed unless there is insufficient Inactive RAM to claim from. On Linux systems this is generally the one to keep an eye on: sudden, rapid increases signal a memory-hungry process that will soon cause VM swapping to occur.
* Server uptime is a metric showing the elapsed time since the last reboot. A sudden drop in the uptime line indicates that the server was rebooted.
* Throughput – the amount of data traffic passing through the server's network interface is fundamentally important. It is usually broken down into inbound and outbound throughput and normally measured as an average over some period, in kbit/s or kbyte/s.
* Server response time is the duration from receiving a request to sending the response. It should not exceed a reasonable timeout (which usually depends on the complexity of processing) and should be as low as possible; usually the average and peak response times are evaluated over some period.
* The count of successfully processed requests is evaluated as the percentage of requests that received 2xx (success) responses during the observation period. This value should be as close to 100% as possible.
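Several of the system-level metrics above (load average, RAM, uptime) can be read directly in Node via the built-in os module; the sketch below, referenced in the list, shows one way to sample them:

var os = require('os');

// os.loadavg() returns the 1-, 5- and 15-minute load averages;
// os.freemem() and os.totalmem() are in bytes; os.uptime() is in seconds.
function sampleSystemMetrics() {
    return {
        loadAverage: os.loadavg(),
        freeMemMB: Math.round(os.freemem() / (1024 * 1024)),
        totalMemMB: Math.round(os.totalmem() / (1024 * 1024)),
        uptimeSec: os.uptime()
    };
}

console.log(sampleSystemMetrics());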

We used a subset of these metrics and added some statistics specific to our task (e.g. client platform, detailed information on response codes, etc.).
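As an illustration of such extra statistics, a listener could tally response codes by class and group clients by the User-Agent header; this is a hypothetical simplification, not the actual implementation:

var codes = {};      // e.g. { '2xx': 120, '4xx': 3 }
var platforms = {};  // e.g. { 'Windows': 80, 'other': 10 }

function record(req, res) {
    res.on('finish', function () {
        var cls = Math.floor(res.statusCode / 100) + 'xx';
        codes[cls] = (codes[cls] || 0) + 1;

        var ua = req.headers['user-agent'] || '';
        var platform = /windows/i.test(ua) ? 'Windows'
                     : /mac os/i.test(ua)  ? 'Mac'
                     : /linux/i.test(ua)   ? 'Linux'
                     : 'other';
        platforms[platform] = (platforms[platform] || 0) + 1;
    });
}

// Usage: server.on('request', record);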

Test results

The results below were obtained from a Node.js server, equipped with the monitor described above, running on Debian 6 x64. The server listens on the HTTP (81) and HTTPS (443) ports and is not under heavy load.

By double-clicking on a line you can view a specific part of the monitoring data.

The data can also be shown in a graphical view.
In conclusion, the monitoring system successfully tracked these metrics and found the Node.js server to be in a good state of health.
