Effects Of Linux IO Scheduler On SSD Performance

This blog begins a series presenting results of testing performed at VeloBit to investigate the effects of different Linux IO scheduler algorithms on IO bandwidth when SSDs are operated in a simulated enterprise environment. The series is similar to the one we previously presented documenting IO bandwidth vs. read/write ratio. This first blog documents the test setup and procedures and presents the results for one tested device. In the following weeks, I will refer the reader back to this blog for setup and procedure information and simply present the results for another device, adding observations as appropriate.

Test Motivation and Procedure
For this group of tests, we wanted to focus on the effect of the selected Linux IO scheduler on the IO bandwidth performance of the SSD. An IO scheduler controls the way the Linux kernel commits reads and writes to disk (or SSD); each scheduler has particular advantages for particular IO workloads. The Linux kernel supports 4 different IO schedulers:
  • No-op scheduler (NOOP)
  • Completely fair queueing scheduler (cfq)
  • Deadline scheduler
  • Anticipatory scheduler

However, these schedulers were designed for HDDs. For SSD installations, an IO scheduler may not be required, since each SSD has its own scheduler inside its controller.
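For reference, the active scheduler can be inspected and changed per device through sysfs. Here is a minimal sketch (the sysfs path is the standard kernel layout; the device name "sda" is just an example, and the helper names are ours):

```python
from pathlib import Path

def parse_active(sysfs_text: str) -> str:
    """The kernel lists the available schedulers on one line, with the
    active one in square brackets, e.g. "noop anticipatory deadline [cfq]".
    Return the bracketed (active) entry."""
    start = sysfs_text.index("[") + 1
    return sysfs_text[start:sysfs_text.index("]")]

def active_scheduler(device: str) -> str:
    """Read the active IO scheduler for a block device such as "sda"."""
    path = Path(f"/sys/block/{device}/queue/scheduler")
    return parse_active(path.read_text())

def set_scheduler(device: str, name: str) -> None:
    """Switch schedulers by writing a name back to the same file
    (requires root)."""
    Path(f"/sys/block/{device}/queue/scheduler").write_text(name)
```

Equivalently, from a root shell: `cat /sys/block/sda/queue/scheduler` to inspect, and `echo noop > /sys/block/sda/queue/scheduler` to switch.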

We tested 5 different SSDs from 4 different manufacturers. Some are SATA devices and some are PCI-E devices. We used a Linux-based system running the Intel Open Storage Toolkit, a tool similar to Iometer that runs on Linux. The Open Storage Toolkit generates synthesized workloads with various sizes and queue lengths and also provides monitoring and IO tracing capabilities. We also used blktrace and blkparse, a pair of Linux tools that intercept all block-level requests. The idea was to observe IO bandwidth under operating conditions.

So for this test, we ran an identical test suite for each IO scheduler algorithm listed above and observed IO bandwidth. We used 6 different IO request sizes for each scheduler: 1 KB, 4 KB, 16 KB, 64 KB, 128 KB and 256 KB. We ran 4 different sets of these tests based on IO type: random read, sequential read, random write and sequential write.
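The full suite is the cross product of schedulers, request sizes and IO types, which can be sketched as follows (the names below are our labels, not the toolkit's):

```python
from itertools import product

SCHEDULERS = ["noop", "cfq", "deadline", "anticipatory"]
REQUEST_SIZES_KB = [1, 4, 16, 64, 128, 256]
IO_TYPES = ["random read", "sequential read", "random write", "sequential write"]

def test_matrix():
    """Every (scheduler, request size, IO type) combination in the suite."""
    return list(product(SCHEDULERS, REQUEST_SIZES_KB, IO_TYPES))

# 4 schedulers x 6 request sizes x 4 IO types = 96 runs per device.
```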

General Observations
Overall, for the 5 devices tested, we found that none is very sensitive to the choice of IO scheduler algorithm, and no consistent results were observed across all devices. However, there are some interesting observations to be made. The NOOP scheduler is generally believed to be the best choice for SSDs because SSDs do not depend on mechanical movement to access data. Such non-mechanical devices do not benefit from re-ordering multiple I/O requests (grouping together requests that are physically close together on the disk to reduce average seek time and the variability of I/O service time). These test results show that the NOOP scheduler is not necessarily the best for all devices.


Test SSD: 250 GB OCZ Z Drive PCI-E based SSD

Figure 1: IO Scheduler Bandwidth for OCZ Z Drive: a) random read b) sequential read


Figure 2: IO Scheduler Bandwidth for OCZ Z Drive: a) random write b) sequential write

Figures 1 and 2 show the results for the OCZ Z Drive. Relative to the other schedulers, the anticipatory scheduler performs poorly for the random read workload. The reason is that this scheduler tries to avoid disk seek operations by waiting a very short time for another read that is physically located near the current read request. This introduces overhead on an SSD, where even a short wait is unnecessary. The anticipatory scheduler does not affect write performance because it only executes the “wait” during read requests. For sequential reads, the anticipatory scheduler also has no negative effect, because the next read request it is waiting for is physically close to the current one, so its performance is similar to that of the other three schedulers.

Also note that for random workloads (both read and write), the NOOP scheduler performs the worst when the request size is big (128 KB and 256 KB). This is because NOOP puts all requests into a simple FIFO queue, which means that requests must be processed in order and new requests cannot be processed until the current one completes. This differs from the multi-queue schedulers (completely fair queueing, deadline and anticipatory), which can process requests in parallel.
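The head-of-line blocking behind a big request can be seen in a toy model (ours, purely for intuition; the kernel's actual dispatch logic is far more involved). With one FIFO queue, a large request at the head delays everything behind it; with several requests allowed in flight at once, the small requests complete almost immediately:

```python
import heapq

def fifo_completion_time(service_times):
    """Single FIFO queue: requests finish strictly one after another."""
    total, finish = 0.0, []
    for t in service_times:
        total += t
        finish.append(total)
    return finish

def parallel_completion_time(service_times, lanes=4):
    """Toy multi-queue model: up to `lanes` requests in flight at once;
    each new request starts on the earliest-free lane."""
    free = [0.0] * lanes          # time at which each lane becomes free
    heapq.heapify(free)
    finish = []
    for t in service_times:
        start = heapq.heappop(free)
        done = start + t
        finish.append(done)
        heapq.heappush(free, done)
    return finish

# A big request at the head of the queue followed by four small ones:
times = [10.0, 1.0, 1.0, 1.0, 1.0]
# FIFO: the small requests all wait behind the big one.
# Parallel: the small requests complete on other lanes while the big
# one is still in flight.
```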

Come back next week for results and observations on another SSD.


More Stories By Peter Velikin

Peter Velikin has 12 years of experience creating new markets and commercializing products in multiple high tech industries. Prior to VeloBit, he was VP Marketing at Zmags, a SaaS-based digital content platform for e-commerce and mobile devices, where he managed all aspects of marketing, product management, and business development. Prior to that, Peter was Director of Product and Market Strategy at PTC, responsible for PTC’s publishing, content management, and services solutions. Prior to PTC, Peter was at EMC Corporation, where he held roles in product management, business development, and engineering program management.

Peter has an MS in Electrical Engineering from Boston University and an MBA from Harvard Business School.
