Scaling Linux to the Extreme

Superior performance and stability in all environments

Previous notions of limited scalability of Linux were abruptly changed last year by the introduction of the SGI Altix server, which scaled up to 64 processors within a single system image (SSI). Today, large-scale Linux servers with hundreds of processors are being deployed by a variety of businesses, universities, research centers, and governments around the world. NASA Ames Research Center, for example, continues to push the limits even further with their 512-processor system running a single instance of the Linux kernel.

This article examines the challenges in enabling large numbers of processors to work efficiently together to better support Linux system configurations for High-Performance Computing (HPC) environments. We will explain what scaling is, the importance of good hardware design, and the kernel changes that make scaling Linux on systems up to 256 processors and beyond possible. Finally, we will show examples of how these highly scalable Linux systems are being used to solve complex real-world problems more efficiently.

Scaling Within HPC Environments

First, let's examine the issues behind system scalability. The term scaling refers to the ability to add hardware resources, such as processors or memory, to increase the capacity and performance of a system. Different scaling strategies are used depending on the workload. Enterprise business server workloads, for example, often consist of many individual, unrelated tasks and are typically deployed on smaller systems networked together. HPC workloads, on the other hand, are composed of scientific programs that require a high degree of complex processing, operate on large amounts of data, and have widely fluctuating resource requirements. Because of these demanding resource requirements, HPC programs are written and parallelized to break complex problems into pieces that can use system resources in parallel.

One approach used to solve HPC problems is horizontal scaling. With this approach, a program's threads run across a "cluster" of separate systems and communicate and exchange data over the network. This strategy works well for workloads that are embarrassingly parallel, where little communication is required between program threads as they perform their computations.

When program threads need to interact while working on a common set of data, however, vertical scaling provides a more efficient approach. With vertical scaling, threads run on a large number of CPUs within one system, so processors communicate more efficiently and can operate on and exchange data through global shared memory. Adding processors lets more threads run simultaneously, applying more resources to a single problem. Vertical scaling also provides an ideal environment for using an HPC system as a central server that dynamically runs several HPC programs at once when no single program needs all of the system's processors, or when a program has its own scaling limitations. Whether the goal is greater processing capability for a single HPC program or higher throughput for several HPC programs running at once, a properly designed vertically scaled system provides a flexible and superior environment for both the most demanding and the widest range of HPC applications.
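
As a rough illustration of the vertical-scaling model, the following C sketch uses OpenMP, a common shared-memory programming interface in HPC (the problem size and the summation here are arbitrary choices for illustration, not drawn from the applications discussed later). Every thread works on a different part of a single array held in global shared memory:

  #include <omp.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define N (1 << 24)   /* problem size; arbitrary for illustration */

  int main(void)
  {
      double *data = malloc(N * sizeof(double));
      double sum = 0.0;
      long i;

      if (data == NULL)
          return 1;

      for (i = 0; i < N; i++)        /* initialize the shared array */
          data[i] = (double)i;

      /* Each thread runs on its own CPU but reads the same shared memory;
         the reduction clause combines the per-thread partial sums. */
      #pragma omp parallel for reduction(+:sum)
      for (i = 0; i < N; i++)
          sum += data[i];

      printf("threads available: %d, sum = %.0f\n", omp_get_max_threads(), sum);
      free(data);
      return 0;
  }

Built with an OpenMP-aware compiler (for example, cc -fopenmp), the same binary can use 2, 64, or 256 CPUs simply by changing the OMP_NUM_THREADS environment variable, which is part of what makes a vertically scaled system attractive as a shared central HPC server.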

Hardware Design and Scalability

Perfect scaling occurs when adding processors improves workload throughput by the same factor: a four-processor system, for instance, should theoretically deliver four times the processing power of a single-processor system. In a multiprocessor system, it is therefore critical to minimize the overhead of coordinating multiple processors and sharing resources. If adding a second processor improves system performance by 1.8X, a third yields a 2.7X improvement, and a fourth yields 3.6X over a single CPU, we say the system is scaling linearly at 90 percent up to 4 processors. As more processors are added, a point is often reached where performance no longer improves, or even decreases, because of hardware, kernel, or application software limitations. The goal is to scale as close to perfect as possible, and to the highest possible number of CPUs.
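
The arithmetic behind such a statement is simply measured speedup divided by processor count. The small, hypothetical C helper below makes the calculation explicit for the example figures above (the function and program are illustrative only):

  #include <stdio.h>

  /* Parallel efficiency: measured speedup divided by the number of CPUs. */
  static double efficiency(double speedup, int ncpus)
  {
      return speedup / ncpus;
  }

  int main(void)
  {
      /* Speedups from the example in the text: 1.0X, 1.8X, 2.7X, 3.6X. */
      double speedup[] = { 1.0, 1.8, 2.7, 3.6 };
      int ncpus;

      for (ncpus = 1; ncpus <= 4; ncpus++)
          printf("%d CPUs: speedup %.1fX, efficiency %.0f%%\n",
                 ncpus, speedup[ncpus - 1],
                 100.0 * efficiency(speedup[ncpus - 1], ncpus));
      return 0;
  }

Run as-is, this prints 100 percent efficiency for one CPU and 90 percent for two, three, and four CPUs, matching the example.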

One of the keys to obtaining maximum performance is a fast system bus with high bandwidth. The extreme processing power provided by hundreds of high-performance CPUs requires multiple fast paths for handling data between CPUs, caches, memory, and I/O. The single system bus found on symmetric multiprocessing systems can quickly become a bottleneck, since all traffic from the CPUs uses one common bus to access and transfer data. Much higher system performance is possible with a non-uniform memory access (NUMA) architecture, since CPU accesses to memory within the same node stay local, distributing traffic and reducing the load on the system interconnect (see Figure 1).
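
On Linux, an application can cooperate with a NUMA system by keeping data close to the CPUs that use it. The following minimal sketch uses the libnuma library (an illustrative choice, not something required by the hardware; link with -lnuma) to run the calling thread on one node and allocate its working set from that node's memory:

  #include <numa.h>
  #include <stdio.h>

  int main(void)
  {
      size_t len = 64UL * 1024 * 1024;   /* 64MB working set; arbitrary size */
      int node = 0;                      /* place everything on node 0 here  */
      size_t i;
      double *buf;

      if (numa_available() < 0) {        /* kernel or library lacks NUMA support */
          fprintf(stderr, "NUMA not available\n");
          return 1;
      }

      numa_run_on_node(node);                /* run this thread on node 0   */
      buf = numa_alloc_onnode(len, node);    /* memory physically on node 0 */
      if (buf == NULL) {
          fprintf(stderr, "numa_alloc_onnode failed\n");
          return 1;
      }

      /* These accesses now stay on the local node instead of crossing
         the system interconnect to reach remote memory. */
      for (i = 0; i < len / sizeof(double); i++)
          buf[i] = 0.0;

      numa_free(buf, len);
      return 0;
  }

With the thread and its data on the same node, most memory traffic stays on the local bus rather than crossing the system interconnect.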

A well-designed NUMA system carefully accounts for CPU bus transfer speeds, the number of CPUs on any given bus, memory transfer speeds, multiple paths, and other factors to ensure that maximum overall bandwidth can be delivered throughout the system. Bisectional bandwidth is a system's maximum capacity for transferring data between the two halves formed by drawing an imaginary line through its middle. Figure 2 shows the system bus interconnect for an SGI Altix system designed for maximum overall bisectional bandwidth and performance. In this diagram, each C-brick is a rack-mountable module containing four CPUs, and each R-brick is an SGI NUMAlink module used to connect the C-bricks together into a 128-processor SGI Altix system.

A computer architecture that is well balanced and built for maximum performance is essential to achieving good system scalability. If the hardware doesn't scale, neither will the Linux kernel or the user's application.

Linux Kernel Scalability

Linux was originally designed for smaller systems. Extending Linux to scale well on large systems means enlarging various sizes and tables managed by the kernel, and then optimizing performance for high-end technical computing. Thanks to its solid design and wide community support, Linux has adapted well to large systems.

SGI kernel engineers found that, while they were clearly the first to run Linux on large system configurations of this kind, the Linux community had already done an excellent job addressing many of the issues related to Linux scalability. The types of changes made by SGI and others in the community include extending resource counter sizes, extending bit-mask sizes, and fixing commands and tools to support more than double-digit CPU numbers. Other changes included adding NUMA tool commands to help manage larger memory sizes more efficiently, increasing the limits on open file descriptors and file sizes, and reducing the boot-time console messages generated by each processor, since administering and troubleshooting would otherwise be unmanageable on systems with large CPU counts.
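
One concrete consequence of supporting more than double-digit CPU numbers is that fixed-size CPU bit masks in user-space tools become too small. The sketch below is a hedged illustration using current glibc interfaces (the dynamically sized CPU_ALLOC macros and sched_setaffinity; these particular calls are an assumption made for illustration, not necessarily the interfaces that were changed at the time) to size an affinity mask for however many processors the system actually has:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      long ncpus = sysconf(_SC_NPROCESSORS_CONF);  /* CPUs configured, e.g. 512 */
      cpu_set_t *set;
      size_t setsize;

      if (ncpus < 1)
          return 1;

      set = CPU_ALLOC(ncpus);              /* mask sized for ncpus bits, not a */
      setsize = CPU_ALLOC_SIZE(ncpus);     /* fixed compile-time limit         */
      if (set == NULL)
          return 1;

      CPU_ZERO_S(setsize, set);
      CPU_SET_S(ncpus - 1, setsize, set);  /* pin to the highest-numbered CPU */

      if (sched_setaffinity(0, setsize, set) != 0)   /* 0 = calling process */
          perror("sched_setaffinity");
      else
          printf("bound to CPU %ld of %ld\n", ncpus - 1, ncpus);

      CPU_FREE(set);
      return 0;
  }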

Once the kernel was modified to accommodate the resources of a larger system, SGI engineers focused on getting Linux to scale and perform well. One way to find scaling problems for a 256-processor system is to turn up the stress knobs on a much larger configuration, such as a 512-processor system: problems that would otherwise be difficult to pinpoint become obvious. Developing and testing on these larger configurations enabled the SGI engineering team to find and fix many problems that affect multiprocessor systems of all sizes. SGI kernel engineers used several large configurations in this manner to run a variety of HPC applications, benchmarks, and custom tests to identify and diagnose Linux scaling problems. Figure 3 shows an early 512-processor SGI Altix system, ascender, which was used by SGI kernel engineers to find and fix scaling problems.

Such testing uncovered a number of areas to change for improving scalability. For example, some system-wide kernel variables were converted to per-processor variables. This reduces memory contention on shared data such as global kernel performance statistics, since the data can be maintained separately per processor and combined only when needed for reporting. Other scaling improvements included finding and eliminating high-contention spinlocks, reducing spinlock contention in timer routines, optimizing process scheduling algorithms, changing the buffer cache to use per-node data structures, improving translation lookaside buffer algorithms, improving the parallelism of page fault and out-of-memory handling, and identifying and removing hot cache lines caused by false sharing.
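
The per-processor idea can be illustrated in user space with ordinary threads. In the hypothetical sketch below (pthreads; the 128-byte padding is an assumed cache-line size used only for illustration), each thread updates its own counter on its own cache line, so the fast path never contends, and the values are combined only when a report is needed:

  #include <pthread.h>
  #include <stdio.h>

  #define NTHREADS   8           /* arbitrary thread count for the demo */
  #define ITERATIONS 1000000L

  /* One counter per thread, padded so that two counters never share a
     cache line (shared lines would cause false sharing and ping-pong). */
  struct private_counter {
      unsigned long count;
      char pad[128 - sizeof(unsigned long)];
  };

  static struct private_counter counters[NTHREADS];

  static void *worker(void *arg)
  {
      struct private_counter *mine = arg;
      long i;

      for (i = 0; i < ITERATIONS; i++)
          mine->count++;          /* no lock, no writes to shared data */
      return NULL;
  }

  int main(void)
  {
      pthread_t tids[NTHREADS];
      unsigned long total = 0;
      int i;

      for (i = 0; i < NTHREADS; i++)
          pthread_create(&tids[i], NULL, worker, &counters[i]);
      for (i = 0; i < NTHREADS; i++)
          pthread_join(tids[i], NULL);

      /* Combine the private values only when the statistic is reported. */
      for (i = 0; i < NTHREADS; i++)
          total += counters[i].count;

      printf("total = %lu\n", total);
      return 0;
  }

The kernel's real per-CPU infrastructure is more elaborate, but the principle is the same: private, contention-free updates on the hot path, with aggregation deferred to the rare reporting path.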

Bringing It All Together

A well-designed hardware system combined with the Linux optimizations described here enables hundreds of processors within a system to access, use, and manipulate shared resources in the most efficient manner possible, enabling users' HPC programs to fully exploit the available system resources to do real work. The following three examples demonstrate the dramatic scaling and performance improvements being achieved with Linux on systems with processor counts of 128, 256, and larger.

The first example (see Figure 4) shows how adding processors to a system can dramatically reduce the elapsed time for the bioinformatics HPC application HTC-BLAST (High Throughput Computing - Basic Local Alignment Search Tool) to process 10,000 queries with 4,111,677 total letters against a human genome database with 545 sequences and 2,866,452,029 total letters. In particular, notice that a system with 128 processors ran 1.77X faster than a system with 64 processors.

The next example (see Figure 5) shows the scaling and performance improvements achieved using a computational fluid dynamics application on an automobile external-flow problem with a model size of 100 million cells. In this case the total elapsed time continues to decrease as the system configuration is extended from 64 to 256 processors.

Finally, the third example (see Figure 6) shows scaling results for an OpenMP code called Cart3D, developed and used extensively by the NASA Ames Research Center to study flows around the space shuttle. NASA Ames Research Center, known for pushing the limits of computing in pursuit of fundamental science, achieved almost 90% scaling efficiency while running this HPC code on a 512-processor SGI Altix system. SGI and NASA engineers collaborated to identify and fix many Linux scaling issues, achieving a dramatic new breakthrough in Linux system scalability. The NASA Ames Research Center system used for this work is shown in Figure 7.

Summary

The performance and capabilities of Linux for server environments have improved dramatically in just the last year. Scientists and others are now routinely using single-system Linux configurations with hundreds of processors to solve complex problems faster and with greater ease than had been thought possible. Testing and developing on these large configurations have proven invaluable for improving the reliability and performance of Linux on configurations of all sizes. The synergy of these scaling improvements combined with the open development model has enabled the continued advancement of Linux to become the superior operating system choice for delivering performance and stability in all environments.

More Stories By Steve Neuner

Steve Neuner is the engineering director for Linux at SGI and has been working on Linux for the past 5 years. He's been developing operating system software for system hardware manufacturers for the past 20 years.

More Stories By Dan Higgins

Dan Higgins has worked in the computer industry for 26 years in a variety of technical roles. Dan has been with SGI for the past 17 years and currently manages the Linux kernel scalability and RAS (Reliability, Availability and Serviceability) engineering team.
