Scaling Linux to the Extreme

Superior performance and stability in all environments

Previous notions of limited Linux scalability were abruptly overturned last year by the introduction of the SGI Altix server, which scaled up to 64 processors within a single system image (SSI). Today, large-scale Linux servers with hundreds of processors are being deployed by a variety of businesses, universities, research centers, and governments around the world. NASA Ames Research Center, for example, continues to push the limits even further with their 512-processor system running a single instance of the Linux kernel.

This article examines the challenges in enabling large numbers of processors to work efficiently together to better support Linux system configurations for High-Performance Computing (HPC) environments. We will explain what scaling is, why good hardware design matters, and the kernel changes that make it possible to scale Linux on systems with 256 processors and beyond. Finally, we will show examples of how these highly scalable Linux systems are being used to solve complex real-world problems more efficiently.

Scaling Within HPC Environments

First, let's examine the issues behind system scalability. The term scaling refers to the ability to add more hardware resources, such as processors or memory, to improve the capacity and performance of a system. Different scaling strategies are used depending on the workload requirements. Enterprise business server workloads, for example, often consist of many individual, unrelated tasks and are typically deployed on smaller systems networked together. HPC workloads, on the other hand, are composed of scientific programs that require a high degree of complex processing, operate on large amounts of data, and have widely fluctuating resource requirements. Because of these demanding requirements, HPC programs are written and parallelized to break complex problems down into pieces that can use system resources in parallel.

One approach used to solve HPC problems is horizontal scaling. With this approach, a program's threads run across a "cluster" of separate systems, and these threads communicate and exchange data over the network. This strategy works well for workloads that are embarrassingly parallel, where little communication is required between program threads as they perform their computations. However, when program threads need to interact while working on a common set of data, vertical scaling provides a more efficient approach. With vertical scaling, threads run on a large number of CPUs within one system, enabling them to communicate more efficiently and to operate on and exchange data through global shared memory. Adding more processors to the system allows more threads to run simultaneously, bringing more shared resources to bear on a problem. Vertical scaling also provides an ideal environment for using an HPC system as a central server that dynamically runs different HPC programs at the same time, when no single program needs all of the system's processors or when individual programs have their own scaling limitations. Whether the goal is greater processing capability for a single HPC program or higher throughput for several HPC programs running at once, a properly designed vertically scaled system provides a flexible, superior environment for both the most demanding and the widest range of HPC applications.
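To make the shared-memory model concrete, here is a minimal sketch of vertical scaling using OpenMP in C. It is purely illustrative and is not drawn from any application mentioned in this article; the array size and the work performed are arbitrary assumptions. Every thread operates on a slice of a single array held in shared memory, so no data crosses a network as it would in a horizontally scaled cluster.

```c
/*
 * Minimal shared-memory parallelism sketch (illustrative only): all threads
 * share one array, and each thread handles a slice of the iterations.
 *
 * Build with: gcc -fopenmp -O2 vertical.c -o vertical
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24)  /* illustrative problem size */

int main(void)
{
    double *data = malloc(N * sizeof(double));
    double sum = 0.0;

    if (data == NULL)
        return 1;

    /* All threads share 'data'; each processor gets a chunk of iterations. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        data[i] = (double)i * 0.5;

    /* Reduction combines per-thread partial sums into one shared result. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += data[i];

    printf("threads=%d sum=%f\n", omp_get_max_threads(), sum);
    free(data);
    return 0;
}
```

Running the same binary on more processors simply gives the loops more threads to spread their iterations across, which is the essence of vertical scaling.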

Hardware Design and Scalability

Perfect scaling occurs when adding processors improves workload throughput by the same factor: a four-processor system, for instance, should theoretically deliver four times the processing power of a single-processor system. In a multiprocessor system, it is critical to minimize the overhead of coordinating among multiple processors and utilizing shared resources. If adding a second processor improves system performance by 1.8X, a third yields a 2.7X improvement, and a fourth yields 3.6X over a single CPU, we say "the system is scaling linearly at 90 percent up to 4 processors." As more processors are added, a point is often reached where performance no longer improves, or even decreases, due to hardware, kernel, or application software limitations. The goal is to enable multiple CPUs to scale as close to perfectly as possible, and to the highest possible CPU counts.
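The arithmetic behind that 90 percent figure is simple: efficiency is the measured speedup divided by the number of processors. The short C sketch below works through the example above using hypothetical elapsed times chosen to reproduce the 1.8X, 2.7X, and 3.6X speedups; the timings are not measurements from any system described here.

```c
/* Scaling-efficiency arithmetic: efficiency = (t_1 / t_n) / n.
 * The elapsed times below are hypothetical, chosen to match the example
 * in the text.
 */
#include <stdio.h>

int main(void)
{
    double t1 = 100.0;                         /* elapsed time on 1 CPU (hypothetical) */
    double tn[] = { 100.0, 55.6, 37.0, 27.8 }; /* elapsed times on 1..4 CPUs */

    for (int n = 1; n <= 4; n++) {
        double speedup = t1 / tn[n - 1];
        double efficiency = speedup / n;
        printf("%d CPUs: speedup %.2fX, efficiency %.0f%%\n",
               n, speedup, efficiency * 100.0);
    }
    return 0;
}
```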

One of the keys to obtaining maximum performance is a fast system bus with high bandwidth. The extreme processing power provided by hundreds of high-performance CPUs requires multiple fast paths for handling data between CPUs, caches, memory, and I/O. The system bus found on symmetric multiprocessing systems can quickly become a bottleneck since all traffic from the CPUs uses a single, common bus to access and transfer data. Much higher system performance is available using a non-uniform memory access (NUMA) architecture since CPU accesses to memory within the same node will distribute and reduce the load on the system interconnect (see Figure 1).
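The benefit of NUMA comes from keeping each processor's memory references on its own node whenever possible. As a user-space illustration only (the article does not describe which interfaces SGI used, so the choice of the Linux libnuma API here is our assumption), the sketch below allocates one buffer on node 0 and another on the highest-numbered node; threads running on node 0 would see local-speed access to the first buffer and slower, interconnect-crossing access to the second.

```c
/*
 * Sketch of node-local memory placement using the libnuma user-space API.
 * Illustrative only; not taken from the systems described in this article.
 *
 * Build with: gcc numa_local.c -lnuma -o numa_local
 */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    size_t bytes = 64UL * 1024 * 1024;   /* illustrative working-set size */
    int last_node = numa_max_node();

    /* Memory placed on node 0: fast for CPUs on node 0, remote for others. */
    double *local = numa_alloc_onnode(bytes, 0);
    /* Memory placed on the highest-numbered node, for comparison. */
    double *remote = numa_alloc_onnode(bytes, last_node);

    if (local == NULL || remote == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    printf("allocated %zu bytes on node 0 and node %d\n", bytes, last_node);

    numa_free(local, bytes);
    numa_free(remote, bytes);
    return 0;
}
```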

A well-designed NUMA system will carefully account for CPU bus transfer speeds, the number of CPUs on any given bus, memory transfer speeds, multiple paths, and other factors to ensure that maximum overall bandwidth can be delivered throughout the system. A system's maximum capacity for transferring data across an imaginary line drawn through its middle, dividing it into two halves, is called its bisectional bandwidth. Figure 2 shows the system bus interconnect for an SGI Altix system designed for maximum overall bisectional bandwidth and performance. In this diagram, each C-brick is a rack-mountable module containing four CPUs, and each R-brick is an SGI NUMAlink module used to connect the C-bricks together into a 128-processor SGI Altix system.
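Bisectional bandwidth is simply the sum of the bandwidth of every link the imaginary cut crosses. The numbers in the sketch below are hypothetical, not SGI NUMAlink specifications; they only show the arithmetic.

```c
/* Back-of-the-envelope bisection-bandwidth arithmetic with hypothetical
 * numbers: bisection bandwidth = links crossing the cut x bandwidth per link.
 */
#include <stdio.h>

int main(void)
{
    int links_across_cut = 16;   /* hypothetical links crossing the bisection */
    double gb_per_link = 3.2;    /* hypothetical per-link bandwidth, GB/s */

    printf("bisection bandwidth = %.1f GB/s\n",
           links_across_cut * gb_per_link);
    return 0;
}
```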

A computer architecture that is well balanced and built for maximum performance is essential to achieving good system scalability. If the hardware doesn't scale, neither will the Linux kernel or the user's application.

Linux Kernel Scalability

Linux was originally designed for smaller systems. Extending Linux to scale well on large systems involves enlarging various limits and tables managed by the kernel, and then optimizing performance for high-end technical computing. Thanks to its solid design and wide community support, Linux has adapted well to large systems.

SGI kernel engineers found that while they were clearly the first to run Linux on large system configurations of this kind, the Linux community had already done an excellent job addressing many of the issues related to Linux scalability. The types of changes made by SGI and others within the community include extending resource counter sizes, extending bit-mask sizes, and fixing commands and tools to support more than double-digit CPU numbers. Other changes included adding NUMA tool commands to help manage larger memory sizes more efficiently, increasing the limits on open file descriptors and file sizes, and reducing the boot-time console messages generated by each processor, since administration and troubleshooting would otherwise be unmanageable on systems with large CPU counts.
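One user-visible consequence of growing CPU counts is that fixed-size CPU bit masks eventually run out of bits. The sketch below is a user-space analogy only, not the kernel changes described above; it uses the glibc CPU_ALLOC interfaces, which size an affinity mask at run time for however many processors the machine reports, and pins the calling process to the highest-numbered CPU.

```c
/*
 * Dynamically sized CPU affinity mask, so the same code works whether the
 * machine has 4 CPUs or 512. Illustrative user-space sketch only.
 *
 * Build with: gcc affinity.c -o affinity
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_CONF);   /* e.g. 256 or 512 */
    cpu_set_t *set = CPU_ALLOC(ncpus);            /* mask sized for ncpus bits */
    size_t setsize = CPU_ALLOC_SIZE(ncpus);

    if (set == NULL)
        return 1;

    CPU_ZERO_S(setsize, set);
    CPU_SET_S(ncpus - 1, setsize, set);           /* pin to the highest CPU */

    if (sched_setaffinity(0, setsize, set) != 0)
        perror("sched_setaffinity");
    else
        printf("bound to CPU %ld of %ld\n", ncpus - 1, ncpus);

    CPU_FREE(set);
    return 0;
}
```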

Once the kernel was modified to accommodate the resources of a larger system, SGI engineers focused on getting Linux to scale and perform well. One way to find scaling problems for a 256-processor system is to turn up the stress knobs on a much larger configuration, such as a 512-processor system; problems that would otherwise be difficult to pinpoint become obvious. Developing and testing on these larger configurations enabled the SGI engineering team to find and fix many problems that affect multiprocessor systems of all sizes. SGI kernel engineers used several large configurations in this manner to run a variety of HPC applications, benchmarks, and custom tests to identify and diagnose Linux scaling problems. Figure 3 shows an early 512-processor SGI Altix system, ascender, which was used by SGI kernel engineers to find and fix scaling problems.

Such testing uncovered a number of areas to change for improving scalability. For example, some system-wide kernel variables were converted to per-processor variables. This reduced memory contention on shared data such as global kernel performance statistics, since the data could be maintained separately on each processor and combined only when needed for reporting. Other scaling improvements included finding and eliminating high-contention spinlocks, reducing spinlock contention in timer routines, optimizing process scheduling algorithms, converting the buffer cache to per-node data structures, improving translation lookaside buffer algorithms, improving the parallelism of page-fault and out-of-memory handling, and identifying and removing hot cache lines caused by false sharing.
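The per-processor counter idea can be illustrated with a simplified user-space analogue. This sketch is not the actual kernel code; it merely assumes a statistics counter that is incremented constantly and read rarely. Each thread updates its own cache-line-padded slot on the hot path, and the slots are summed only when a report is needed.

```c
/*
 * Per-CPU counter sketch: no contended shared counter on the hot path.
 *
 * Build with: gcc -fopenmp -O2 percpu.c -o percpu
 */
#include <omp.h>
#include <stdio.h>

#define MAX_CPUS 512

/* Pad each counter to its own cache line to avoid false sharing. */
struct percpu_counter {
    unsigned long count;
    char pad[64 - sizeof(unsigned long)];
};

static struct percpu_counter counters[MAX_CPUS];

int main(void)
{
    /* Hot path: each thread touches only its own counter. */
    #pragma omp parallel
    {
        int cpu = omp_get_thread_num() % MAX_CPUS;  /* keep index in range */
        for (int i = 0; i < 1000000; i++)
            counters[cpu].count++;
    }

    /* Slow path: combine per-CPU values only when a report is needed. */
    unsigned long total = 0;
    for (int cpu = 0; cpu < MAX_CPUS; cpu++)
        total += counters[cpu].count;

    printf("total events: %lu\n", total);
    return 0;
}
```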

Bringing It All Together

A well-designed hardware system combined with the Linux optimizations described here enables hundreds of processors within a system to access, use, and manipulate shared resources in the most efficient manner possible, enabling users' HPC programs to fully exploit the available system resources to do real work. The following three examples demonstrate the dramatic scaling and performance improvements being achieved with Linux on systems with processor counts of 128, 256, and larger.

The first example (see Figure 4) shows how adding processors to a system can dramatically reduce the elapsed time for the bioinformatics HPC application HTC-BLAST (High Throughput Computing - Basic Local Alignment Search Tool) to process 10,000 queries with 4,111,677 total letters against a human genome database with 545 sequences and 2,866,452,029 total letters. In particular, notice that a system with 128 processors ran 1.77X faster than a system with 64 processors, a scaling efficiency of roughly 88 percent across that doubling of processor count.

The next example (see Figure 5) shows the scaling and performance improvements achieved using a computational fluid dynamics application on an automobile external-flow problem with a model size of 100 million cells. In this case the total elapsed time continues to decrease as the system configuration is extended from 64 to 256 processors.

Finally, the third example (see Figure 6) shows scaling results for an OpenMP code called Cart3D, developed and used extensively by the NASA Ames Research Center to study flows around the space shuttle. NASA Ames, known for pushing the limits of computing in pursuit of fundamental science, achieved almost 90% scaling efficiency while running this HPC code on a 512-processor SGI Altix system. SGI and NASA engineers collaborated to identify and fix many Linux scaling issues, achieving a dramatic new breakthrough in Linux system scalability. The NASA Ames Research Center system used for this work is shown in Figure 7.

Summary

The performance and capabilities of Linux for server environments have improved dramatically in just the last year. Scientists and others are now routinely using single-system Linux configurations with hundreds of processors to solve complex problems faster and with greater ease than had been thought possible. Testing and developing on these large configurations has proven invaluable for improving the reliability and performance of Linux on configurations of all sizes. The synergy of these scaling improvements, combined with the open development model, has enabled Linux to continue advancing as the superior operating system choice for delivering performance and stability in all environments.

More Stories By Steve Neuner

Steve Neuner is the engineering director for Linux at SGI and has been working on Linux for the past 5 years. He's been developing operating system software for system hardware manufacturers for the past 20 years.

More Stories By Dan Higgins

Dan Higgins has worked in the computer industry for 26 years in a variety of technical roles. Dan has been with SGI for the past 17 years and currently manages the Linux kernel scalability and RAS (Reliability, Availability and Serviceability) engineering team.
