Cray Launches New High Density Cluster Packed With NVIDIA GPU Accelerators

SEATTLE, WA -- (Marketwired) -- 08/26/14 -- Global supercomputer leader Cray Inc. (NASDAQ: CRAY) today announced the launch of the Cray CS-Storm -- a high-density accelerator compute system based on the Cray® CS300™ cluster supercomputer. Featuring up to eight NVIDIA® Tesla® GPU accelerators and a peak performance of more than 11 teraflops per node, the Cray CS-Storm system is one of the most powerful single-node cluster architectures available today.

Designed to support highly scalable applications in areas such as energy, life sciences, financial services, and geospatial intelligence, the Cray CS-Storm provides exceptional performance, energy efficiency and reliability within a small footprint. The system leverages the supercomputing architecture of the air-cooled Cray CS300 system, and includes the Cray Advanced Cluster Engine cluster management software, the complete Cray Programming Environment on CS, and NVIDIA Tesla K40 GPU accelerators. The Cray CS-Storm system includes Intel® Xeon® E5-2600 v2 processors.

"With an impressive eight-to-two ratio of GPUs to CPUs, the Cray CS-Storm is an absolute beast of a system," said Barry Bolding, Cray's vice president of marketing and business development. "The Cray CS-Storm is built to meet the most demanding compute requirements for production scalability, while also delivering a lower total-cost-of-ownership for customers with accelerator workload environments. With the combination of an extremely efficient cooling infrastructure, Cray's high-productivity cluster software environment and powerful NVIDIA K40 accelerators, the Cray CS-Storm is designed to be a production workhorse for accelerator-based applications in important areas such as seismic simulation, machine learning and scientific computing."

"Tesla K40 GPU accelerators bring extreme performance to a broad range of HPC and enterprise analytics applications," said Sumit Gupta, general manager of Accelerated Computing at NVIDIA. "By combining up to eight Tesla K40 accelerators per node, the ultra-high-density Cray CS300 system dramatically increases the performance levels customers can tap to drive innovation and discovery in seismic imaging, cybersecurity, deep learning, and many other scientific computing areas."

The Cray CS-Storm system is available in flexible configurations. Each 48U standard rack can hold 22 2U compute servers, each with up to eight GPUs and two CPUs, delivering more than 250 teraflops per rack. A four-cabinet Cray CS-Storm system can deliver more than one petaflop of peak performance.
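As a sanity check on those figures, the following minimal sketch works through the peak-performance arithmetic. It assumes NVIDIA's published figure of roughly 1.43 teraflops double-precision peak for a Tesla K40 with GPU Boost; the constants are illustrative, not Cray's official sizing method.

```c
#include <stdio.h>

int main(void) {
    /* Assumed figure: Tesla K40 peak double precision is roughly
       1.43 teraflops with GPU Boost enabled (NVIDIA's published spec). */
    const double k40_tflops     = 1.43;
    const int    gpus_per_node  = 8;
    const int    nodes_per_rack = 22;
    const int    racks          = 4;

    double node_tflops   = gpus_per_node * k40_tflops;        /* ~11.4 TF  */
    double rack_tflops   = nodes_per_rack * node_tflops;      /* ~251.7 TF */
    double system_pflops = racks * rack_tflops / 1000.0;      /* ~1.01 PF  */

    printf("per node:       %.1f teraflops\n", node_tflops);
    printf("per rack:       %.1f teraflops\n", rack_tflops);
    printf("four cabinets:  %.2f petaflops\n", system_pflops);
    return 0;
}
```

Eight K40s land just above the quoted 11 teraflops per node, 22 such nodes clear 250 teraflops per rack, and four racks exceed one petaflop, consistent with the figures above.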

The Cray CS-Storm system is also available with a comprehensive HPC cluster software stack whose tools are compatible with open source and commercial compilers, debuggers, schedulers and libraries. The Cray Programming Environment on CS, which includes the Cray Compiling Environment, the Cray Scientific and Math Libraries, and the Cray Performance Measurement, Analysis and Porting Tools, is available on the Cray CS-Storm system and is specifically tuned for high-performance GPU computing.
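To give a concrete flavor of the directive-based GPU programming such an environment targets, here is a minimal OpenACC sketch. It is a generic illustration rather than Cray-specific code: the saxpy kernel and data clauses are standard OpenACC, and any OpenACC-capable compiler can build it for a GPU target.

```c
/* saxpy offloaded to a GPU with OpenACC directives -- the style of
   accelerator programming a GPU-tuned compiler environment supports. */
#include <stdio.h>

void saxpy(int n, float a, const float *restrict x, float *restrict y) {
    /* copyin: x is only read on the device; copy: y is read on the
       device and written back to the host when the region ends. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    enum { N = 1 << 20 };
    static float x[N], y[N];
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(N, 3.0f, x, y);
    printf("y[0] = %.1f\n", y[0]);  /* expect 5.0 */
    return 0;
}
```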

"Adding a GPU-dense, air-cooled option to the already-extensive lineup of Cray's CS300 offerings will further expand the market for this standards-based cluster supercomputer product," said Steve Conway, IDC research vice president for high performance computing. "IDC research showed that the proportion of sites employing accelerators and other coprocessors in their HPC systems jumped from 28.2 percent in 2011 to 76.9 percent in 2013, and GPUs are the clear leaders in this category. Cray has been on a roll, and ramped-up sales of its Cray CS300 line have helped."

The Cray CS300 series of cluster supercomputers comprises scalable cluster solutions that combine optimized, industry-standard building-block server platforms into a unified, fully integrated system. Available with air-cooled or liquid-cooled architectures, Cray CS300 systems provide superior price/performance, energy efficiency and configuration flexibility. The systems are integrated with Cray's HPC cluster software stack and include software tools compatible with most open source and commercial compilers, schedulers, and libraries. Cray CS300 systems also feature the Cray Advanced Cluster Engine, a management software suite that provides the network, server, cluster and storage management capabilities needed to run large, complex technical applications.

More information on the Cray CS-Storm system is available at www.cray.com.

About Cray Inc.
Global supercomputing leader Cray Inc. (NASDAQ: CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world's most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray's Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market's continued demand for realized performance. Go to www.cray.com for more information.

Cray is a registered trademark of Cray Inc. in the United States and other countries, and CS300 is a trademark of Cray Inc. Other product and service names mentioned herein are the trademarks of their respective owners.
