
Research Institutions Push the Boundaries of Supercomputing with Dell

At SC13, Dell reaffirmed its long-standing commitment to improving access to and use of high-performance computing (HPC) in research computing. Over the last five years, Dell and its research computing partners have combined integrated server, storage and networking solutions designed specifically for hyperscale and research computing environments with scalable, cost-effective usage models such as HPC-as-a-Service and HPC in the cloud. Together, these offerings simplify collaborative science, improve access to compute capacity and accelerate discovery for the research computing community.

Earlier this year, Dell took its commitment a step further, introducing Active Infrastructure for HPC Life Sciences, a converged solution designed specifically for genomics analysis—a very specialized and rapidly growing area of research computing. The new solution integrates computing, storage and networking to reduce lengthy implementation timelines and process up to 37 genomes per day and 259 genomes per week.
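As a quick illustrative check (not part of the announcement), the daily and weekly throughput figures quoted above are consistent with sustained, round-the-clock operation:

```python
# Illustrative arithmetic only: the quoted daily and weekly genome-processing
# rates line up with continuous seven-day operation.
genomes_per_day = 37
genomes_per_week = 259

assert genomes_per_day * 7 == genomes_per_week  # 37 * 7 = 259
print(f"{genomes_per_day} genomes/day sustained for 7 days -> {genomes_per_day * 7} genomes/week")
```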

Oak Ridge National Laboratory, University of California at San Diego, The University of Texas at Austin, University of Florida, Clemson University, University of Wisconsin at Madison and Stanford University are a few of the hundreds of organizations utilizing Dell’s HPC solutions today to harness the power of data for discovery.

Oak Ridge National Laboratory Supercomputer Achieves I/O Rate of More Than One Terabyte Per Second

To boost the productivity of its Titan supercomputer—the fastest computer in America dedicated solely to scientific research—and better support its 1,200 users and more than 150 research projects, the Oak Ridge National Laboratory (ORNL) Leadership Computing Facility needed a file system with high-speed interconnects that could keep pace with the supercomputer’s peak theoretical performance of 27 petaflops, or 27,000 trillion calculations per second. Working with Dell and other technology partners, ORNL upgraded its Lustre-based file system, “Spider,” to Spider II, quadrupling the file system’s size and speed. It also upgraded the interconnects between Titan and Spider to a new InfiniBand fourteen data rate (FDR) network that is seven times faster and supports an I/O rate in excess of one terabyte per second.
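A back-of-envelope sketch of the unit conversions behind these figures (illustrative arithmetic only; the one-petabyte example volume is an assumption for scale, not a figure from ORNL):

```python
# Illustrative unit conversions for the Titan/Spider II figures quoted above.
PETA = 10**15
TERA = 10**12

titan_peak_flops = 27 * PETA                # 27 petaflops
print(f"27 petaflops = {titan_peak_flops // TERA:,} trillion calculations per second")
# -> 27,000 trillion calculations per second

spider2_io_bytes_per_sec = 1 * TERA         # ~1 terabyte per second aggregate I/O
example_volume_bytes = 1 * PETA             # hypothetical 1 PB data set, assumed for scale
minutes = example_volume_bytes / spider2_io_bytes_per_sec / 60
print(f"At 1 TB/s, streaming a 1 PB data set takes roughly {minutes:.1f} minutes")
# -> roughly 16.7 minutes
```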

The University of California, San Diego Deploying XSEDE’s First Virtualized HPC Cluster with Comet

The San Diego Supercomputer Center (SDSC) at the University of California, San Diego is deploying Comet, a new virtualized petascale supercomputer designed to meet pent-up demand for computing in areas such as the social sciences and genomics, where a broader set of researchers needs ever more computing capacity. Funded by a $12 million NSF grant and scheduled to start operations in early 2015, Comet will be a Dell-based cluster featuring next-generation Intel Xeon processors. With a peak performance of nearly two petaflops, Comet will be the first XSEDE production system to support high-performance virtualization and is designed specifically to support many modest-scale jobs: each node will be equipped with two processors, 128 gigabytes (GB) of traditional DRAM and 320 GB of flash memory. Comet will also include some large-scale nodes as well as nodes with NVIDIA GPUs to support visualization, molecular dynamics simulations and genome assembly.

“Comet is all about HPC for the 99 percent,” said SDSC Director Michael Norman, Comet principal investigator. “As the world’s first virtualized HPC cluster, Comet is designed to deliver a significantly increased level of computing capacity and customizability to support data-enabled science and engineering at the campus, regional and national levels.”

The University of Texas at Austin to Deploy Wrangler, An Innovative New Data System

The Texas Advanced Computing Center (TACC) at The University of Texas at Austin recently announced plans to build Wrangler, a groundbreaking data analysis and management system for the national open science community that will be funded by a $6 million National Science Foundation (NSF) grant. Featuring 20 petabytes of storage on the Dell C8000 platform and using PowerEdge R620 and R720 compute nodes, Wrangler is designed for high-performance access to community data sets. It will support the popular MapReduce software framework and a full ecosystem of analytics for Big Data when completed in January 2015. Wrangler will integrate with TACC’s Stampede supercomputer and through TACC will be extended to NSF Extreme Science and Engineering Discovery Environment (XSEDE) resources around the country.

“Wrangler is designed from the ground up for emerging and existing applications in data intensive science,” said Dan Stanzione, Wrangler’s lead principal investigator and TACC deputy director. “Wrangler will be one of the largest secure, replicated storage options for the national open science community.”
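The MapReduce framework mentioned above is a general programming model rather than anything specific to Wrangler’s eventual software stack; a minimal word-count sketch in plain Python (no Hadoop assumed, all names hypothetical) illustrates its map, shuffle and reduce stages:

```python
# Minimal sketch of the MapReduce programming model referenced above.
# Plain Python only; this is not Wrangler's actual software stack.
from collections import defaultdict
from itertools import chain

documents = [
    "data intensive science",
    "open science community data",
]

# Map: emit (key, value) pairs from each input record.
def map_words(doc):
    return [(word, 1) for word in doc.split()]

# Shuffle: group the emitted values by key.
grouped = defaultdict(list)
for key, value in chain.from_iterable(map_words(d) for d in documents):
    grouped[key].append(value)

# Reduce: combine the grouped values for each key.
word_counts = {key: sum(values) for key, values in grouped.items()}
print(word_counts)  # {'data': 2, 'intensive': 1, 'science': 2, 'open': 1, 'community': 1}
```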

Dell at SC13

Hear from experts from the University of Florida, Clemson University, University of North Texas, University of Wisconsin at Madison and Stanford University about how they are harnessing the power of data for discovery at the “Solving the HPC Data Deluge” session on Nov. 20, 1:30-2:30 p.m. at Dell Booth #1301. And learn about HPC virtualization from the University of California at San Francisco, Florida State University, Cambridge University, Oklahoma University and Australian National University from 3-4 p.m. For more information on Dell’s presence at SC13, visit this blog and follow the conversation at HPCatDell.

Dell World

Join us at Dell World 2013, Dell’s premier customer event exploring how technology solutions and services are driving business innovation. Learn more at www.dellworld.com, attend our virtual Dell World: Live Online event or follow #DellWorld on Twitter.

