Adaptive Computing Unveils Moab 8.0 to Enhance Technical Computing Environments and Big Workflow

Adaptive Computing, the company that powers many of the world’s largest private/hybrid cloud and technical computing environments with its Moab optimization and scheduling software, today announced that Moab HPC Suite-Enterprise Edition 8.0 (Moab 8.0) will be generally available within 30 days, with sneak peek demos in booth No. 710 at the International Supercomputing Conference (ISC) 2014, June 22–26, 2014, in Leipzig, Germany. The new release includes significant updates for managing and optimizing workloads across technical computing environments. Moab 8.0 also enhances Big Workflow by processing intensive simulations and big data analysis to accelerate insights.

“This latest version of Moab underscores our commitment to innovation in the technical computing sectors,” said Rob Clyde, CEO at Adaptive Computing. “HPC’s powerful engine is at the core of extracting insights from big data, and these updates will enable enterprises to capitalize on HPC’s convergence with cloud and big data to garner faster insights for data-driven decisions.”

Adaptive’s Big Workflow solution delivers dynamic scheduling, provisioning and management of multi-step/multi-application services across HPC, cloud and big data environments. Moab 8.0 bolsters Big Workflow’s core services: unifying data center resources, optimizing the analysis process and guaranteeing services to the business.

Key updates to Moab 8.0 include the following:

Unify Data Center Resources

Adaptive Computing continues to innovate new ways to break down siloed environments and speed the time to discovery. With Adaptive’s Big Workflow innovations, users can utilize all available resources across multiple platforms, environments and locations, managing them as a single ecosystem. Moab 8.0 furthers this resource unification with a new OpenStack integration, available as a select beta, which offers virtual and physical resource provisioning for Infrastructure as a Service and Platform as a Service.

“In our end-user surveys, Adaptive Computing’s Moab and TORQUE were the top two job management packages named, with 40% of the mentions combined,” comments Addison Snell, CEO of Intersect360 Research. “Organizations are investing in big data and HPC; more than half the respondents in our most recent study were spending at least 10% of their IT budgets on Big Data projects. With Adaptive’s Big Workflow, the general idea is to provide a way for big data, HPC, and cloud environments to interoperate, and do so dynamically based on what applications are running. With the added benefits of a unified platform, OpenStack is a promising platform to interoperate multiple environments.”

Optimize the Analysis Process

Massive performance enhancements in workload optimization streamline the analysis process, increasing throughput and productivity while reducing cost, complexity and errors. These new optimization features include:

  • Performance Boost – Moab 8.0 enables users to achieve up to a threefold improvement in overall workload optimization performance. To achieve three times the scale and performance, Moab 8.0 offers the following improvements:
    • Reduced Command Latency – Through a combination of cached data and more efficient use of background threads, it is now possible to submit read-only commands and get an answer within seconds.
    • Decreased Scheduling Cycle Time – New placement decisions are now three to six times faster.
    • Improved Multi-threading – Due to increased parallelism using multi-threading, Moab now scales up with hardware—the more CPU horsepower dedicated to the scheduler, the faster Moab goes by making full use of multiple cores during its scheduling cycle.
    • Faster Moab and TORQUE Job Communication – Moab now communicates newly submitted jobs to TORQUE using a more efficient API.
    • Advanced High-Throughput Computing – Nitro delivers 100 times faster job throughput for short computing jobs. Nitro is now generally available; a beta version was previously announced in November 2013 under the code name Moab Task Manager. Nitro is a stand-alone product and is available for a free trial.
  • Advanced Power Management – Moab 8.0 delivers energy cost savings of 15–30 percent with new clock frequency control and additional power state options. With clock frequency control, administrators can use job templates to align CPU speeds with workload processing needs. In addition, administrators can manage multiple power states and automatically place idle compute nodes into new low-power or no-power states (suspend, hibernation and shutdown modes).
  • Advanced Workflow Data Staging – Moab 8.0 updates Moab’s data staging model by running data staging as a process out-of-band from the scheduling cycle. This improves cluster utilization, supports multiple transfer methods and new transfer types, makes job execution times more consistent, and allows Moab to account more effectively for data staging in resource scheduling decisions within grid environments.
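The "Reduced Command Latency" improvement above combines cached data with background threads so that read-only queries answer in seconds rather than waiting on a scheduling cycle. A minimal sketch of that general pattern follows; the class, method and field names here are hypothetical illustrations, not Moab's actual internals or API:

```python
import threading
import time

class CachedScheduler:
    """Illustrative only: serve read-only queries from a cache that a
    background thread refreshes, so callers never block on a full
    scheduling cycle. Names are hypothetical, not Moab's API."""

    def __init__(self, refresh_interval=0.05):
        self._cache = {}
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._interval = refresh_interval
        self._thread = threading.Thread(target=self._refresh_loop, daemon=True)

    def start(self):
        self._refresh()          # prime the cache before serving queries
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _collect_state(self):
        # Stand-in for an expensive walk of the scheduler's job/node state.
        return {"jobs_queued": 42, "nodes_idle": 7, "updated_at": time.time()}

    def _refresh(self):
        state = self._collect_state()
        with self._lock:         # swap in the fresh snapshot atomically
            self._cache = state

    def _refresh_loop(self):
        # Event.wait doubles as a sleep that can be interrupted by stop().
        while not self._stop.wait(self._interval):
            self._refresh()

    def query(self, key):
        # Read-only command path: answer from the cache immediately.
        with self._lock:
            return self._cache.get(key)

sched = CachedScheduler()
sched.start()
print(sched.query("jobs_queued"))  # answered from the cache, no cycle wait
sched.stop()
```

The design trade-off is staleness for latency: readers may see a snapshot up to one refresh interval old, which is acceptable for status queries but not for placement decisions.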

Guarantee Services to the Business

Improved features allow the data center to ensure SLAs, maximize uptime, and prove that services were delivered and resources were allocated fairly. Moab 8.0 offers an enhanced Web-based graphical user interface, Moab Viewpoint™. Viewpoint is the next generation of Adaptive’s administrative dashboard, which today monitors and reports workload and resource utilization.

About Adaptive Computing

Adaptive Computing powers many of the world’s largest private/hybrid cloud and technical computing environments with its award-winning Moab optimization and scheduling software. Moab enables large enterprises in oil and gas, finance, manufacturing, research, academia and government to perform simulations and analyze Big Data faster, more accurately and more cost-effectively with its Technical Computing, Cloud and Big Data solutions for Big Workflow applications. Moab gives users a competitive advantage, inspiring them to develop cancer-curing treatments, discover the origins of the universe, lower energy prices, manufacture better products, improve the economic landscape and pursue game-changing endeavors. Adaptive is a pioneer in private/hybrid cloud, technical computing and big data, holding 50+ issued or pending patents. Adaptive’s flagship products include:

Moab Cloud Suite for self-optimizing cloud management

Moab HPC Suite for self-optimizing HPC workload management

Moab Big Workflow Solution

For more information, call (801) 717-3700 or visit www.adaptivecomputing.com.

NOTE TO EDITORS: If you would like additional information on Adaptive Computing and its products, please visit the Adaptive Computing Newsroom at http://www.adaptivecomputing.com/category/news/. All prices noted are in U.S. dollars and are valid only in the United States.

Copyright © 2009 Business Wire. All rights reserved. Republication or redistribution of Business Wire content is expressly prohibited without the prior written consent of Business Wire. Business Wire shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon.
