RMS Announces RMS(one)™ Will Be Generally Available April 15

RMS, the world's leading catastrophe-risk management firm, today announced that it will release version 1.0 of RMS(one), the company’s exposure and risk-management platform, on April 15, 2014. All RMS catastrophe models, as well as select models from RMS ecosystem partners, will be available on RMS(one) version 1.0. In addition, RMS(one) alone sets the stage for the new generation of high-definition models.

"RMS is on track to deliver a platform that will enable us to develop a more insightful and robust view of risk – empowering management, underwriters and analysts with the information they need, faster than ever before," said David McComas, senior vice president, ERM, Tokio Millennium Re Ltd. "In today’s competitive marketplace it’s imperative that we have an in-depth understanding of risk information across a diverse range of data sources."

Leading up to the April 15, 2014 general availability (GA) date, RMS has worked with 1,000 people from more than 30 leading re/insurers and brokers worldwide to develop, test and validate RMS(one) through the company’s Joint Development Partner (JDP), Early Access Partner (EAP) and FastStart programs. On Feb. 25, 2014, RMS will launch the last beta test of RMS(one), which includes all of the major functionality that will be available at GA.

"As a true platform for exposure and risk management, our clients understand that RMS(one) is far more than just a next generation of catastrophe-modeling software," said Hemant Shah, co-founder and CEO of RMS. "The on-time release of v1.0 will be a major milestone along a journey towards transformational benefits for our clients, and we are committed to supporting our clients every step of the way, putting them in control of how and when they adopt RMS(one)."

As an open platform, RMS(one) enables clients and other modeling organizations to implement their own catastrophe models so that they can be operated seamlessly alongside RMS models. This ability to run multiple models in a single platform will provide insurers and reinsurers with dramatic improvements in operational efficiency, enabling them to access and use models from a broader range of providers than has ever been feasible.

Joining the RMS ecosystem is Applied Research Associates, Inc. (ARA), which intends to make its U.S. hurricane catastrophe model available on RMS(one). ARA’s state-of-the-art hurricane catastrophe model, HurLoss, is one of only four commercially available models certified by the Florida Commission on Hurricane Loss Projection Methodology. The ARA hurricane model has been used in multiple pioneering studies on wind-loss mitigation for the past 15 years.

"Partnering with RMS to deliver our hurricane model on RMS(one) will provide significant, operational benefits for our clients," said ARA executive vice president, Dr. Lawrence Twisdale. "We are very excited by the opportunity to give our clients a new option for accessing the ARA model while we continue to deliver our model as either an ARA-hosted or client-hosted product."

ARA joins third-party model providers Risk Frontiers, ERN and JBA Risk Management, which collectively are implementing catastrophe models for 40 peril/country combinations on RMS(one). This growing ecosystem of non-RMS models on RMS(one) provides clients with new modeling capabilities not currently available from RMS, as well as alternative views of risk in areas already covered by RMS models.

"We're very excited to see our Australia tropical cyclone model operational in the next RMS(one) beta release," said Dr. Foster Langbein, chief technology officer at Risk Frontiers. "We’ve been really pleased with the smoothness of the implementation process and the relative ease of preparing our models for the platform. We look forward to offering our full suite of models on RMS(one)."

"By empowering our clients to define their own models, use our models and access the models of growing ecosystem of partners, RMS(one) serves as the one true exposure and risk management platform. Existing solutions cannot do this," said Shah.

About RMS

RMS is the world’s leading provider of software, services, and expertise for the quantification and management of catastrophe risk. More than 400 leading insurers, reinsurers, trading companies, and other financial institutions rely on RMS solutions to quantify, manage, and transfer risk. Founded at Stanford University in 1988, RMS serves clients today from offices in the U.S., Bermuda, the U.K., Switzerland, India, China, and Japan. For more information, visit www.rms.com and follow us @RMS_News.
