Edico Genome Launches DRAGEN Complete Suite On AWS Marketplace, Including Enhanced Second-Generation Germline And Somatic Pipelines

LAS VEGAS, Nov. 30, 2017 /PRNewswire/ -- Today at AWS re:Invent, Edico Genome launched its DRAGEN Complete Suite (DRAGEN CS) on AWS Marketplace, a comprehensive package of pipelines that gives AWS users one-click access to all of DRAGEN's applications for next-generation sequencing (NGS) data analysis. Pieter van Rooyen, Ph.D., president and CEO at Edico Genome, and Rami Mehio, vice president of engineering at Edico Genome, will discuss DRAGEN CS during a panel titled "FPGA Accelerated Computing Using Amazon EC2 F1 Instances," held today at 5:45 p.m. PT at re:Invent.

DRAGEN CS includes second-generation versions of the DRAGEN Germline and Somatic Pipelines, which feature enhanced mapping and aligning algorithms and greatly improved variant calling. The new variant calling algorithms are able to better distinguish real variants from errors introduced in sample preparation and sequencing. The DRAGEN Germline V2 Pipeline was recognized as a top performer in the PrecisionFDA Hidden Treasures – Warm Up Challenge in October 2017, receiving the top score in five of six accuracy metrics among 30 participants. 

"By utilizing sample-specific prep and sequencer-error modeling, DRAGEN is able to identify real variants with even greater precision, resulting in enhanced accuracy and accelerated analysis speeds," said Mr. Mehio. "In fact, DRAGEN's INDEL calling, which usually presents a greater challenge for analysis tools than single nucleotide polymorphisms, demonstrated the highest level of accuracy among pipelines identifying hidden variants in the recent PrecisionFDA challenge. With the integration of these cutting-edge algorithms, DRAGEN is now the top performer in accuracy as well as speed."

DRAGEN CS contains tools for all pipeline steps, including mapping/aligning, position sorting, duplicate marking and variant calling. The application accepts sequencing data inputs in BCL, FASTQ, and BAM/CRAM formats, and features BCL conversion, download and upload streaming, and compressed hash tables for a more streamlined and efficient workflow. The Complete Suite offers the following pipelines (an illustrative invocation sketch follows the list):

  • DRAGEN Germline V2 Pipeline provides clinical-grade, end-to-end (BCL to VCF) analysis of whole genome, exome and targeted panel NGS data.
  • DRAGEN Somatic V2 Pipeline includes tumor-only and tumor/normal modes, designed for use in detecting somatic variants in tumor samples.
  • DRAGEN RNA Gene Fusion Detection Pipeline performs transcriptome analysis starting with splice junction discovery and alignment, followed by gene fusion detection.
  • DRAGEN Population Calling Pipeline calls variants jointly across multiple genomes and can scale to thousands of samples at expedited speeds.
  • DRAGEN Virtual Long Read Detection (beta release) is an innovative variant caller specialized in mutation detection in segmental duplication regions, such as pseudogenes. Its accuracy in such regions matches that of traditional variant callers running on reads six to eight times longer.
  • GATK Best Practices Pipeline allows users to run the GATK pipeline at far greater speeds than possible on central processing units (CPUs).
  • MuTect2 Pipeline is an enhanced version of MuTect2, a somatic SNP and indel caller.
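For readers evaluating the suite, the sketch below illustrates how a germline run might be driven programmatically. It is a minimal sketch only: the dragen binary path, reference directory, file names, and flag spellings are assumptions based on publicly documented DRAGEN command-line conventions and may differ between versions.

```python
import subprocess
from pathlib import Path

def run_dragen_germline(ref_dir: str, fastq1: str, fastq2: str,
                        out_dir: str, sample: str) -> None:
    """Drive a paired-end FASTQ-to-VCF germline analysis via the DRAGEN CLI.

    Flag names follow publicly documented DRAGEN conventions; exact
    spellings may vary by DRAGEN version, so treat this as a sketch.
    """
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cmd = [
        "dragen",                              # assumed binary name on PATH
        "--ref-dir", ref_dir,                  # prebuilt hash-table reference
        "--fastq-file1", fastq1,               # read 1 FASTQ input
        "--fastq-file2", fastq2,               # read 2 FASTQ input
        "--output-directory", out_dir,
        "--output-file-prefix", sample,
        "--enable-duplicate-marking", "true",  # duplicate-marking step
        "--enable-variant-caller", "true",     # germline small-variant calling
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Hypothetical paths and sample name, for illustration only.
    run_dragen_germline("/ref/hg38", "sample_R1.fastq.gz",
                        "sample_R2.fastq.gz", "/data/out", "NA12878")
```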

"Edico Genome's goal is to simplify NGS for our users – arming them with industry leading speeds, accuracy and scalability through an easy-to-use interface to streamline their work," said Dr. van Rooyen. "Our engineering team is perpetually working on new algorithms that rapidly and accurately analyze genomic data -- enabling clinicians to get to diagnoses faster, and researchers to get to results more rapidly -- and we look forward to releasing new offerings in the coming months."

DRAGEN leverages field programmable gate arrays (FPGAs) to rapidly accelerate secondary analysis of NGS data both onsite and in the cloud via AWS Marketplace. Edico Genome recently demonstrated the speed and scalability of DRAGEN on AWS Marketplace by deploying 1,000 Amazon EC2 F1 instances to analyze 1,000 whole human genome sequences in only 2 hours and 25 minutes, setting the GUINNESS WORLD RECORDS™ title for Fastest time to analyze 1,000 human genomes.
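To illustrate the scalability claim, the boto3 sketch below shows how a fleet of FPGA-equipped F1 instances could be requested programmatically. The AMI ID and key pair name are hypothetical placeholders; a real deployment would use the DRAGEN AMI ID from one's AWS Marketplace subscription, with each instance then pulling its assigned samples and running the pipeline locally.

```python
import boto3

# Minimal sketch of requesting EC2 F1 (FPGA) capacity with boto3.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical DRAGEN Marketplace AMI ID
    InstanceType="f1.2xlarge",        # FPGA-equipped F1 instance type
    MinCount=1,
    MaxCount=10,                      # scale toward 1,000 for a full fleet
    KeyName="my-keypair",             # hypothetical key pair name
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "project", "Value": "dragen-batch"}],
    }],
)

print([inst["InstanceId"] for inst in response["Instances"]])
```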

A one-day trial of the exome and genome pipelines is available to new customers. To learn more about DRAGEN on AWS Marketplace, visit http://edicogenome.com/awsmarketplace/.

Contact Edico Genome

Stephanie Black
Marketing Manager
(858) 722-3694
[email protected] 

Monica May
Canale Communications
(619) 849-5383
[email protected]

About Edico Genome

The use of next-generation sequencing is growing at an unprecedented pace, creating a need for easy-to-implement infrastructure that enables rapid, accurate and cost-effective processing and storage of this big data. Edico Genome has created a patented, end-to-end platform solution for analysis of next-generation sequencing data, DRAGEN™, which speeds whole genome data analysis from hours to minutes while maintaining high accuracy and reducing costs. Top clinicians and researchers are utilizing the platform to achieve faster diagnoses for critically ill newborns, cancer patients and expectant parents awaiting prenatal test results, and faster results for scientists and drug developers. For more information, visit www.EdicoGenome.com or follow @EdicoGenome.


View original content with multimedia: http://www.prnewswire.com/news-releases/edico-genome-launches-dragen-complete-suite-on-aws-marketplace-including-enhanced-second-generation-germline-and-somatic-pipelines-300563853.html

SOURCE Edico Genome

More Stories By PR Newswire

Copyright © 2007 PR Newswire. All rights reserved. Republication or redistribution of PRNewswire content is expressly prohibited without the prior written consent of PRNewswire. PRNewswire shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon.

Latest Stories
Organizations planning enterprise data center consolidation and modernization projects are faced with a challenging, costly reality. Requirements to deploy modern, cloud-native applications simultaneously with traditional client/server applications are almost impossible to achieve with hardware-centric enterprise infrastructure. Compute and network infrastructure are fast moving down a software-defined path, but storage has been a laggard. Until now.
Serveless Architectures brings the ability to independently scale, deploy and heal based on workloads and move away from monolithic designs. From the front-end, middle-ware and back-end layers, serverless workloads potentially have a larger security risk surface due to the many moving pieces. This talk will focus on key areas to consider for securing end to end, from dev to prod. We will discuss patterns for end to end TLS, session management, scaling to absorb attacks and mitigation techniques.
Contextual Analytics of various threat data provides a deeper understanding of a given threat and enables identification of unknown threat vectors. In his session at @ThingsExpo, David Dufour, Head of Security Architecture, IoT, Webroot, Inc., discussed how through the use of Big Data analytics and deep data correlation across different threat types, it is possible to gain a better understanding of where, how and to what level of danger a malicious actor poses to an organization, and to determin...
Let’s face it, embracing new storage technologies, capabilities and upgrading to new hardware often adds complexity and increases costs. In his session at 18th Cloud Expo, Seth Oxenhorn, Vice President of Business Development & Alliances at FalconStor, discussed how a truly heterogeneous software-defined storage approach can add value to legacy platforms and heterogeneous environments. The result reduces complexity, significantly lowers cost, and provides IT organizations with improved efficienc...
CI/CD is conceptually straightforward, yet often technically intricate to implement since it requires time and opportunities to develop intimate understanding on not only DevOps processes and operations, but likely product integrations with multiple platforms. This session intends to bridge the gap by offering an intense learning experience while witnessing the processes and operations to build from zero to a simple, yet functional CI/CD pipeline integrated with Jenkins, Github, Docker and Azure...
Fact: storage performance problems have only gotten more complicated, as applications not only have become largely virtualized, but also have moved to cloud-based infrastructures. Storage performance in virtualized environments isn’t just about IOPS anymore. Instead, you need to guarantee performance for individual VMs, helping applications maintain performance as the number of VMs continues to go up in real time. In his session at Cloud Expo, Dhiraj Sehgal, Product and Marketing at Tintri, sha...
Extreme Computing is the ability to leverage highly performant infrastructure and software to accelerate Big Data, machine learning, HPC, and Enterprise applications. High IOPS Storage, low-latency networks, in-memory databases, GPUs and other parallel accelerators are being used to achieve faster results and help businesses make better decisions. In his session at 18th Cloud Expo, Michael O'Neill, Strategic Business Development at NVIDIA, focused on some of the unique ways extreme computing is...
Containers, microservices and DevOps are all the rage lately. You can read about how great they are and how they’ll change your life and the industry everywhere. So naturally when we started a new company and were deciding how to architect our app, we went with microservices, containers and DevOps. About now you’re expecting a story of how everything went so smoothly, we’re now pushing out code ten times a day, but the reality is quite different.
"We do one of the best file systems in the world. We learned how to deal with Big Data many years ago and we implemented this knowledge into our software," explained Jakub Ratajczak, Business Development Manager at MooseFS, in this SYS-CON.tv interview at 20th Cloud Expo, held June 6-8, 2017, at the Javits Center in New York City, NY.
Traditional IT, great for stable systems of record, is struggling to cope with newer, agile systems of engagement requirements coming straight from the business. In his session at 18th Cloud Expo, William Morrish, General Manager of Product Sales at Interoute, will outline ways of exploiting new architectures to enable both systems and building them to support your existing platforms, with an eye for the future. Technologies such as Docker and the hyper-convergence of computing, networking and...
Using new techniques of information modeling, indexing, and processing, new cloud-based systems can support cloud-based workloads previously not possible for high-throughput insurance, banking, and case-based applications. In his session at 18th Cloud Expo, John Newton, CTO, Founder and Chairman of Alfresco, described how to scale cloud-based content management repositories to store, manage, and retrieve billions of documents and related information with fast and linear scalability. He addres...
You want to start your DevOps journey but where do you begin? Do you say DevOps loudly 5 times while looking in the mirror and it suddenly appears? Do you hire someone? Do you upskill your existing team? Here are some tips to help support your DevOps transformation. Conor Delanbanque has been involved with building & scaling teams in the DevOps space globally. He is the Head of DevOps Practice at MThree Consulting, a global technology consultancy. Conor founded the Future of DevOps Thought Leade...
An edge gateway is an essential piece of infrastructure for large scale cloud-based services. In his session at 17th Cloud Expo, Mikey Cohen, Manager, Edge Gateway at Netflix, detailed the purpose, benefits and use cases for an edge gateway to provide security, traffic management and cloud cross region resiliency. He discussed how a gateway can be used to enhance continuous deployment and help testing of new service versions and get service insights and more. Philosophical and architectural ap...
By 2021, 500 million sensors are set to be deployed worldwide, nearly 40x as many as exist today. In order to scale fast and keep pace with industry growth, the team at Unacast turned to the public cloud to build the world's largest location data platform with optimal scalability, minimal DevOps, and maximum flexibility. Drawing from his experience with the Google Cloud Platform, VP of Engineering Andreas Heim will speak to the architecture of Unacast's platform and developer-focused processes.
Wooed by the promise of faster innovation, lower TCO, and greater agility, businesses of every shape and size have embraced the cloud at every layer of the IT stack – from apps to file sharing to infrastructure. The typical organization currently uses more than a dozen sanctioned cloud apps and will shift more than half of all workloads to the cloud by 2018. Such cloud investments have delivered measurable benefits. But they’ve also resulted in some unintended side-effects: complexity and risk. ...