
From the Intel Newsroom: Improving Parkinson’s Disease Monitoring and Treatment through Advanced Technologies

By Bob Gourley

Editor’s note: The use case articulated here is important on its own, but it is also one repeatable across many other medical research activities and diseases. Thanks to Intel and Cloudera for the technology, and thanks to the Michael J. Fox Foundation for Parkinson’s Research (MJFF) for what they are doing here. Be sure to see the video at this link and embedded below. Also, learn more about MJFF on the Web, Facebook, Twitter, LinkedIn and Pinterest. - bg

From: http://ctolink.us/1oT7zX2

The Michael J. Fox Foundation and Intel Join Forces to Improve Parkinson’s Disease Monitoring and Treatment through Advanced Technologies

 

  • Big data analytics and data from wearable computing offer potential to improve monitoring and treatment of Parkinson’s disease.
  • The Intel-built big data analytics platform combines hardware and software technologies to provide researchers with a way to more accurately measure progression of disease symptoms.

NEW YORK and SANTA CLARA, Calif. (Aug. 13, 2014) — The Michael J. Fox Foundation for Parkinson’s Research (MJFF) and Intel Corporation announced today a collaboration aimed at improving research and treatment for Parkinson’s disease — a neurodegenerative brain disease second only to Alzheimer’s in worldwide prevalence. The collaboration includes a multiphase research study using a new big data analytics platform that detects patterns in participant data collected from wearable technologies used to monitor symptoms. This effort is an important step in enabling researchers and physicians to measure progression of the disease and to speed progress toward breakthroughs in drug development.

“Nearly 200 years after Parkinson’s disease was first described by Dr. James Parkinson in 1817, we are still subjectively measuring Parkinson’s disease largely the same way doctors did then,” said Todd Sherer, PhD, CEO of The Michael J. Fox Foundation. “Data science and wearable computing hold the potential to transform our ability to capture and objectively measure patients’ actual experience of disease, with unprecedented implications for Parkinson’s drug development, diagnosis and treatment.”

“The variability in Parkinson’s symptoms creates unique challenges in monitoring progression of the disease,” said Diane Bryant, senior vice president and general manager of Intel’s Data Center Group. “Emerging technologies can not only create a new paradigm for measurement of Parkinson’s, but as more data is made available to the medical community, it may also point to currently unidentified features of the disease that could lead to new areas of research.”

 

Tracking an Invisible Enemy
For nearly two decades, researchers have been refining advanced genomics and proteomics techniques to create increasingly sophisticated cellular profiles of Parkinson’s disease pathology. Advances in data collection and analysis now provide the opportunity to expand the value of this wealth of molecular data by correlating it with objective clinical characterization of the disease for use in drug development.

The potential to collect and analyze data from thousands of individuals on measurable features of Parkinson’s, such as slowness of movement, tremor and sleep quality, could enable researchers to assemble a better picture of the clinical progression of Parkinson’s and track its relationship to molecular changes. Wearables can unobtrusively gather and transmit objective, experiential data in real time, 24 hours a day, seven days a week. With this approach, researchers could go from looking at a very small number of data points and burdensome pencil-and-paper patient diaries collected sporadically to analyzing hundreds of readings per second from thousands of patients and attaining a critical mass of data to detect patterns and make new discoveries.
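To put that shift in perspective, a quick back-of-envelope calculation helps. The 300-observations-per-second rate is cited later in this release; the figure of five diary entries per day is purely an illustrative assumption:

```python
# Back-of-envelope: continuous wearable sampling vs. sporadic diary entries.
OBS_PER_SECOND = 300                     # rate cited for the Intel platform
SECONDS_PER_DAY = 24 * 60 * 60           # 86,400

readings_per_patient_per_day = OBS_PER_SECOND * SECONDS_PER_DAY
print(readings_per_patient_per_day)      # 25,920,000 readings per patient per day

# A paper diary might capture a handful of entries per day (assumed: 5).
DIARY_ENTRIES_PER_DAY = 5
print(readings_per_patient_per_day // DIARY_ENTRIES_PER_DAY)  # ~5.2 million times more data points
```

Roughly 26 million readings per patient per day, against a handful of diary lines, is the "critical mass of data" the paragraph above describes.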

MJFF and Intel initiated a study earlier this year to evaluate the usability and accuracy of wearable devices for tracking agreed-upon physiological features, and of a big data analytics platform for collecting and analyzing the data. The participants (16 Parkinson’s patients and nine control volunteers) wore the devices during two clinic visits and at home continuously over four days.

Bret Parker, 46, of New York, is living with Parkinson’s and participated in the study. “I know that many doctors tell their patients to keep a log to track their Parkinson’s,” said Parker. “I am not a compliant patient on that front. I pay attention to my Parkinson’s, but it’s not everything I am all the time. The wearables did that monitoring for me in a way I didn’t even notice, and the study allowed me to take an active role in the process for developing a cure.”

Intel data scientists are now correlating the collected data with clinical observations and patient diaries to gauge the devices’ accuracy, and are developing algorithms to measure symptoms and disease progression.
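As one illustration of what a symptom-measurement algorithm can look like (not Intel’s actual method, which has not been published), a researcher might estimate tremor by measuring how much of an accelerometer signal’s power falls in the 4–6 Hz band where Parkinsonian rest tremor typically concentrates:

```python
import numpy as np

def tremor_band_power(accel, fs=100.0, band=(4.0, 6.0)):
    """Fraction of signal power in the 4-6 Hz tremor band.
    `accel` is a 1-D array of accelerometer magnitudes sampled at `fs` Hz.
    Illustrative sketch only, not a clinically validated measure."""
    accel = accel - np.mean(accel)                  # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(accel)) ** 2      # power spectrum
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return spectrum[in_band].sum() / total if total > 0 else 0.0

# Synthetic check: a 5 Hz oscillation should score far higher than noise.
t = np.arange(0, 10, 1 / 100.0)                     # 10 s at 100 Hz
tremor_like = np.sin(2 * np.pi * 5.0 * t)
noise = np.random.default_rng(0).normal(size=t.size)
print(tremor_band_power(tremor_like) > tremor_band_power(noise))  # True
```

Real algorithms would need to separate tremor from voluntary movement and artifacts, which is exactly why validation against clinical observations and diaries, as described above, matters.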

Later this year, Intel and MJFF plan to launch a new mobile application that enables patients to report their medication intake as well as how they are feeling. The effort is part of the next phase of the study to enable medical researchers to study the effects of medication on motor symptoms via changes detected in sensor data from wearable devices.

Collecting, Storing and Analyzing the Data
To analyze the volume of data, more than 300 observations per second from each patient, Intel developed a big data analytics platform that integrates a number of software components including Cloudera® CDH* — an open-source software platform that collects, stores, and manages data. The data platform is deployed on a cloud infrastructure optimized on Intel® architecture, allowing scientists to focus on research rather than the underlying computing technologies. The platform supports an analytics application developed by Intel to process and detect changes in the data in real time. By detecting anomalies and changes in sensor and other data, the platform can provide researchers with a way to measure the progression of the disease objectively.
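The Intel/Cloudera pipeline itself is not public, but the general technique of detecting changes in streaming sensor data can be sketched with something as simple as a rolling z-score: flag any reading that deviates sharply from a sliding window of recent readings. The window size and threshold below are illustrative assumptions:

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Minimal sketch of streaming change detection: flag a reading whose
    z-score against a sliding window of recent readings exceeds a threshold.
    Window size and threshold are illustrative, not production values."""
    def __init__(self, window=500, threshold=4.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        flagged = False
        if len(self.window) >= 30:                  # need a baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                flagged = True
        self.window.append(value)
        return flagged

detector = RollingAnomalyDetector()
# A slightly noisy steady signal, then one sharp spike.
readings = [1.0 + 0.01 * ((i % 2) * 2 - 1) for i in range(100)] + [9.0]
flags = [detector.update(r) for r in readings]
print(flags[-1])   # True: the spike stands out against the baseline
```

A production system would run logic like this at scale over the CDH-managed data store rather than in a single process, but the core idea, baseline the signal and flag deviations, is the same.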

In the near future, the platform could store other types of data such as patient, genome and clinical trial data. In addition, the platform could enable other advanced techniques such as machine learning and graph analytics to deliver more accurate predictive models that researchers could use to detect change in disease symptoms. These advances could provide unprecedented insights into the nature of Parkinson’s disease, helping scientists measure the efficacy of new drugs and assisting physicians with prognostic decisions.

Shared Commitment to Open-Access Data
MJFF and Intel share a commitment to increasing the rate of progress made possible by open access to data. The organizations aim to share data with the greater Parkinson’s community of physicians and researchers as well as invite them to submit their own de-identified patient and subject data for analysis. Teams may also choose to contribute de-identified patient data for inclusion in broader, population-scale studies.

The Foundation has previously made de-identified data and bio-samples from its sponsored studies available to qualified researchers, including data from individuals with a Parkinson’s-implicated mutation in their LRRK2 gene. MJFF has also opened access to resources from its landmark biomarker study, the Parkinson’s Progression Markers Initiative (PPMI), since it launched in 2010. Parkinson’s scientists around the world have downloaded PPMI data more than 235,000 times to date.



More Stories By Bob Gourley

Bob Gourley writes on enterprise IT. He is a founder and partner at Cognitio Corp and publisher of CTOvision.com.
