GE Oil & Gas Wins Two “Spotlight on New Technology” Awards at OTC 2014

Each year, the Offshore Technology Conference (OTC), taking place May 5-8 in Houston, recognizes innovative technologies that have industry-changing potential in the energy sector. The “Spotlight on New Technology Awards” showcase the latest and most advanced hardware and software solutions that are advancing the offshore exploration and production industry to new levels of safety, productivity and efficiency.

GE Oil & Gas (NYSE:GE) has been honored with two 2014 Spotlight on New Technology Awards for its innovative SeaLytics™ BOP (Blowout Preventer) Advisor and GFI™ Ground Fault Immune Electric Submersible Pump (ESP) Monitoring System products. In keeping with the award criteria, both technologies offer broad appeal for the industry, deliver significant benefits beyond existing solutions and have been proven through full-scale application or successful prototype testing. GE is one of only two companies to receive multiple 2014 Spotlight awards.

SeaLytics BOP Advisor

Technology: GE’s SeaLytics BOP Advisor monitoring and predictive maintenance solution enables drilling contractors to monitor the performance of BOPs and plan their maintenance by using predictive analytics based on actual component performance data.

Customer Benefit: When a BOP is offline for unplanned service, the cost to the drilling contractor can be significant, both in terms of idled crews and missed opportunities. A lengthy downtime event can ripple through a rig’s drilling schedule well beyond the initial system outage. GE’s SeaLytics BOP Advisor is designed to improve BOP system uptime, reduce unnecessary maintenance and enable better cost forecasting, all of which deliver significant performance benefits to the user.

SeaLytics BOP Advisor enables jackup and drillship contractors to move from a “when it breaks, fix it” approach to a predictive maintenance planning mode. The technology communicates beyond the drilling operator’s cockpit, or “doghouse,” letting the rig share its status with operations leaders located onshore or with drilling teams on other vessels. Because SeaLytics BOP Advisor can identify components that may need service in advance, the contractor can service equipment when scheduling opportunities arise.
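The shift from reactive repair to predictive maintenance planning described above can be illustrated with a minimal sketch. This is not GE's implementation; the component names, rated-cycle figures and the 80 percent service threshold are invented for illustration only.

```python
# Hypothetical sketch of condition-based maintenance flagging.
# Components are tracked against their rated life so that service can be
# scheduled in advance, rather than after a failure.

def components_due_for_service(usage, rated_cycles, threshold=0.8):
    """Return components whose accumulated cycles exceed a fraction of
    their rated life, sorted by urgency (most-used first)."""
    due = []
    for name, cycles in usage.items():
        fraction_used = cycles / rated_cycles[name]
        if fraction_used >= threshold:
            due.append((name, fraction_used))
    return sorted(due, key=lambda item: item[1], reverse=True)

# Invented example data: rated life and observed cycle counts per component.
rated = {"annular_seal": 500, "ram_block": 1200, "shuttle_valve": 2000}
observed = {"annular_seal": 430, "ram_block": 600, "shuttle_valve": 1900}

for name, frac in components_due_for_service(observed, rated):
    print(f"{name}: {frac:.0%} of rated life used - schedule service")
```

Under this scheme, the shuttle valve (95 percent of rated life) and annular seal (86 percent) would be flagged for the next scheduling window, while the ram block (50 percent) would not.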

GE Cross-Business Technology Sharing: SeaLytics BOP Advisor was the first product to be developed at GE’s new Software Center of Excellence (COE) in San Ramon, California. The Software COE brings together the biggest ideas and latest technologies from across GE, enabling products like SeaLytics BOP Advisor to take full advantage of the company’s development expertise and broad investment across all of GE in Industrial Internet technologies and solutions.

Why It Matters: “SeaLytics BOP Advisor collects and analyzes high-fidelity information about an operator’s complete BOP system, enabling the operator to predictively manage equipment performance,” said Chuck Chauviere, president of drilling systems—GE Oil & Gas. “Most importantly, SeaLytics BOP Advisor data empowers the offshore driller with critical information needed to maintain a high degree of safety to protect the crew and the environment.”

GFI Ground Fault Immune ESP Monitoring Gauge

Technology: With conventional monitoring systems for ESPs, when a ground fault occurs on the ESP power cable, the gauge’s power supply is cut off. Although the pump continues to run, its performance is no longer monitored, reducing the operator’s ability to track activities and optimize production. To address this decades-old problem, GE’s Zenith GFI ESP Monitoring System is the first gauge designed to be immune to cable ground faults, continuing to deliver reliable data under fault conditions so that operators can guard against production losses and equipment failure.

GE’s Zenith GFI solution includes a unique new power and communications system that enables the gauge to operate with imperfect insulation on the ESP cable. The new system also provides faster data sampling than alternative gauges and delivers ESP cable condition measurements in addition to standard industry parameters.
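The difference in data availability can be sketched as follows. This is an illustrative model only, not GE's implementation: the field names, units and values are invented, and real gauges report far richer telemetry.

```python
# Illustrative contrast between a conventional downhole gauge and a
# ground-fault-immune (GFI) gauge. A conventional gauge loses power when
# a ground fault occurs on the ESP cable; a GFI gauge keeps reporting,
# and can additionally report cable condition measurements.

def poll_gauge(ground_fault, fault_immune):
    """Return a telemetry sample, or None if the gauge has lost power."""
    if ground_fault and not fault_immune:
        return None  # conventional gauge: power supply cut, data lost
    sample = {"intake_pressure_psi": 1450, "motor_temp_c": 118}
    if fault_immune:
        # GFI-style gauges also deliver cable condition measurements;
        # low insulation resistance here signals a degraded cable.
        sample["cable_insulation_mohm"] = 0.4 if ground_fault else 50.0
    return sample

print(poll_gauge(ground_fault=True, fault_immune=False))  # None
print(poll_gauge(ground_fault=True, fault_immune=True))
```

The point of the sketch is the first call: under a ground fault, the conventional gauge goes dark while the GFI gauge continues to stream data the operator can act on.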

Customer Benefit: This pioneering technology is set to have a significant impact on the artificial lift market: on average, 15 percent of existing ESP gauges lose production data to ground faults within their first year of operation, and a pump running without a live downhole gauge can see up to a 25 percent reduction in fluid output compared to an optimized pump. A 25 percent loss from a well producing 800 BPD at $100 per barrel equates to $7.3 million per year in lost revenue from a single well. The GE Zenith GFI system helps operators avoid such losses.
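The revenue figure quoted above follows directly from the stated assumptions, as a quick check confirms:

```python
# Reproducing the revenue-loss arithmetic from the release: a 25% output
# loss on a well producing 800 barrels per day at $100 per barrel.
barrels_per_day = 800
loss_fraction = 0.25
price_per_barrel = 100

daily_loss = barrels_per_day * loss_fraction * price_per_barrel  # $20,000 per day
annual_loss = daily_loss * 365                                   # $7.3 million per year
print(f"${annual_loss:,.0f} per year")  # → $7,300,000 per year
```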

GE Cross-Business Technology Sharing: The Zenith technology and its experts joined GE Oil & Gas as part of GE’s July 2013 acquisition of Lufkin Industries.

Why It Matters: “All too often in ESP operations, a ground fault will cause downhole monitoring systems to fail and leave operators running blind,” said Dave Shanks, development manager for GE’s Zenith technologies—GE Oil & Gas. “This can result in up to a 25 percent reduction in fluid output when compared to a pump optimized with a live downhole gauge, resulting in a significant loss of production. Our Ground Fault Immune gauge is the first monitoring solution designed not to be disturbed by these types of faults.”

About GE Oil & Gas

GE Oil & Gas works on the things that matter in the oil and gas industry. In collaboration with our customers, we push the boundaries of technology to bring energy to the world. From extraction to transportation to end use, we address today's toughest challenges in order to fuel the future. Follow GE Oil & Gas on Twitter @GE_OilandGas.

Copyright © 2009 Business Wire. All rights reserved. Republication or redistribution of Business Wire content is expressly prohibited without the prior written consent of Business Wire. Business Wire shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon.
