


Patent Data Quality | @CloudExpo #BigData #Analytics #AI #MachineLearning

Is clean data a pipe dream?

The United States Patent and Trademark Office (USPTO) recently announced an expansion of PatentsView, its visualization tool for US patents. First launched a few years ago, the tool was intended to make 40 years of patent filing data freely available to anyone interested in examining "the dynamics of inventor patenting activity over time." Although it is limited to granted patents (not applications) and covers only the US, it offers some interesting visualizations around locations and citations.

In a blog post last month, USPTO director Michelle Lee said the PatentsView tool is based on "the highest-quality patent data available," connecting 40 years' worth of information about inventors, their organizations, and their locations in unprecedented ways. The newly revamped interface presents three user-friendly starting points - relationship, locations, and comparison visualizations - which allow for deeper exploration and detailed views. However, through no fault of the agency, the USPTO dataset is rife with spelling errors, doesn't reflect patent reassignments, and doesn't resolve company subsidiaries or acquisitions.

This issue is not unique to the USPTO; patent offices around the world face similar barriers to presenting "clean" data. The first issue, spelling errors, simply reflects the fact that assignee information (among other fields, such as inventor names) is entered manually and hence prone to error and inconsistency. For example, "International Business Machines" has been spelled 1,200 different ways as a patent assignee in the USPTO data set over the last two decades.
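
To make the problem concrete, here is a minimal sketch of how such spelling variants might be reconciled with fuzzy string matching. The canonical list, the similarity threshold, and the misspelled input are illustrative assumptions, not actual USPTO records.

```python
# A minimal sketch of assignee-name normalization via fuzzy matching.
# Canonical list, threshold, and sample input are illustrative only.
from difflib import SequenceMatcher

CANONICAL = ["International Business Machines", "Silicon Laboratories", "Microsoft"]

def normalize(raw: str, threshold: float = 0.85) -> str:
    """Map a raw assignee string to its closest canonical name, if close enough."""
    cleaned = " ".join(raw.strip().split())  # collapse stray whitespace
    best, score = max(
        ((name, SequenceMatcher(None, cleaned.lower(), name.lower()).ratio())
         for name in CANONICAL),
        key=lambda pair: pair[1],
    )
    return best if score >= threshold else cleaned

print(normalize("Internatonal  Busines Machines"))  # -> International Business Machines
```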

In addition, PTO data doesn't get corrected or updated based on later corrections or patent reassignments. For example, patent US8176440 was originally - and incorrectly - assigned to Silicon Labs. My company, Innography, filed a certificate of correction to update the assignment, yet the USPTO data and PatentsView still don't reflect this. In fact, Innography research shows that nearly 20 percent of US patents are reassigned in their lifetimes, translating into a significant number of company portfolio errors based on this factor alone.
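
A small illustration of the principle at stake: the current owner of a patent is the last link in its assignment chain, not the original assignee. The record structure, dates, and the corrected owner's name below are hypothetical; only US8176440's original misassignment to Silicon Labs comes from the example above.

```python
# Hypothetical illustration: the current owner is the most recently
# recorded link in the assignment chain, not the original assignee.
from dataclasses import dataclass

@dataclass
class Assignment:
    patent: str
    assignee: str
    recorded: str  # ISO date (illustrative)

assignments = [
    Assignment("US8176440", "Silicon Labs", "2012-05-08"),             # original, erroneous
    Assignment("US8176440", "Hypothetical Owner Inc.", "2013-01-15"),  # certificate of correction
]

def current_owner(patent: str, chain: list[Assignment]) -> str:
    """Return the assignee on the most recently recorded assignment."""
    history = sorted((a for a in chain if a.patent == patent),
                     key=lambda a: a.recorded)
    if not history:
        raise KeyError(f"no assignments recorded for {patent}")
    return history[-1].assignee

print(current_owner("US8176440", assignments))  # -> Hypothetical Owner Inc.
```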

Finally, PTO data also doesn't reflect when one company acquires another, when there's a spinoff, or when a subsidiary files patents. Microsoft, for example, now owns all of LinkedIn's patents, even if the reassignments haven't been processed.
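
Resolving portfolios therefore requires mapping each filing entity to its ultimate parent. The sketch below walks a corporate-hierarchy table until it reaches a company with no recorded parent; apart from the LinkedIn-to-Microsoft edge mentioned above, the entries are invented for illustration.

```python
# Sketch of ultimate-parent resolution over a corporate-hierarchy table.
# Only the LinkedIn -> Microsoft edge comes from the article; the rest
# is invented for illustration.
PARENT = {
    "LinkedIn": "Microsoft",
    "Example Subsidiary LLC": "Example Holdings",  # hypothetical
}

def ultimate_parent(company: str) -> str:
    """Follow parent links until reaching a company with no recorded parent."""
    seen = set()
    while company in PARENT and company not in seen:
        seen.add(company)  # guard against cycles in messy data
        company = PARENT[company]
    return company

print(ultimate_parent("LinkedIn"))  # -> Microsoft
```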

As a result, PTO data falls far short of reflecting reality, in which patents and companies are bought and sold every day and data-entry errors are made and corrected. When it comes to representing real-world company patent portfolios, the data's accuracy is very low.

The Cost of Free Data
The USPTO aims to increase the transparency of patenting and invention processes. But if the quality of data and search results is questionable, what good is it to IP practitioners?

The patenting process yields rich information that supports economic research, prior-art searching, and the discovery of broader trends in filing patterns. However, that data was never intended to be used as-is to inform strategic business decisions such as in- and out-licensing, merger and acquisition activity, or portfolio pruning and maintenance.

It makes sense for PTOs to offer their data for free as a way to engage the community's interest in patenting processes. However, too many lightweight patent analytics tools use this flawed data verbatim to tout their "data quality" to IP professionals.

Many patent analyses - competitive benchmarking, acquisition analysis, negotiation preparation - start with a company's patent portfolio. In addition, just about every board-level question about patents requires accurate patent ownership information: "Are we ahead of or behind this competitor?" "What companies should we be worried about in this technology area?"

Poor data quality makes it difficult, if not impossible, to answer those questions accurately. To create the most accurate data set possible, companies must use other sources of information to crosscheck and improve patent data accuracy.

Innography data scientists process more than 2,000 company acquisitions annually, and our user base suggests another 5,000 updates each year. As a result, Innography has created more than 10 million data-correction rules over the last decade, which are continuously updated via machine learning and crowdsourcing.
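
The exact form of those rules isn't described here, but a rule-based correction pass might look something like the following sketch. The rule format and the regular expressions are assumptions for illustration only, not Innography's actual rules.

```python
# Assumed rule format: (compiled pattern, canonical replacement).
# Neither the patterns nor the format reflect Innography's actual rules.
import re

RULES = [
    (re.compile(r"internat\w*\s+bus\w*\s+mach\w*", re.I),
     "International Business Machines"),
    (re.compile(r"\bmicro\s*soft\b", re.I), "Microsoft"),
]

def apply_rules(assignee: str) -> str:
    """Run an assignee string through each correction rule in order."""
    for pattern, canonical in RULES:
        if pattern.search(assignee):
            return canonical
    return assignee

print(apply_rules("Internatl Busines Machines Corp"))  # -> International Business Machines
```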

Company leaders must be able to use patent reports to assess market opportunities and make strategic business decisions. This requires an IP analytics solution that reflects real-world changes rather than relying on outdated, poor-quality PTO assignee information.

More Stories By Tyron Stading

Tyron Stading is president and founder of Innography, and chief data officer for CPA Global. He has been named one of the "World's Leading IP Strategists" by IAM and one of the National Law Journal's "50 Intellectual Property Trailblazers & Pioneers." Before Innography, Tyron was an IBM worldwide industry solutions manager in the telecommunications and utilities sector and worked at several start-ups focused on mobile communications and network security. He has published multiple research papers and filed more than three dozen patents. Tyron holds a BS in Computer Science from Stanford University and an MS in Technology Commercialization from The University of Texas.
