Data and Economics 101

As more organizations try to determine where best to deploy their limited budgets to support data and analytics initiatives, they realize a need to ascertain the financial value of their data and analytics – which means basic economic concepts are coming into play.  While many of you probably took an economics class in college not too long ago, some more “seasoned” readers may be rusty.

This topic began with a blog I wrote several months ago titled “Determining the Economic Value of Data,” and with the key observation that started that conversation:

Data is an unusual currency. Most currencies exhibit a one-to-one transactional relationship. For example, the quantifiable value of a dollar is considered to be finite – it can only be used to buy one item or service at a time, or a person can only do one paid job at a time. But measuring the value of data is not constrained by those transactional limitations. In fact, data currency exhibits a network effect, where data can be used at the same time across multiple use cases thereby increasing its value to the organization. This makes data a powerful currency in which to invest.

So to better understand how economics can help determine the value of an organization’s data and analytics, I sought the help of an old friend who is passionate about applying economics in business. Vince Sumpter (Twitter: @vsumpter) helped me deepen my understanding of some core economic concepts and consider where and how they apply in a business world that is looking for ways to determine the financial – or economic – value of its data and analytics.

The economic concepts that seem to have the most bearing on determining the economic value of data (and the resulting analytics), and that this blog will cover, include:

  • Scarcity
  • Postponement Theory
  • Efficiency
  • Multiplier Effect
  • Price Elasticity
  • Capital

It is our hope that this blog fuels some creative thinking and debate as we contemplate how organizations need to apply basic economic concepts to these unusual digital assets – data and analytics.

Defining Economics

I found the two definitions of “economics” below the most useful:

  • Economics is the science that deals with the production, distribution, and consumption of commodities. Economics is generally understood to concern behavior that, given the scarcity of means, arises to achieve certain ends[1].
  • Economics is a broad term referring to the scientific study of human action, particularly as it relates to human choice and the utilization of scarce resources[2].

I pulled together what I felt were some of the key phrases to come up with the following definition of the “economics of data” for purposes of this blog:

Economics of Data:  The science of human choice and behaviors as they relate to the production, distribution and consumption of scarce data and analytic resources.

Economics is governed by the law of supply and demand, which describes the interaction between the supply of a resource and the demand for that resource – that is, the effect that the availability of a product or service and the demand for it have on its price. Generally, low supply and high demand push the price up, while greater supply and lower demand push the price down.

Now, we will explore the most relevant economic concepts in the context of the Economics of Data.

Scarcity

Scarcity refers to limitations—insufficient resources, goods, or abilities to achieve the desired ends. Figuring out ways to make the best use of scarce resources or find alternatives is fundamental to economics.

Figure 1: Scarcity Ramifications

Scarcity is probably the heart of the economics discussion and ties directly to the law of supply and demand.  Organizations do not have unlimited financial, human, or time resources; consequently, as we discussed previously, they must prioritize their data and analytic resources against their best opportunities.  At its heart, scarcity forces organizations to do two things that they do not do well:  prioritize and focus (see “Big Data Success: Prioritize ‘Important’ Over ‘Urgent’”).

Scarcity also plays out in the inability, or the unwillingness, of the organization to share all of its data across all of its business units.  For some business units, scarcity drives their value to the organization; that is, he who owns the data owns the power.  This short-sighted mentality manifests itself across organizations in the form of data silos and IT “Shadow Spend.”  For example, if you are a Financial Services organization trying to predict your customers’ lifetime value, analytics that optimize individual business units (checking, savings, retirement, credit cards, mortgage, car loans, wealth management) without optimizing the larger business objective (predicting customer lifetime value) could easily lead to suboptimal or even wrong decisions about which customers to prioritize, with what offers, at what times, through what channels.

Scarcity has the biggest impact on the prioritization and optimization of scarce data and analytic resources, including:

  • Are your IT resources focused on capturing or acquiring the most important data in support of the organization’s key business initiatives?
  • Are your data science resources focused on the development of the top priority analytics?
  • Does your technical and cultural environment support and even reward the capture, refinement, and re-use of the analytic results across multiple business units?

Consequently, the ability to prioritize (see “Prioritization Matrix: Aligning Business and IT On The Big Data Journey”) and to carefully balance the law of supply and demand is critical to ensuring that your data and analytics resources are being directed at the “optimal” projects.
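As a minimal, hypothetical illustration of that prioritization (the use cases, scores, and weights below are made-up assumptions, not the Prioritization Matrix from the referenced blog), a simple scoring in Python might rank candidate projects by business value and implementation feasibility so that scarce data and analytic resources go to the highest-scoring work:

```python
# Hypothetical illustration: rank candidate use cases when data science and
# IT capacity are scarce. Scores are assumed 1-10 inputs from business and
# IT stakeholders; the weighting is arbitrary.

use_cases = [
    {"name": "Customer lifetime value", "business_value": 9, "feasibility": 6},
    {"name": "Fraud reduction", "business_value": 8, "feasibility": 8},
    {"name": "Campaign targeting", "business_value": 6, "feasibility": 9},
]

def priority_score(uc, value_weight=0.6, feasibility_weight=0.4):
    """Blend business value and feasibility into a single priority score."""
    return value_weight * uc["business_value"] + feasibility_weight * uc["feasibility"]

for uc in sorted(use_cases, key=priority_score, reverse=True):
    print(f"{uc['name']}: {priority_score(uc):.1f}")
```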

Postponement Theory

Postponement is a decision to postpone a decision (which is itself a decision). It can occur when one party seeks to gain additional information about the decision, to obtain better terms from the other party, or both.

Postponement has the following ramifications from an economics of data perspective:

  • Case #1: Organizations may decide to postpone a decision in order to gather more data and/or build more accurate analytics that dramatically improve the probability of making a “better” decision.
  • Case #2: People and organizations may postpone a decision in order to get better terms, especially given certain time constraints (e.g., car dealers get very aggressive with their terms near the end of the quarter).

While Case #2 may not have an impact on the economics of your organization’s data and analytics, Case #1 has a direct impact.  To make a postponement decision, organizations need to understand the following (a simple expected-cost sketch appears after the list):

  • What is the estimated effectiveness of the current decision given Type I/Type II decision risks (where a Type I error is a “False Positive” error and a Type II error is a “False Negative” error)? See “Understanding Type I and Type II Errors” for more details on Type I/Type II errors.
  • What data might be needed to improve the effectiveness of that decision?
  • How much more accurate can the decision be made given these new data sources and additional data science time?
  • What are the risks of Type I/Type II errors (the costs associated with making the wrong decision)?
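Here is a minimal sketch, with made-up error rates, costs, and improvement estimates, of how one might weigh the expected cost of deciding now against postponing until more data (and a more accurate model) is available:

```python
# Hypothetical expected-cost comparison: decide now vs. postpone to gather
# more data. All error rates and costs below are illustrative assumptions.

def expected_error_cost(false_positive_rate, false_negative_rate,
                        cost_false_positive, cost_false_negative):
    """Expected cost per decision from Type I (false positive) and
    Type II (false negative) errors."""
    return (false_positive_rate * cost_false_positive +
            false_negative_rate * cost_false_negative)

# Decide now with today's model.
cost_now = expected_error_cost(0.10, 0.15, 5_000, 20_000)

# Postpone: assume new data sources roughly halve both error rates,
# but waiting carries its own cost (lost time, delayed benefits).
cost_of_delay = 1_000
cost_later = expected_error_cost(0.05, 0.08, 5_000, 20_000) + cost_of_delay

print(f"Decide now: ${cost_now:,.0f} expected cost per decision")
print(f"Postpone:   ${cost_later:,.0f} expected cost per decision")
print("Postpone" if cost_later < cost_now else "Decide now")
```

If the expected improvement in decision accuracy outweighs the cost of waiting, postponement is the economically rational choice; if not, the organization is better off deciding with the data it already has.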

Efficiency

Efficiency is a relationship between ends and means. When we call a situation inefficient, we are claiming that we could achieve the desired ends with less means, or that the means employed could produce more of the ends desired.

Data and analytics play a major role in driving efficiency improvements by identifying operational deficiencies and recommending (via prescriptive analytics) how to improve operational efficiency.

Aggregating the operational insights gained from these efficiency improvements might also lead to new monetization opportunities, because the organization can pool usage patterns across all of its customers and business constituents.  For example, from the aggregated performance data, organizations could create benchmark, share, and index calculations that customers and partners could use to measure their own efficiency and set goals around efficiency optimization.
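As a minimal sketch of that idea (the customers, metrics, and field names are hypothetical), an organization could pool anonymized usage metrics across customers and express each customer’s efficiency as an index against the peer-group benchmark:

```python
# Hypothetical benchmarking sketch: pool anonymized usage metrics across
# customers and index each customer's efficiency against the peer median.
# Field names and figures are illustrative assumptions.

from statistics import median

usage = {
    "customer_a": {"output_units": 1_200, "machine_hours": 300},
    "customer_b": {"output_units": 900, "machine_hours": 250},
    "customer_c": {"output_units": 1_500, "machine_hours": 310},
}

# Efficiency here is simply output per machine hour.
efficiency = {c: m["output_units"] / m["machine_hours"] for c, m in usage.items()}
benchmark = median(efficiency.values())

for customer, eff in efficiency.items():
    index = 100 * eff / benchmark  # 100 = at benchmark, >100 = above peers
    print(f"{customer}: efficiency {eff:.2f}, index {index:.0f}")
```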

Multiplier Effect

The multiplier effect refers to the increase in final income arising from any new injection of spending. The size of the multiplier depends upon households’ marginal propensity to consume (MPC) or, equivalently, their marginal propensity to save (MPS).

The Multiplier Effect is one of the most important concepts developed by J.M. Keynes to explain the determination of income and employment in an economy. The theory of the multiplier has been used to explain the cumulative upward and downward swings of the trade cycles that occur in a free-enterprise capitalist economy. When investment in an economy rises, it can have a multiple and cumulative effect on national income, output, and employment.

The multiplier is, therefore, the ratio of the increment in income to the increment in investment.
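For reference, here is the standard Keynesian formulation (consistent with the multiplier sources listed at the end of this blog), where k is the multiplier, ΔY is the change in income, ΔI is the change in investment, and MPC + MPS = 1:

```latex
k = \frac{\Delta Y}{\Delta I} = \frac{1}{1 - MPC} = \frac{1}{MPS}
```

For example, an MPC of 0.8 implies a multiplier of 5: each additional dollar of investment ultimately generates roughly five dollars of income.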

When applied to our thinking about the Economics of Data, the multiplier effect embodies the fact that our efforts to develop a new data source, or a derived analytic measure, could have that same multiplier effect if the new data/analytics were leveraged beyond the initial project.

For example, when CPG manufacturers worked with retailers to implement the now ubiquitous UPC standard in the early 1980s, their primary motivation was a desire to drive more consistent pricing at the cash register. Few imagined the knock-on benefits that would accrue from having a much deeper understanding of actual product movement through the supply chain, let alone the shift in the balance of power from CPG manufacturers to retailers that subsequently ensued!

Figure 2: Multiplier Effect

Price Elasticity

Price elasticity of demand is a quantitative measure of consumer behavior that indicates how the quantity demanded of a product or service changes when its price increases or decreases. Price elasticity of demand is calculated by dividing the percent change in the quantity demanded by the percent change in price.
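Here is a minimal worked example of that calculation (the price and quantity figures are made up for illustration):

```python
# Hypothetical worked example of price elasticity of demand:
# percent change in quantity demanded divided by percent change in price.

def price_elasticity(q_old, q_new, p_old, p_new):
    """Simple (point) elasticity: % change in quantity / % change in price."""
    pct_change_quantity = (q_new - q_old) / q_old
    pct_change_price = (p_new - p_old) / p_old
    return pct_change_quantity / pct_change_price

# A 10% price increase (100 -> 110) that cuts demand by only 2%
# (1,000 -> 980) yields an elasticity of -0.2: relatively inelastic demand.
elasticity = price_elasticity(1_000, 980, 100, 110)
print(f"Price elasticity of demand: {elasticity:.2f}")
```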

In today’s big data environment, the demand for data science resources seems almost perfectly inelastic with respect to their price (i.e., their salaries); inelastic describes the situation in which the quantity demanded or supplied of a good or service changes relatively little when the price of that good or service changes.  That means that the demand for data science resources is only slightly affected when the price of data science resources increases.

This price inelasticity of data science resources can only be addressed in a few ways:  train (and really certify) more data scientists or dramatically improve the capabilities and ease-of-use of data science tools.

However, there is another option:  train your business users to “think like a data scientist.”  The key to this process is training your business users to embrace the power of “might” in collaborating with the data science team to identify those variables and metrics that might be better predictors of performance.  We have now seen across a number of projects how coupling the creative thinking of the business users with the data scientists can yield dramatically better predictions (see forthcoming blog:  “Data Science: Identifying Variables That Might Be Better Predictors”).

The “Thinking Like A Data Scientist” process will uncover a wealth of new data sources that might yield better predictors of performance.  It is then up to the data science team to employ their different data transformation, data enrichment and analytic algorithms to determine which variables and metrics are better predictors of performance.
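As a minimal, hypothetical sketch of that screening step (the variables, synthetic data, and model choice are illustrative assumptions, not the process from the referenced blog), the data science team might rank candidate variables by how much each contributes to a simple predictive model:

```python
# Hypothetical screening of candidate variables: fit a simple model on
# synthetic data and rank features by importance. Variable names, data,
# and model choice are illustrative assumptions only.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500

# Candidate predictors proposed by business users and data scientists.
features = {
    "visit_frequency": rng.normal(10, 3, n),
    "avg_basket_size": rng.normal(50, 15, n),
    "support_tickets": rng.poisson(2, n).astype(float),
}
X = np.column_stack(list(features.values()))

# Synthetic target: customer value driven mostly by basket size and visits.
y = (0.7 * features["avg_basket_size"]
     + 0.3 * features["visit_frequency"]
     + rng.normal(0, 5, n))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda kv: kv[1], reverse=True):
    print(f"{name}: importance {importance:.2f}")
```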

Capital

Capital is already-produced durable goods and assets, or any non-financial asset that is used in production of goods or services.  Capital is one of three factors of production, the others being land and labor.

Adam Smith defined capital as “that part of a man’s stock which he expects to afford him revenue”.  I like Adam Smith’s definition because the ultimate economic goal of data and analytics is to “afford organizations revenue.”  And while it may be possible to generate that revenue through the sale of data and analytics, for most organizations data and analytics as capital get converted into revenue in four ways:

  • Driving the on-going optimization of key business processes (e.g., reducing fraud by 3% annually, increasing customer retention 2.5% annually)
  • Reducing exposure to risk through the management of security, compliance, regulations, and governance (avoiding security breaches, litigation, fines, theft, etc.), building customer trust and loyalty while ensuring business continuity and availability
  • Uncovering new revenue opportunities through superior customer, product and operational insights that can identify unmet customer, partner and market needs
  • Delivering a more compelling, more prescriptive customer experience that not only increases customer satisfaction and advocacy but also increases the organization’s success in recommending new products and services to the most qualified, highest-potential customers and prospects

Probably the most important economic impact on data and analytics is the role of human capital.  Economists regard expenditures on education, training, and medical care as investments in human capital. They are called human capital because people cannot be separated from their knowledge, skills, health, or values in the way they can be separated from their financial and physical assets.  These human investments can raise earnings, improve health, or add to a person’s good habits over one’s lifetime.  But maybe more importantly, an organization’s human capital can be transformed to “think differently” about the application of data and analytics to power the organization’s business models.

Summary

As my friend Jeff Abbott said after reviewing this blog: “What did I do wrong to have to review this blog?”

While the economic concepts discussed in this blog likely do not apply to your day-to-day jobs, more and more I expect that the big data (data and analytics) conversation will center on basic economic concepts as organizations seek to ascertain the economic value of their data and analytics. Data and analytics exhibit unusual behaviors from an asset and currency perspective, and applying economic concepts to these behaviors may help organizations as they seek to prioritize and optimize their data and analytic investments.

So, sorry for bringing back bad college memories about your economics classes, but hey, no one said that big data was going to be only fun!

 

Sources:

http://www.econlib.org/library/Topics/HighSchool/KeyConcepts.html
http://www.econlib.org/library/Topics/HighSchool/Scarcity.html
http://www.economicsdiscussion.net/keynesian-economics/keynes-theory/keynes-theory-of-investment-multiplier-with-diagram/10363
http://www.tutor2u.net/economics/reference/multiplier-effect
http://www.investopedia.com/university/economics/economics3.asp
http://www.econlib.org/library/Topics/HighSchool/ElasticityofDemand.html
http://www.econlib.org/library/Topics/HighSchool/HumanCapital.html
[1] http://www.dictionary.com/browse/economics
[2] http://www.investopedia.com/terms/e/economics.asp?lgl=no-infinite

More Stories By William Schmarzo

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business”, is responsible for setting the strategy and defining the Big Data service line offerings and capabilities for the EMC Global Services organization. As part of Bill’s CTO charter, he is responsible for working with organizations to help them identify where and how to start their big data journeys. He has written several white papers, is an avid blogger, and is a frequent speaker on the use of Big Data and advanced analytics to power organizations’ key business initiatives. He also teaches the “Big Data MBA” at the University of San Francisco School of Management.

Bill has nearly three decades of experience in data warehousing, BI and analytics. Bill authored EMC’s Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements, and co-authored with Ralph Kimball a series of articles on analytic applications. Bill has served on The Data Warehouse Institute’s faculty as the head of the analytic applications curriculum.

Previously, Bill was the Vice President of Advertiser Analytics at Yahoo and the Vice President of Analytic Applications at Business Objects.

Latest Stories
SYS-CON Events announced today that CollabNet, a global leader in enterprise software development, release automation and DevOps solutions, will be a Bronze Sponsor of SYS-CON's 20th International Cloud Expo®, taking place from June 6-8, 2017, at the Javits Center in New York City, NY. CollabNet offers a broad range of solutions with the mission of helping modern organizations deliver quality software at speed. The company’s latest innovation, the DevOps Lifecycle Manager (DLM), supports Value S...
NHK, Japan Broadcasting, will feature the upcoming @ThingsExpo Silicon Valley in a special 'Internet of Things' and smart technology documentary that will be filmed on the expo floor between November 3 to 5, 2015, in Santa Clara. NHK is the sole public TV network in Japan equivalent to the BBC in the UK and the largest in Asia with many award-winning science and technology programs. Japanese TV is producing a documentary about IoT and Smart technology and will be covering @ThingsExpo Silicon Val...
Join IBM November 2 at 19th Cloud Expo at the Santa Clara Convention Center in Santa Clara, CA, and learn how to go beyond multi-speed it to bring agility to traditional enterprise applications. Technology innovation is the driving force behind modern business and enterprises must respond by increasing the speed and efficiency of software delivery. The challenge is that existing enterprise applications are expensive to develop and difficult to modernize. This often results in what Gartner calls ...
Translating agile methodology into real-world best practices within the modern software factory has driven widespread DevOps adoption, yet much work remains to expand workflows and tooling across the enterprise. As models evolve from pockets of experimentation into wholescale organizational reinvention, practitioners find themselves challenged to incorporate the culture and architecture necessary to support DevOps at scale. In his session at @DevOpsSummit at 20th Cloud Expo, Anand Akela, Senior...
@GonzalezCarmen has been ranked the Number One Influencer and @ThingsExpo has been named the Number One Brand in the “M2M 2016: Top 100 Influencers and Brands” by Analytic. Onalytica analyzed tweets over the last 6 months mentioning the keywords M2M OR “Machine to Machine.” They then identified the top 100 most influential brands and individuals leading the discussion on Twitter.
The 20th International Cloud Expo has announced that its Call for Papers is open. Cloud Expo, to be held June 6-8, 2017, at the Javits Center in New York City, brings together Cloud Computing, Big Data, Internet of Things, DevOps, Containers, Microservices and WebRTC to one location. With cloud computing driving a higher percentage of enterprise IT budgets every year, it becomes increasingly important to plant your flag in this fast-expanding business opportunity. Submit your speaking proposal ...
The age of Digital Disruption is evolving into the next era – Digital Cohesion, an age in which applications securely self-assemble and deliver predictive services that continuously adapt to user behavior. Information from devices, sensors and applications around us will drive services seamlessly across mobile and fixed devices/infrastructure. This evolution is happening now in software defined services and secure networking. Four key drivers – Performance, Economics, Interoperability and Trust ...
NHK, Japan Broadcasting, will feature the upcoming @ThingsExpo Silicon Valley in a special 'Internet of Things' and smart technology documentary that will be filmed on the expo floor between November 3 to 5, 2015, in Santa Clara. NHK is the sole public TV network in Japan equivalent to the BBC in the UK and the largest in Asia with many award-winning science and technology programs. Japanese TV is producing a documentary about IoT and Smart technology and will be covering @ThingsExpo Silicon Val...
The Internet of Things is clearly many things: data collection and analytics, wearables, Smart Grids and Smart Cities, the Industrial Internet, and more. Cool platforms like Arduino, Raspberry Pi, Intel's Galileo and Edison, and a diverse world of sensors are making the IoT a great toy box for developers in all these areas. In this Power Panel at @ThingsExpo, moderated by Conference Chair Roger Strukhoff, panelists discussed what things are the most important, which will have the most profound e...
SYS-CON Events announced today that Twistlock, the leading provider of cloud container security solutions, will exhibit at SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. Twistlock is the industry's first enterprise security suite for container security. Twistlock's technology addresses risks on the host and within the application of the container, enabling enterprises to consistently enforce security policies, monitor...
Multiple data types are pouring into IoT deployments. Data is coming in small packages as well as enormous files and data streams of many sizes. Widespread use of mobile devices adds to the total. In this power panel at @ThingsExpo, moderated by Conference Chair Roger Strukhoff, panelists will look at the tools and environments that are being put to use in IoT deployments, as well as the team skills a modern enterprise IT shop needs to keep things running, get a handle on all this data, and deli...
Automation is enabling enterprises to design, deploy, and manage more complex, hybrid cloud environments. Yet the people who manage these environments must be trained in and understanding these environments better than ever before. A new era of analytics and cognitive computing is adding intelligence, but also more complexity, to these cloud environments. How smart is your cloud? How smart should it be? In this power panel at 20th Cloud Expo, moderated by Conference Chair Roger Strukhoff, pane...
With billions of sensors deployed worldwide, the amount of machine-generated data will soon exceed what our networks can handle. But consumers and businesses will expect seamless experiences and real-time responsiveness. What does this mean for IoT devices and the infrastructure that supports them? More of the data will need to be handled at - or closer to - the devices themselves.
Building a cross-cloud operational model can be a daunting task. Per-cloud silos are not the answer, but neither is a fully generic abstraction plane that strips out capabilities unique to a particular provider. In his session at 20th Cloud Expo, Chris Wolf, VP & Chief Technology Officer, Global Field & Industry at VMware, will discuss how successful organizations approach cloud operations and management, with insights into where operations should be centralized and when it’s best to decentraliz...
In recent years, containers have taken the world by storm. Companies of all sizes and industries have realized the massive benefits of containers, such as unprecedented mobility, higher hardware utilization, and increased flexibility and agility; however, many containers today are non-persistent. Containers without persistence miss out on many benefits, and in many cases simply pass the responsibility of persistence onto other infrastructure, adding additional complexity.