|By William Schmarzo||
|September 25, 2016 11:00 AM EDT||
I was reading an interview with John Krafcik, CEO of Google’s Self-driving Car Project, in the August 8th issue of Bloomberg BusinessWeek. The article referenced a survey by AlixPartners where they found that 73% of people wanted autonomous vehicles. But when people had the option to have a steering wheel in the car, allowing optional full control to the driver, the acceptance rate jumped to 90%. This finding, that people are much more accepting of automation and new ideas when they have the option of control, is totally consistent with what we found with respect to how to deliver big data analytics.
The big data engagements we run for EMC focus on applying predictive and prescriptive analytics to deliver recommendations that help key decision makers become more effective at their jobs. For example: delivering recommendations to teachers on how best to group their students by subject area; to mechanics regarding which parts to replace when performing maintenance on a wind turbine; to physicians regarding which medications and treatments will likely deliver the best results given a patient’s overall wellness; to appraisers to help them more accurately determine the value of a property; or to underwriters to help them determine which loans to accept given a reasonable level of risk.
But how does one ensure that the business stakeholders, the humans in the process, are accepting of the analytics and recommendations that are being delivered to them? Being right doesn’t necessarily make you persuasive.
Effective Recommendations Put Humans in Control
We learned through several engagements that when we deliver recommendations to business stakeholders, or directly to customers involved in the process or decision, we had to provide three options to the humans in order to ensure their buy-in to the analytics. The three options we presented to the business stakeholders were:
- They could accept the recommendation and we would measure how effective the outcome was versus the model, or
- They could reject the recommendation and we would measure how effective the outcome was versus the model, or
- They could change the recommendation and we would measure how effective the outcome was versus the model.
Note: in some situations, we also offered a [MORE] option to get more details (usually presented as interactive charts or tables) in support of the recommendation. But after a while, we found that the users seldom selected that option.
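The accept/reject/change feedback loop above can be sketched in code. This is a minimal illustrative sketch, not anything from the actual EMC engagements: the `Action` enum, `Feedback` record, and `record_feedback` helper are all hypothetical names, and the idea is simply to log what the human chose alongside what the model proposed so the two can be compared later.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Action(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    CHANGE = "change"

@dataclass
class Feedback:
    recommendation: str            # the offer the model proposed
    action: Action                 # what the human chose to do with it
    final_decision: Optional[str]  # the offer actually made (None if rejected)

def record_feedback(recommendation: str, action: Action,
                    changed_to: Optional[str] = None) -> Feedback:
    """Log the human's decision so the eventual outcome can be
    measured against what the model would have done."""
    final = {
        Action.ACCEPT: recommendation,
        Action.REJECT: None,
        Action.CHANGE: changed_to,
    }[action]
    return Feedback(recommendation, action, final)

fb = record_feedback("10% off cable for 3 months", Action.CHANGE,
                     changed_to="50% off Internet for 6 months")
print(fb.action.value, "->", fb.final_decision)
# change -> 50% off Internet for 6 months
```

Whatever the human chooses, the same record flows into the measurement step described later, which is what lets the models learn from the humans.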
For example, the organization may be executing a customer retention business initiative. The organization could be applying big data analytics to deliver retention offers (new services, lower prices, more features, etc.) to their “high value, at risk” customers based upon the likelihood of the customer’s attrition (Customer Attrition Score) and the customer’s potential lifetime value (Maximum Customer LTV Score). So when Jane Smith calls the call center about a billing issue, the model would look up Jane’s “Customer Attrition Score” and “Maximum Customer LTV Score” to recommend a specific retention offer to the customer service representative.
Let’s say that the data indicates that Jane has a high likelihood of attrition (based upon a change in her usage behaviors and social media sentiment) and that she has a very high “Maximum Customer LTV Score” (based upon both the number of additional services that could be sold to Jane, plus her strong social media following). The prescriptive model may recommend the following retention offer:
[Offer Jane 10% off of her current cable service over the next 3 months]
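A policy like the one just described might look something like the sketch below. The thresholds, score ranges (0–100 assumed), and offer wording are all hypothetical; a real prescriptive model would learn these mappings from data rather than hard-code them.

```python
from typing import Optional

def recommend_offer(attrition_score: int, max_ltv_score: int) -> Optional[str]:
    """Illustrative policy mapping the two scores (assumed 0-100)
    to a retention offer. Thresholds and offers are made up."""
    if attrition_score < 50:
        return None  # low churn risk: no retention offer needed
    if max_ltv_score >= 80:
        # high-risk, high-value customer like Jane
        return "10% off current cable service for 3 months"
    if max_ltv_score >= 50:
        return "Free premium channel for 1 month"
    return "Courtesy account credit"

print(recommend_offer(attrition_score=85, max_ltv_score=90))
# 10% off current cable service for 3 months
```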
The call center representative has the options to:
- Accept the recommendation and make that offer to Jane, or
- Reject the recommendation and make no offer to Jane, or
- Change the recommendation based upon the conversation that the Customer Service Rep is having with Jane.
Let’s say that the Customer Service Rep decides that the best offer for Jane (based upon the conversation the Customer Service Rep is having with Jane) is to:
[Offer Jane 50% off new high-speed Internet service over next 6 months]
The customer service rep may have learned from the conversation that Jane’s biggest usage problem was streaming her favorite shows during her weekend binge watching. With this additional insight in hand, plus the knowledge from the scores about Jane’s likelihood to attrite and her potential lifetime value, the rep decided to change the recommendation to something more relevant to the problems Jane was actually having.
Test, Measure and Learn for Continuous Model Evolution
In all cases, we want to measure the effectiveness of whatever decisions are ultimately made in order to continuously refine the analytic models and scores. By constantly measuring the effectiveness of the recommendations AND allowing the humans in the process the freedom to test different ideas, the models can learn from the humans.
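One simple way to close this loop, sketched below under the assumption that each logged decision carries the action taken and a boolean retention outcome, is to compare outcome rates across the accept/reject/change groups. The function name and log shape are hypothetical; the point is that if the “change” group consistently outperforms the “accept” group, the humans know something the model doesn’t yet.

```python
from collections import defaultdict

def effectiveness_by_action(feedback_log):
    """Average retention rate per action (accept/reject/change),
    so human decisions can be compared against the model's."""
    totals = defaultdict(lambda: [0, 0])  # action -> [retained, count]
    for action, retained in feedback_log:
        totals[action][0] += int(retained)
        totals[action][1] += 1
    return {action: retained / count
            for action, (retained, count) in totals.items()}

# Hypothetical log of (action, customer_retained) pairs
log = [("accept", True), ("accept", False),
       ("change", True), ("change", True), ("reject", False)]
print(effectiveness_by_action(log))
# {'accept': 0.5, 'change': 1.0, 'reject': 0.0}
```

A gap like the one in this toy output (overridden recommendations retaining more customers than accepted ones) is exactly the signal that would feed the next round of model refinement.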
More importantly, there will always be some humans who produce better results than the models due to their experience, training, and intuition (or, in some cases, dumb luck). Humans may also be able to react and adjust to new information (coming from the interaction they are having with the customer) faster than the models can be updated and re-run. In the end, involving humans as a key factor in the analytics process ensures that the models don’t go stale and that the models are constantly improving.
As the BusinessWeek article highlighted, humans will usually be more receptive to new ideas and new technologies if they feel like they are still in control. If you want your decision makers to accept the recommendations of your analytics, then you had better allow the humans an opportunity to provide feedback to the models. This is a clear win-win-win for everyone – the data scientists who are building the analytic models, the business stakeholders who are interacting with the analytic results, and the customers to whom we are trying to provide a differentiated experience.