Vishwa Kolla, AVP, Head of Advanced Analytics, John Hancock Insurance
In 1997, Deep Blue beat then-reigning world chess champion Garry Kasparov. In 2011, Watson beat then-reigning Jeopardy! champion Ken Jennings. In 2016, AlphaGo beat then-reigning Go champion Lee Sedol. It is not hard to see why AI is top of mind for everyone. With digital assistants (Siri, Alexa, Google Home, Cortana), chatbots, self-driving cars, face passwords, algorithmic trading, personalized news feeds, personalized product and movie recommendations and, for the techno-geeks, Eugene Goostman, which arguably passed the Turing test, it appears that AI is about to replace humans. The thought has merit and is well grounded: AI does very well on countless highly specific use cases. That said, we are very far from building a general-purpose AI engine with anything like a human's breadth of ability.
Broadly speaking, AI (and the automation that comes with it) delivers economies of scale. If there is a well-defined, repetitive task, it can likely be automated. But while one could build a system to beat Ken Jennings, one cannot take that same system and play Go; one would have to build a totally different system. Humans, on the other hand, deliver economies of scope. As humans, we can do a variety of tasks, learn on the fly, process on the go and draw on a seemingly infinite memory. As you read this article, you are probably multi-tasking, processing information and jogging your memory while associating words, phrases, contexts and scenes from movies.
Although AI feels ubiquitous only now, it was founded as an academic discipline and a branch of computer science 61 years ago, in 1956. Though related, AI methods differ from traditional machine learning (ML) methods. Traditional ML generally includes supervised and unsupervised learning, dimensionality reduction, anomaly detection and non-linear learning. AI use cases include image recognition (facial passwords on the iPhone X), reasoning (medical diagnosis), building knowledge banks (Watson), and working with speech and natural language, spanning processing, generation and translation (digital assistants). In both realms the list is illustrative, not exhaustive.
Today, if there is one thing there is no dearth of, it is data. The challenge is no longer getting access to data; it is managing it and using it well.
Firms that have embedded data and algorithms (ML and AI) into everyday decision making have enjoyed a sustained Compound Annual Growth Rate (CAGR) of 7-12 percent over the 13 years since 2001, handily beating their respective industry averages of 1-5 percent. A firm that is not thinking along these lines may be leaving a lot of money on the table and runs an obsolescence risk. A few thoughts and actions can help a firm along this journey.
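The gap between those growth rates compounds dramatically. A quick back-of-the-envelope calculation, using only the CAGR ranges cited above (the endpoints chosen here are illustrative, not from the article), shows how far apart the two groups end up after 13 years:

```python
# Compound growth over 13 years at the CAGRs cited above.
def growth_multiple(cagr: float, years: int = 13) -> float:
    """Total growth multiple from compounding `cagr` for `years` years."""
    return (1 + cagr) ** years

low_adopter = growth_multiple(0.07)   # low end of the data-driven firms' range
high_adopter = growth_multiple(0.12)  # high end of the data-driven firms' range
industry = growth_multiple(0.05)      # high end of the industry-average range

print(f"7% CAGR over 13 years:  {low_adopter:.2f}x")   # ~2.41x
print(f"12% CAGR over 13 years: {high_adopter:.2f}x")  # ~4.36x
print(f"5% CAGR over 13 years:  {industry:.2f}x")      # ~1.89x
```

Even a data-driven firm at the bottom of its range ends up roughly 28 percent larger than an average competitor at the top of its range; at the top end, it is more than twice as large.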
Establishing a baseline and making incremental improvements will help a firm solve the right problems well as opposed to solving any problem
First: The barrier to entry for embedding AI is lower than you might think: While building AI applications seems far out and complex, analytics shops have more or less instant access to data, large amounts of compute (potentially in the cloud) and capabilities in the form of open-source packages. Some freely available and widely used packages are Google's TensorFlow, H2O.ai and scikit-learn, and there are plenty of commercially available packages as well. It is not a stretch for a traditional analytics shop to start learning and implementing these (seemingly) new methods. The cost of getting started is rather low; getting good at them requires a conscious investment.
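To illustrate just how low that barrier is, here is a minimal sketch using scikit-learn, one of the open-source packages named above. The data is synthetic and the model choice is a placeholder; a real shop would substitute its own features and outcome:

```python
# Minimal supervised-learning sketch with scikit-learn.
# Synthetic data stands in for any tabular business problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Generate a synthetic binary-outcome dataset (e.g., buy / don't buy).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an off-the-shelf model and evaluate on held-out data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.2f}")
```

A dozen lines of freely available tooling gets a shop from raw tabular data to a validated model, which is exactly why the cost of getting started is low even if getting good takes investment.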
Second: Start with an existing process that can benefit from augmentation with data and algorithms: Several areas might fit this criterion. One potential area is managing the customer value chain. Take a firm in the life insurance industry as an example.
In the prospecting realm, instead of running a traditional spray-and-pray marketing campaign, it is much more cost-effective to be targeted. By building and leveraging propensity models (to respond, to apply and to qualify), one could potentially cut campaign costs by a third to a half.
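One way a propensity-to-respond model could be sketched is below. The features, the synthetic response behavior and the choice of logistic regression are all hypothetical illustrations; the article does not prescribe any particular algorithm:

```python
# Hypothetical propensity-to-respond model: score prospects, then
# contact only the top-scoring slice instead of the whole list.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
# Hypothetical prospect features (e.g., scaled age, income, prior inquiries).
X = rng.normal(size=(n, 3))
# Synthetic ground truth: response probability driven by two of the features.
logits = 0.8 * X[:, 0] + 0.5 * X[:, 2] - 2.0
responded = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, responded)
scores = model.predict_proba(X)[:, 1]

# Mail only the top 30% by propensity: roughly a two-thirds cost reduction.
top = scores >= np.quantile(scores, 0.70)
capture_rate = responded[top].sum() / responded.sum()
print(f"Mailing 30% of the list captures {capture_rate:.0%} of responders")
```

On this synthetic data the top three deciles capture well over their proportional share of responders, which is the mechanism behind the cost savings claimed above: most of the response, at a fraction of the campaign spend.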
In the acquisition realm, consumers generally say they prefer lower prices, but when we observe their actions closely, they typically weigh convenience over price. By building interfaces that make it convenient to submit applications (bots and web apps), automatically request necessary evidence (triage), flag potential misrepresentation (fraud detection), underwrite (risk classification) and manage the buying process seamlessly, drop-out from the traditional marketing funnel (awareness, consideration, purchase) will likely be low. Absent such a process, a firm risks feeding hard-won leads to its competition.
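The misrepresentation-flagging step mentioned above is, at heart, an anomaly-detection problem. As a hedged sketch (synthetic data, hypothetical features, and an isolation forest chosen purely for illustration), unusual applications can be surfaced for human review rather than blocking the seamless flow for everyone else:

```python
# Sketch: flag anomalous applications for review with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# 990 typical applications plus 10 with implausible field combinations.
typical = rng.normal(loc=0.0, scale=1.0, size=(990, 4))
unusual = rng.normal(loc=6.0, scale=0.5, size=(10, 4))
applications = np.vstack([typical, unusual])

# Expect ~1% of applications to warrant a closer look (contamination).
detector = IsolationForest(contamination=0.01, random_state=1)
flags = detector.fit_predict(applications)  # -1 means flagged for review

flagged_idx = np.where(flags == -1)[0]
print(f"Flagged {len(flagged_idx)} of {len(applications)} applications")
```

The design point is that only the small flagged slice gets routed to a slower manual path, so convenience is preserved for the overwhelming majority of applicants.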
In the nurture realm, with the advent of telematics, activity and biometric trackers are becoming ubiquitous. Data from these devices can be very helpful in understanding both customer behaviors and customer needs. Sophisticated algorithms can then help bubble up insights and craft nudge recommendations that further enrich the customer experience and improve stickiness, likely improving profitability as well.
Third: Weigh sophistication of implementation over sophistication of algorithms: Establishing a baseline and making incremental improvements will help a firm solve the right problems well, as opposed to solving just any problem. Such a construct will also help quantify the value that investments in AI and analytics practices bring to the table, while giving analytics professionals tangible and challenging goals.
In a nutshell, AI and analytics can be embedded into many areas of everyday decision making. Doing so can help a firm further its customer-centricity goals and likely improve profitability. Late adopters face an obsolescence risk and will likely have to work harder to preserve their best-customer mix.