Monday, June 13, 2016

If Computers Could Read Your Customer Survey Responses…


Written by: Peter Elliot

 

Well, it all depends on what you mean by ‘read’. Such a small word, yet it implies so much based upon context. If you told me you read this article, it means you understood it. When a machine ‘reads’ a file, it typically means load and scan. When a machine ‘reads’ a survey response, it scans the text and applies predetermined algorithms to the words. It cannot possibly understand the meaning of the text; if it did, it truly would be artificially intelligent.

A few years ago I led a project to analyse the written interactions between support agents and customers to gain insight into the reasons for their calls. We set out to analyse a corpus of text-based interactions between customers and service personnel, derive the topics and themes of the discussions, and use them to understand more about the company’s products and why customers need to call about their use. Thanks to a great team of analysts and data scientists, we built a prototype and celebrated a 70% success rate. While this may not seem worth celebrating, in the world of text analysis it’s quite good.

The methodology is complex, but here are the basic steps. The text is cleaned by removing words with little meaning, such as pronouns (stop words). Similar words, such as plurals, are merged by stemming (shortening) them. What is left is a dictionary of meaningful words which can then be analysed. Clustering algorithms then scan for words that commonly occur together, and these clusters are surfaced to an SME (Subject Matter Expert) who answers the question ‘If you see these words together in a piece of text, what topic would you think is being discussed?’ Their answer becomes a document tag, and common tags can be counted and displayed graphically. To test the validity of the output, a sample of the tagged documents is read by a person who compares the tag to the text and notes whether the tag correctly describes the subject. This is how we assessed our 70% success rate.
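
To make those steps concrete, here is a rough sketch in Python using NLTK and scikit-learn. The sample conversations, the cluster count and the five-term cluster summaries are illustrative placeholders, not the data or code from the project described above.

```python
import re

from nltk.stem import PorterStemmer
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS, TfidfVectorizer

# A handful of invented interactions; the real corpus held thousands.
documents = [
    "The customer cannot log in to the billing portal after a password reset",
    "Invoice shows duplicate charges for the monthly subscription",
    "The app crashes when uploading photos from the phone",
    "Customer was charged twice on this month's invoice",
]

stemmer = PorterStemmer()

def clean_and_stem(text):
    # Step 1: keep alphabetic tokens only and drop stop words such as pronouns.
    tokens = [t for t in re.findall(r"[a-z]+", text.lower())
              if t not in ENGLISH_STOP_WORDS]
    # Step 2: merge similar forms ('charge', 'charges', 'charging') by stemming.
    return [stemmer.stem(t) for t in tokens]

# Build the dictionary of meaningful, stemmed words and weight them per document.
vectorizer = TfidfVectorizer(tokenizer=clean_and_stem, lowercase=False,
                             token_pattern=None)
X = vectorizer.fit_transform(documents)

# Cluster documents whose remaining words tend to occur together.
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(X)

# Surface the top terms of each cluster so an SME can answer:
# "if you saw these words together, what topic would you think is being discussed?"
terms = vectorizer.get_feature_names_out()
for cluster_id, centre in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[i] for i in centre.argsort()[::-1][:5]]
    print(f"Cluster {cluster_id}: {', '.join(top_terms)}")
```

The SME’s answers for each printed cluster would then be stored as tags, and the manual validation step described above would sample tagged documents to estimate the success rate.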

We humans automatically make assumptions when reading text, and one of them is context. We know up front, for instance, whether the conversation is about a disk storage unit or a fridge freezer. Our SME automatically applies this knowledge when assigning a tag. Machines know nothing about context unless we provide that information.

While the SME is asked to provide the understanding, the machine can apply it methodically to large numbers of conversations very quickly. The next nut to crack is to get the machine, based on past experience, to learn to apply the SME’s understanding and create the tag itself. One way this can be done is to record word clusters and their associated tags in a database and use a search algorithm; however, it’s important to search by context to get meaningful results. A start has been made by some MIT researchers, who have assembled an open database of word associations and topics called ConceptNet. It can be looked up by other applications such as Luminoso, which uses ConceptNet to infer topics and themes without the need for human intervention.
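
As a rough illustration of that lookup idea, the sketch below stores a few word clusters alongside the tags an SME might have assigned, then tags new text by simple word overlap. The clusters, tags and overlap threshold are invented for the example, and the context-free matching is exactly the gap that resources such as ConceptNet aim to close.

```python
# Word clusters and the SME tags recorded against them (illustrative only;
# in practice these would live in a database table).
CLUSTER_TAGS = {
    frozenset({"login", "password", "reset", "portal"}): "Account access",
    frozenset({"invoice", "charge", "duplicate", "subscription"}): "Billing dispute",
    frozenset({"app", "crash", "upload", "photo"}): "Mobile app defect",
}

def tag_document(text, min_overlap=2):
    """Return the tag whose word cluster best overlaps the document, if any."""
    words = set(text.lower().split())
    best_tag, best_score = None, 0
    for cluster, tag in CLUSTER_TAGS.items():
        score = len(cluster & words)
        if score > best_score:
            best_tag, best_score = tag, score
    # Without a minimum overlap everything would get a tag; context is still
    # missing, which is why searching by word overlap alone is not enough.
    return best_tag if best_score >= min_overlap else None

print(tag_document("customer says the app keeps crashing during photo upload"))
```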

Companies such as Clarabridge and Medallia combine many of these techniques to turn pages of text, such as TripAdvisor comments, into quantifiable terms. Social media tools such as Attensity use similar techniques to trawl through tweets and Facebook posts to provide insights into what customers are saying on social media. Their products can also determine the sentiment of a conversation by looking for keywords and the words they occur with. Bill Inmon’s Forest Rim Technology Textual ETL product uses a relational database to align the results of context, ontology, taxonomy and text-processing techniques in a form that can feed directly into a visualisation tool such as Tableau or QlikView.
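
Keyword-driven sentiment can be pictured with a toy example like the one below. Real products use far larger lexicons plus negation and context handling; the word lists here are invented purely for illustration.

```python
# Tiny, invented sentiment lexicons for illustration only.
POSITIVE = {"great", "helpful", "friendly", "resolved", "quick"}
NEGATIVE = {"rude", "broken", "waiting", "refund", "unresolved"}

def sentiment(text):
    # Score a piece of text by counting positive versus negative keywords.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Agent was friendly and the issue was resolved"))  # positive
```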

If computers could read and understand text, and tell us what the text was about, then they would be as intelligent as we are. As Stephen Hawking wrote, “Success in creating AI would be the biggest event in human history”. Reading and understanding text is an essentially human characteristic. But there are many applications where it would be very useful to read text at machine speed and be advised of the topics within it. A 70% or higher success rate still provides useful insights that would otherwise require lengthy, tedious and possibly error-prone reading and notation by individuals, where the return on investment probably would not be acceptable.

My particular pursuit of text-reading technology arose from a desire to understand customers: why they call, and whether they are satisfied with the products they have bought. Consider the possibilities as this technology is enhanced and that 70% success rate improves. In the Contact Centre we could obtain real-time customer satisfaction scores as our agents’ conversations are turned into text via speech-to-text applications and analysed to indicate today’s satisfaction ratings and hot topics. Product issues could be picked up immediately and acted upon before other customers fall into the same trap. Text analysis technology is still very young, and as it develops it has huge potential for improving Customer Experience measurement and analysis.

Peter Elliot is an experienced and professional consultant. Peter and his peers at The Taylor Reach Group assist companies and organizations to overcome business, strategic and operational challenges in their call and contact centers and customer-facing organizations.



