09 Jul

Warwick Analytics in the Press: Connections Magazine

Warwick Analytics has been featured as the cover story in the July/August issue of Connections Magazine – the premier magazine for the teleservices call center industry.

The article, AUTOMATION SUCCESS REQUIRES HUMAN INVOLVEMENT, looks at how human-in-the-loop machine learning technology is best introduced, what the right level of human intervention is, and at which point it should be applied.

Automation of contact centers shows promise, although not without humans in the loop to maintain its performance. There are many different flavors of human-in-the-loop AI automation. With new technology appearing, an optimized system is possible with a minimum number of humans, none of whom need any data science skills. There is now no reason why the contact center of the future needs to look like those of the present. The same applies to the customer experience.

You can read the full article here. 

06 Jul

Warwick Analytics in Global App Analytics Market Report

Warwick Analytics has been included in the latest MarketsandMarkets report, App Analytics Market – Global Forecast to 2023.

Dan Somers, CEO of Warwick Analytics, commented in the report: “The app analytics market is growing strongly but faces inhibiting factors that could slow it down. The main one is information overload, i.e., humans not knowing how to use the analytics to take decisions. The way to resolve this is to use AI on the unstructured data alongside the structured data to contextualize analytics to recommend actions.”

Executive Summary

The app analytics market is at a nascent stage and is gaining traction owing to deeper smartphone penetration and a growing number of mobile app users. The market is primarily categorized into two types, namely mobile analytics and web analytics. The mobile analytics software segment dominates the global app analytics market. Organizations integrate app analytics software into their applications to monitor users, revenue, app performance, and advertising and marketing. With these applications, organizations are focusing on increasing revenue from their business.

A user analytics solution includes user behavior analytics (app buttons clicked, app ads clicked, articles read, or screens viewed); visitor analytics (location, gender, age, and language of the user, whether the user is new or returning, and the type, operating system, and manufacturer of the device); user experience analytics (to identify popular business flows in the app, usage, and the user journey across web, mobile, and wearable devices); and heatmaps to view performance, problems, popular app screens, and usage data.

App performance analytics includes cross-platform comparison analytics (to analyze how the app functions on different platforms), carrier latency, Application Programming Interface (API) latency, uptime, crashes, exceptions, errors, and data transactions of the app. Advertising and marketing analytics includes tracking the viewership of app content and tracking the latest app installs, registrations, shares, invites, and in-app ads. Revenue analytics includes in-app payments and in-app purchases.
The global app analytics market is divided into five main regions: North America, Europe, APAC, MEA, and Latin America. North America has witnessed significant adoption and is expected to grow at a CAGR of XX% during the forecast period, mainly due to technological advancements and recent developments pertaining to the market. Companies in the region have been involved in partnerships, acquisitions, and new product developments, which are covered in detail in the company profiles section. APAC, on the other hand, is expected to grow at the highest CAGR of XX%, owing to the rapidly growing number of smartphone users and mobile app downloads in major APAC countries, such as China, India, Japan, and Australia, as well as the rapid development of IT infrastructure and the adoption of new technologies in 2016 and 2017.

The app analytics market is divided into various verticals, such as BFSI; retail; media and entertainment; logistics, travel, transportation, and hospitality; telecom and IT; and others (education, energy and utilities, and manufacturing). The BFSI vertical is expected to hold the largest market size; however, the telecom and IT vertical is projected to grow at the highest CAGR during the forecast period.

The increasing use of apps for mobile advertising, the implementation of digital strategies, deeper smartphone penetration, and the growing numbers of mobile and web apps and of smartphone and internet users are the major factors driving the growth of the market. However, privacy concerns may restrain its growth.

Major market players, such as Google Inc. (Google), Yahoo Inc. (Yahoo), Amazon.com, Inc. (Amazon), Adobe Systems Incorporated (Adobe), International Business Machines Corporation (IBM), Segment, appScatter, TUNE, Inc. (TUNE), AppDynamics, Appsee, ContentSquare, Countly, Swrve Inc. (Swrve), Amplitude, Localytics, AppsFlyer, Heap Inc., adjust Inc. (adjust), MOENGAGE, App Annie, Apptentive, Taplytics, Inc. (Taplytics), and CleverTap, have been leading in offering app analytics software and services to their commercial clients across regions.

05 Jul

Warwick Analytics in the Press: Elite Business Magazine

The latest chatbot research from Warwick Analytics has been featured in the current issue of Elite Business Magazine.

The research shows that most chatbots are disappointing their business owners in terms of functionality and output.

Elite Business is a really cool publication that provides fresh perspectives and represents disruptive solutions. It focuses on the startups and SMEs driving Britain forward: from tech unicorns to entrepreneurs transforming healthcare with AI, it covers the movers and shakers making enterprise exciting.

You can register for free to read the online issues here.

 

20 Jun

Warwick Analytics in the Press: Contact Centre World

Augmented Intelligence is more intelligent than Artificial Intelligence

Artificial Intelligence is the latest buzzphrase and has sparked debates in contact centers around the world. However, it is safe to say that true artificial intelligence has not yet been created, despite the apparent sophistication of many chatbot apps and digital assistants. Most experts define artificial intelligence as technology with the capability of thinking for itself and making decisions based on its own ideas. It’s going to be years before this becomes a reality, if it ever does (as some experts argue it may not). More philosophically, why would we want computers not to require guidance and input from time to time when the situation is new or uncertain, particularly when dealing directly with customers, as contact centers do?

What we should be using and aiming towards is Augmented Intelligence, that is, man plus machine (rather than man versus machine). The definition is inherently vague, but essentially it is where software supports human decision-making and actions, carrying out repetitive or known tasks but deferring to a human for more complex or unique ones. Unless one is familiar with the state of the art of the technology, it is easy to believe the hype. The reality, though, is that even the most sophisticated AI applications ironically require armies of data scientists to develop and maintain them. For many, the Holy Grail in Augmented Intelligence is an application that is trained and guided by a non-data scientist, in particular so that front-line personnel in the contact centre are not doing all the tasks directly but are instead guiding the bots which help them.

Read the full article here.

 

20 Jun

Warwick Analytics in the Press: Customer Service Manager

How to Turn Your Contact Centre Into an Early Warning System

Dan Somers of Warwick Analytics reveals the true ‘Cost of Deviation’ from the customer journey with www.customerservicemanager.com.

Contact centres originally existed to service customer enquiries. They were built as a cost centre. Customer service was seen as one of those things you just needed to do, and efficiency meant measuring the calls handled by agents, with little attention paid to outcomes unless you considered RFT (“Right First Time”).

The paradigm changed when it was realised that happier customers spent more and told their friends. This generates a tension between customer outcomes and cost to serve. Many are still in the transition. This second paradigm is enhanced by digital transformation where self-service and chat enable an easier experience for customers as well as efficiencies for operators.

Read the full article here.

02 May

Warwick Analytics in the Press: New research finds human validation is critical for chatbot owners

Most chatbot owners are not satisfied with the performance of their chatbots and say human validation is needed to reduce errors.

Although chatbot technology has been around for a while, many businesses do not include human validation to enhance chatbot interaction — and consumers are unhappy with chatbot performance.

In a survey conducted in late 2017 by Chatbots.org, 53 percent of 3,000 consumers who had used a chatbot for customer service in the last year found chatbots to be “not effective” or only “somewhat effective.”

US consumers were far harsher in their assessment of chatbots, with 14 percent rating them as not effective versus only 5 percent of UK consumers.

This is perhaps an indication that US consumers ask chatbots more complex questions than UK consumers.

UK-based text analytics specialists Warwick Analytics recently carried out a survey of over 500 chatbot owners and developers. Its findings showed that 59 percent of businesses that have a chatbot are unsatisfied with its performance.

Read the full article here.

23 Apr

Warwick in the Press: Data Science Central, Humans in the Loop

Warwick Analytics has been featured on the leading online publication and blog for Data Scientists, Data Science Central.

The article Humans-in-the-Loop? Which Humans? Which Loop? looks at the different humans that can be incorporated into the automation of a contact centre in order to minimise the level of human intervention required whilst maximising performance.

Automation of contact centers shows promise, although not without humans-in-the-loop somewhere in the system to maintain performance. There are many different flavors of human-in-the-loop and, with some novel technology appearing, an optimized system is possible with the minimum number of humans and without any data science skills. There is now no reason why the contact centers of the future need to look like those of the present, and the same applies to the possibilities for customer experience.

You can read the full article here.

19 Apr

The future is labels: machine learning for text with automated labels

Labels are how humans define and categorise different concepts. There’s lots of evolutionary psychology, neuroscience and linguistics behind this but, without going into that, the point is that without labels human (and other animal) intelligence would not be possible, and maybe not artificial intelligence either. Labels are the algebra of everyday life.

But what’s that got to do with AI? As it happens, quite a lot. When we want to understand what people believe or perceive, we do it by analysing their communication, whether written or spoken. Let’s say we want to analyse voice-of-customer text data.

The classical way to approach this is text mining based on keywords and rules to drive topic analysis, e.g. using TF-IDF or some other kind of ‘vectorization’, together with sentiment analysis of the opinion terms.
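
To make the classical approach concrete, here is a minimal sketch, assuming Python with a recent scikit-learn; the feedback snippets and the tiny sentiment lexicon are invented for illustration and are not from the article:

```python
# Classical text mining: TF-IDF term weighting for topics, plus a hand-built
# keyword lexicon for sentiment. Sample feedback below is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer

feedback = [
    "The delivery was late and the courier was rude",
    "Great app, checkout was quick and easy",
    "App keeps crashing when I try to pay",
]

# Topic analysis via TF-IDF: rank terms by weight per document.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(feedback)
terms = vectorizer.get_feature_names_out()
for i, doc in enumerate(feedback):
    weights = tfidf[i].toarray().ravel()
    top = sorted(zip(terms, weights), key=lambda t: -t[1])[:3]
    print(doc, "->", [term for term, w in top if w > 0])

# Sentiment via a hand-compiled keyword lexicon -- exactly the kind of
# dictionary that has to be built and maintained manually.
positive, negative = {"great", "quick", "easy"}, {"late", "rude", "crashing"}
for doc in feedback:
    words = set(doc.lower().split())
    print(doc, "-> sentiment score:", len(words & positive) - len(words & negative))
```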

There are issues here. Firstly, what are we supposed to do with all the topics? If we build a word cloud, how useful is that? If customers use synonyms which aren’t in a dictionary, do we group these together in advance? We are essentially trying to second-guess and group terms, which might not match the intentions of the customers, or might differ between situations. Sentiment is even more dissonant, and we haven’t begun to explore the technical challenges of sarcasm, context, comparators and double negatives, all of which perform very poorly in such analyses.

So how else are we meant to analyse text data, apart from painfully compiling dictionaries and constantly checking manually? Well, say hello to the wonderful world of labels. The labels being referred to here are generated from machine learning, i.e. by replicating human judgment based on a training sample of manually labelled data. The machine doesn’t need to be told keywords; it figures out common patterns which might be a lot more than single keywords, and might include where they are in the sentence and whether they are nouns or verbs, just as a human might.
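
As a rough illustration of label-driven learning, here is a minimal sketch, again assuming scikit-learn, of a classifier that replicates human judgment from a small manually labelled sample; the texts and labels are invented for illustration:

```python
# Label-driven learning: the classifier learns patterns from human-assigned
# labels, with no keyword lists supplied. Texts and labels are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "My parcel never arrived",
    "Still waiting for my refund",
    "The courier left the package at the wrong address",
    "Money has not been returned to my account",
]
labels = ["delivery", "refund", "delivery", "refund"]  # human-assigned labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model infers which patterns matter; nobody told it the word "refund".
print(model.predict(["Where is my package?", "I want my money back"]))
```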

So, if labels are so great, why isn’t everyone using them? The short answer is that it’s expensive. It’s expensive in terms of time because someone with the requisite domain knowledge needs to generate the labels, and it’s even more expensive because a data scientist needs to use those labels to try to generate a signal using various techniques without resorting to ‘data torture’ (i.e. the phenomenon of eventually getting out of a dataset what you wanted, even if it is not scientifically justifiable). The problem and approach need to be carefully defined, the data cleansed, parsed and filtered to suit the approach, and frankly there is a great deal of trial and error. Even if a predictive model is generated, it needs to be tuned, tested for stability and then checked and curated carefully over time in case the data and performance change (and they always do in anything interesting!). This explains why labelling from a machine learning point of view is precious and only used sparingly for the highest-value use cases.

Thankfully this no longer needs to be the case thanks to the latest technologies. Imagine a world where AI-based labelling is cheap and plentiful, where data scientists are not required to tune and drive models.

Welcome to supercharged labelling. The basic premise is that the labelling machine judges its own uncertainty and invites user intervention to manually label the items it needs in order to maximise its performance for the minimum human intervention. The human intervening just needs domain knowledge and doesn’t need to be a data scientist, and the labelling required is ‘just enough’ to achieve the requisite business performance. No data artistry needed. Also, because it invites human intervention when there’s uncertainty, it can spot new topics, i.e. give ‘early warning’ of new signals, and keep the models maintained to their requisite performance. If there are differences in labelling, labels can be merged or moved around in hierarchies. If the performance at the granular level isn’t high enough then it will choose the coarser level, just as a human might.
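
A minimal sketch of that loop, using plain uncertainty sampling in scikit-learn rather than Warwick Analytics’ own technology; the texts, labels and simulated expert are invented for illustration:

```python
# Human-in-the-loop labelling loop: the model scores its own uncertainty on
# unlabelled text and asks a domain expert to label only the items it is
# least sure about. Illustrative only -- not the PrediCX implementation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labelled_texts = ["My parcel never arrived", "Still waiting for my refund"]
labelled_tags = ["delivery", "refund"]

unlabelled = [
    "Package went to the wrong address",
    "No sign of my money yet",
    "The app logged me out mid-payment",   # a new, previously unseen topic
]
expert = {                                  # stand-in for the human expert
    "Package went to the wrong address": "delivery",
    "No sign of my money yet": "refund",
    "The app logged me out mid-payment": "app error",
}

for _ in range(2):  # each pass = one round of human intervention
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(labelled_texts, labelled_tags)

    # Uncertainty = 1 minus the model's top class probability.
    probs = model.predict_proba(unlabelled)
    most_uncertain = int(np.argmax(1 - probs.max(axis=1)))

    item = unlabelled.pop(most_uncertain)
    print("Please label:", item)
    labelled_texts.append(item)
    labelled_tags.append(expert[item])      # the human supplies the label
```

The same idea scales to hundreds of categories: the human only sees the items the model is least sure about, which is also where new topics tend to surface first.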

We call this technology Optimized Learning. It has been used to address use cases such as complaint handling automation in airlines, generating the topics that cause churn for financial services, helping chatbots retrieve the relevant information for a query, measuring the brand attributes of CPG brands and their competitors, and recommending corrective action in machinery and vehicle maintenance.

To spell out the potential savings, suppose a business wants to automate its complaint handling by building a predictive model of categories for queries (i.e. labels). There might be hundreds of categories and, as a data scientist, you might estimate an initial labelling set which could take many man-weeks, with the possibility of not actually finding a signal. Then there’s feature engineering, in itself an iterative activity with no guarantees. If all this takes 6 weeks of labelling, then with the latest technology, PrediCX, it might typically be 2% of that, i.e. just over half a day, to achieve the same performance. Any time spent in feature engineering is also massively reduced as you rapidly test and tune with more certainty and a much quicker feedback loop. Furthermore, the time spent curating models disappears from the data science team and is instead replaced by the minimal amount of labelling when new or ambiguous signals appear. This might be a day or so per year rather than a heavy overhead. You can quickly see that a model that might cost many hundreds of thousands per year of human input might literally only cost a few thousand instead, and be more flexible and powerful in terms of early warning and adaptability.
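
The back-of-envelope arithmetic behind that estimate, assuming a five-day working week (the working week itself is an assumption, not a figure from the example):

```python
# 6 weeks of manual labelling versus "typically 2% of that".
manual_labelling_days = 6 * 5                     # 30 working days
optimized_days = manual_labelling_days * 0.02     # 2% of the manual effort
print(optimized_days)                             # 0.6 -> just over half a day
```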

So now you can see that labels are indeed very powerful in a machine learning context. They move text analytics to the next level, and there are now technologies which lower the time, cost and technical skill needed to deploy them. What will you label?

 

12 Apr

Warwick Analytics research finds human-in-the-loop validation critical for chatbot owners

Our independent survey of over 500 chatbot owners and developers reveals the majority are not satisfied with the performance of their chatbots and say human validation is key.

New research from text analytics specialists Warwick Analytics shows that 59% of businesses that have a chatbot are unsatisfied with its performance.

551 professionals involved in the development or management of chatbots were surveyed by Warwick Analytics.

When discussing the technical challenges respondents faced trying to improve their own chatbots, the most common issues were improving containment rates (90%), reducing errors (83%), and developing the responses for the chatbot (79%).

More significantly, an overwhelming 93% believed that human validation and/or curation was important to maintain and improve the performance of their chatbots.

Dan Somers, CEO of Warwick Analytics says: “Achieving the right level of human-in-the-loop input is key for chatbot owners and managers. Human validation is required for accuracy and improvement but if too much is required then a business may as well have a human service desk. It’s all about finding the right technology that minimises the human intervention required but still increases accuracy. Our software PrediCX does exactly that.”

In addition, 21% of respondents who were yet to deploy a chatbot said it was because the performance of chatbots wasn’t acceptable in their opinion.

Warwick Analytics provides machine learning technology to help maintain and improve chatbots using PrediCX, a human-in-the-loop platform accessible via an API.

 

Download the full report for free here.

