We are currently looking for a Senior Developer to join our team in Lisbon, Portugal. You’ll be joining our small but growing computer science team, creating SaaS and on-premise solutions.
Pygmalion was a sculptor in Greek myth who brought one of his sculptures to life. The myth has become reality: modern-day Pygmalions live in the realm of data science, where they are deploying AI to bring automation and autonomy to many facets of our lives.
Whilst there are plenty of fanciful headlines and hyperbole about the latest algorithms, the reality is that to deploy a machine learning model in an operational environment it needs to be trained well on relevant data, and, if the environment changes, to continue to be trained so that it adapts.
In the world of customer interactions and customer experience, many machine learning techniques are being applied, e.g. to automate customer service, contact centers and processes, and to garner insight from the ever-growing ocean of voice-of-customer data such as surveys, complaints, reviews, call logs and social media. Building accurate predictive models is hard enough, but the signals are also changing all the time in both nature and mix: new products are launched, new ways of talking about the same things appear, new channels require new data structures (e.g. business chat and chatbots) and, perhaps most significantly, customer expectations are changing all the time, sometimes driven by experiences outside the industry in question. For example, it is no exaggeration to say that the simplification of devices by the likes of Apple and the ease of shopping on Amazon have changed expectations, and indeed created an expectation of change. In a recent study by Accenture, only 7% of brands exceeded customer expectations and 25% did not meet them.
This leaves machine learning experts in a quandary: how can businesses develop machine learning models which automate processes and contact centers not just today, but reliably over time? How can they get continually rich insight from models when the data are changing around them?
André Louçã presents a thought-provoking talk in this Energized Labs video, detailing how the paths of Warwick Analytics and machine learning have changed and developed over time.
Watch the video now to hear André explain the main technology developed by his team, and see a demonstration of PrediCX showing that there is no need for huge teams of labellers when a single person can maintain the model and still produce trustworthy output.
The report analyzes competition and the latest developments in the Predictive Maintenance market. The global Predictive Maintenance market is segmented by key manufacturers, growth rate, revenue, and ongoing research and modification. In addition, it highlights emerging opportunities for companies in the Predictive Maintenance market. Some of the leading manufacturers covered in the report are Warwick Analytics (PrediCX), SKF, PTC, Robert Bosch, SAP SE, IBM, General Electric, Rockwell Automation, Software AG and RapidMiner. The report also includes a detailed analysis of key market segments and sub-segments.
From a geographical perspective, the report studies the Predictive Maintenance market across North America, Europe, Latin America, Asia Pacific, and the Middle East and Africa. Each regional market stands to benefit from a well-established Predictive Maintenance framework and a high level of digitization in its sector.
You can get a sample copy of the report here.
A lot has been made of digital transformation and of how many businesses are using self-serve, web-based applications to engage with their customers, employees and other stakeholders, enhancing and in some cases reinventing the customer experience, often with both a stickier customer journey and lower service costs. Uber and Airbnb are often held up as the poster children, but many businesses that are not ‘digital native’ companies are emulating them in their own way.
As with so many buzzphrases, there is usually a less sexy way of saying the same thing which has been around for a long time. In the field of customer interaction, most people think of digital transformation as the growth of chatbots and social media-enabled communication. However, I would argue that the main bastion of change has to be FAQs.
Sexy or not, FAQs used to be the only way to find self-help and avoid calling a contact center. They are frequently cited as inherently flawed, as in these blogs from the UK Government and, eloquently, in this technical writer’s blog. Yet if you stop and think, a well-structured FAQ that is searchable with natural language is a critical asset: it is really the same thing as a chatbot, perhaps without the charm or manners.
Looked at more measuredly, FAQs are really part of a spectrum of communication channels (one-way and two-way) through which customers can solve problems. They sit alongside forums, social media, chatbots, chat, phone and email (see diagram below).
Surveys and reviews can also trigger interactions, depending on their content. In most organisations today all these elements are separate silos, and whilst customer experience teams are trying hard to break the silos down, few would see FAQs as on the same spectrum as chatbots and forums. People also expect FAQs to be a laundry list of requests, which is not how they want to interact. Imagine, though, if you could write your query any way you wanted into a search bar and it retrieved the correct response. Imagine also that the search was entirely consistent across all channels. Is this just a fantasy?
Machine learning for text is capable of classifying interactions in order to automate responses to natural language. However, as chatbot fails show, this is hard to get right because of the complexity and variability of human dialogue, and chatbot containment rates are still below where their proprietors would want them to be. A further complexity is that human dialogue varies immensely across channels: people don’t write emails the way they chat, which is different again from forums, from how they speak, from how they write a complaint, or even from how they fill in a survey. By way of example, a study at an airline found that the average number of topics in a chat was just over one, whereas in a call it was nearer two and in a complaint it was two and a half. People use different channels for different things, and also use different channels for the same thing in different ways.

If a company tries to classify (i.e. tag or ‘label’) each interaction, it will very easily fall into the trap of having different categories or tags for different channels, not by design but because they are hard to normalise whatever technology you’re using. This phenomenon doesn’t really have a formal name, but it is rife and disruptive. The ideal is some kind of ‘homogenization’ of the tags, so that “late shipping” is the same concept whatever the channel. This allows the guardians of the customer journey to understand what’s going wrong (and right), get a global view, and see whether customers are calling back about the same thing on a different channel because it wasn’t resolved. It also means the customer journey and knowledge base can be fixed once for each breach, in the knowledge that the fix applies across the board.
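To make the homogenization idea concrete, here is a minimal sketch in Python. The channel names, local tag strings and mapping are all invented for illustration (none of them come from any real system): each channel’s local tags are mapped onto one canonical taxonomy so that “late shipping” is the same concept whatever the channel.

```python
# Map channel-specific tags onto one canonical taxonomy so that
# "late shipping" is the same concept whatever the channel.
# All channel names and tag strings below are hypothetical.
CANONICAL = {
    ("chat", "delivery_delay"): "late_shipping",
    ("email", "late delivery"): "late_shipping",
    ("complaint", "parcel not arrived"): "late_shipping",
    ("chat", "refund_request"): "refund",
    ("email", "money back"): "refund",
}

def homogenize(channel: str, local_tag: str) -> str:
    """Return the canonical tag for a channel-specific one."""
    return CANONICAL.get((channel, local_tag.lower()), "unmapped")

interactions = [
    ("chat", "delivery_delay"),
    ("email", "late delivery"),
    ("complaint", "parcel not arrived"),
]
# All three channels collapse to the same canonical concept.
assert {homogenize(c, t) for c, t in interactions} == {"late_shipping"}
```

In practice the mapping would be learned or curated rather than hard-coded, but even this toy version shows the payoff: once every channel resolves to one taxonomy, cross-channel reporting and callback analysis become straightforward.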
Machine learning can help this harmonization process, although it is fraught with challenges, not least because the models for each tag need to be built separately for each channel: the “late shipping” tag for chat will need a different model from the “late shipping” tag for email or complaints. Data scientists know that building machine learning models is intense: The New York Times estimated that up to 80% of a data scientist’s time is spent “data wrangling”, and CrowdFlower likewise puts “data preparation” at 80%. Assumptions and errors are an inevitable part of the process, and human judgement and skill are required throughout. Moreover, 76% of data scientists view data preparation as the least enjoyable part of their work. And someone still needs to build a training set for the models, which typically involves a human labelling the various interactions into topics and a taxonomy that can drive the correct response. This is laborious in a linear fashion.
There are a number of approaches to this problem. One company addressing it in a novel way is Warwick Analytics, a spin-out from The University of Warwick. It has developed a proprietary technology called ‘Optimized Learning’ which puts a ‘human-in-the-loop’ in a very effective way: the technology classifies customer interactions, but when its certainty is low it asks a human to classify or ‘label’ the interactions that feed the most information back into training the models. It is therefore, theoretically and practically, guaranteed to require the minimum human input to maximise the performance, and hence the accuracy, of the models. The human trainer can work offline, and in certain circumstances the customer can be involved. The company has worked with many enterprises to improve chatbots, automate contact centers and complaints handling, and improve the quality of self-service and FAQs.
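Optimized Learning itself is proprietary, but the general human-in-the-loop pattern it describes — score every interaction, handle the confident ones automatically, and queue only the least confident for a human — can be sketched generically. The toy scorer, threshold and example texts below are all invented for illustration:

```python
# Hypothetical human-in-the-loop triage: score unlabelled interactions,
# auto-handle the confident ones, and queue only the least confident
# for a human labeller. The scorer and all data are illustrative.

def confidence(text: str, keyword_weights: dict) -> float:
    """Toy confidence score: fraction of words carrying a known signal."""
    words = text.lower().split()
    hits = sum(1 for w in words if w in keyword_weights)
    return hits / len(words) if words else 0.0

def triage(texts, keyword_weights, threshold=0.4):
    """Split texts into (auto-handled, ask-a-human) by confidence."""
    auto, ask_human = [], []
    for t in texts:
        (auto if confidence(t, keyword_weights) >= threshold
         else ask_human).append(t)
    return auto, ask_human

weights = {"late": 1.0, "delivery": 0.8, "refund": 0.9}
texts = [
    "late delivery again",         # high confidence -> handled automatically
    "my thing has not turned up",  # low confidence  -> routed to a human
]
auto, ask = triage(texts, weights)
```

A real system would use a trained classifier’s predicted probabilities rather than keyword counts, and would feed the human’s new labels back into retraining — that feedback loop is what keeps the labelling effort minimal.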
In conclusion, FAQs are an old-fashioned and much-discredited digital experience, yet in the new world of digital transformation and harmonization they can return to center stage thanks to some clever technology and the human-in-the-loop.
Warwick Analytics has been featured on Data Science Central discussing how ‘Data Scientists need Designer Labels Too’.
When we want to understand what people believe or perceive, we do it by analysing their communication, whether written or spoken. Let’s say we want to analyse voice-of-customer text data.
The classical way to approach this is text mining: keywords and rules drive topic analysis, e.g. using TF-IDF or some other kind of ‘vectorization’, together with sentiment analysis of the opinion terms.
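For reference, the TF-IDF scoring at the heart of that classical approach can be sketched in a few lines. The toy corpus and the particular tf/idf weighting variant (raw term frequency, natural-log idf) are assumptions for illustration; libraries offer several variants.

```python
import math

# Minimal TF-IDF sketch over a toy voice-of-customer corpus.
docs = [
    "delivery was late",
    "late refund and late delivery",
    "great service",
]

def tfidf(term: str, doc: str, corpus: list) -> float:
    """tf-idf = (term frequency in doc) * log(N / document frequency)."""
    words = doc.split()
    tf = words.count(term) / len(words)
    df = sum(1 for d in corpus if term in d.split())
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

# "late" is frequent in the second document but common across the
# corpus, so the idf factor moderates its weight; a rarer term like
# "great" scores relatively higher in its own document.
score = tfidf("late", docs[1], docs)
```

The point of the sketch is to show what the classical pipeline actually computes: a per-term weight, with everything downstream (topics, word clouds, sentiment) built on top of such weights.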
There are issues here. Firstly, what are we supposed to do with all the topics? If we build a word cloud, how useful is that? If customers use synonyms which aren’t in a dictionary, do we group these together in advance? We are essentially trying to second-guess and group terms, which might not match the intentions of the customers, or might differ between situations. Sentiment analysis is even more dissonant, and we haven’t begun to explore the technical challenges of sarcasm, context, comparators and double negatives, all of which perform very poorly in such analyses.

So how else are we meant to analyse text data, apart from painfully compiling dictionaries and constantly checking by hand? Well, say hello to the wonderful world of labels. The labels referred to here are generated by machine learning, i.e. by replicating human judgment based on a training sample of manually labelled data. The machine doesn’t need to be told keywords; it figures out common patterns, which might be a lot more than single keywords, and might include where words sit in the sentence and whether they are nouns or verbs, just as a human might.

Thankfully, then, the classical approach no longer needs to be the default, thanks to the latest technologies. Imagine a world where AI-based labelling and machine learning for text are cheap and plentiful, and where data scientists are not required to tune and drive the models.
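A minimal illustration of this label-based approach, using a tiny naive Bayes text classifier as a stand-in (the training sentences and labels are invented, and this is a generic sketch rather than any particular product’s method): the model sees only manually labelled examples and generalises from word statistics, with no hand-written keyword rules.

```python
import math
from collections import Counter, defaultdict

# Toy labelled training sample (all data invented for illustration).
train = [
    ("my parcel is late again", "late_shipping"),
    ("delivery has not arrived", "late_shipping"),
    ("i want my money back", "refund"),
    ("please refund my order", "refund"),
]

def fit(examples):
    """Count per-label word frequencies from labelled examples."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def predict(text, model):
    """Pick the label maximising log prior + smoothed log likelihood."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        n_words = sum(word_counts[label].values())
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out a label.
            score += math.log((word_counts[label][w] + 1)
                              / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = fit(train)
```

Note that `predict("parcel not arrived", model)` is classified correctly even though the phrasing does not exactly match any training sentence — the statistics of the labelled sample, not a keyword list, drive the decision.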
Warwick Analytics has been announced as one of 20 global RegTechs, start-ups offering technology solutions for financial firms’ regulatory challenges, that will join Accenture’s sixth FinTech Innovation Lab London.
During the three-month fintech accelerator programme, which runs Jan. 2 – Mar. 22, Warwick Analytics and other fintech start-ups will be partnered with executives from banks and insurers to fine-tune and develop their technologies and business models.
Accenture launched the RegTech stream in response to an increased pool of start-ups offering solutions for compliance in a year in which the financial services industry faces unprecedented levels of regulation. Among the new regulations this year are the revised Payments Services Directive (PSD2), which requires banks to make customer data available to third parties, with the customer’s consent; the General Data Protection Regulation (GDPR); and the Markets in Financial Instruments Directive (MiFID II), which went into effect last week – all before structural banking reforms, with ringfencing, are implemented in January 2019.
The 20 companies on this year’s shortlist of innovative startups come from the U.K., Israel, Croatia and South Korea, offering technology solutions for many pressing business issues, including:
“The risk of non-compliance is what keeps financial boards awake at night,” said Julian Skan, executive sponsor of Accenture’s Fintech Innovation Lab London. “As the drive for better customer experience and lower unit costs pushes data into the cloud, the price of getting things wrong has risen. It’s a pivotal moment for technology solutions to help banks and insurers not just to meet the needs of regulators, but make the most of the digital economy.
“Above all, financial firms know they need to improve their productivity, particularly in the UK economy, and innovations can be the lightbulb moment for banks and insurers to operate more effectively and deliver better results. This year’s cohorts have shown how start-ups are learning to focus on problems that can be solved and to understand what they need to learn from incumbents who are facing the challenges of meeting digital customer expectations with legacy infrastructure.”
More than 270 start-ups from 42 countries applied to this year’s program, with the shortlisted start-ups being mentored by the program’s biggest-ever cohort of financial services executives.
Partners come from over 32 financial institutions including: AIB, AXA, BAML, Citi, Credit Suisse, Direct Line, DNB, Ergo, Goldman Sachs, HSBC, Intesa Sanpaolo, JPMC, Legal and General, Lloyds Banking Group, LV=, Morgan Stanley, MS Amlin, Nationwide, Nordea, OP, Post Office Management Service, RBS, RSA, Santander, Societe Generale, Towergate, TSB, UBI, UBS, XL Catlin, Zurich.
Dan Zinkin, a managing director at JP Morgan Chase, said, “Financial firms have an important role in collaborating with start-ups to develop new technologies that can transform our industry. We must keep ahead of a rapidly changing world and keep striving to innovate for our customers and to improve our services. I am thrilled to be a part of a program dedicated to bringing financial firms and entrepreneurs together to navigate the future of the industry.”
Eight of the 20 shortlisted startups will go on to present to venture capitalists and financial-industry executives at the program’s Graduation Day on March 22.
Accenture and a dozen major banks launched the FinTech Innovation Lab London in 2012, with support from the city’s mayor and other government bodies. Since its launch, 56 start-ups have participated in the London Lab, securing more than 50 contracts with global banks and creating more than 800 jobs.
The London Lab is modelled on a similar program that Accenture co-founded in 2010 with the Partnership Fund for New York City, the US$150 million investment arm of the Partnership for New York City. In 2014, Accenture launched FinTech Innovation Labs in Asia-Pacific and Dublin. Globally, the Labs’ alumni companies have raised more than US$863 million in financing after participating in the program.
Processes within the financial services industry have become more automated in recent years: customer service, spotting fraud and error, credit scoring and processing insurance claims are all becoming more automated thanks to predictive analytics and machine learning (sometimes referred to collectively as “AI”). And there is plenty of evidence that automation is becoming more widely known and accepted. Indeed, the Financial Times has reported that the CFA Institute is updating its Chartered Financial Analyst (CFA) exam so that, starting in 2019, it will include questions about artificial intelligence (AI), big data and robo-advice, reflecting the growing impact of machine learning on the finance industry.
But whilst this all sounds encouraging, the roll-out isn’t as fast as many commentators expect or hope. According to a 2017 PwC report, only 30% of large financial institutions have invested in AI. This could be because much of the core technology hasn’t changed for decades, be it decision trees, neural networks or Bayesian statistics. The practical application is also limited by the amount of time data scientists must spend building and then curating machine learning models. Most importantly, however, the complexity of the financial services industry is increasing, driven by ever-changing consumer behaviour and expectations, new disruptive businesses, the sophistication of fraudsters, the explosion in data (including a lot of unstructured data) and, indeed, more regulation.
This complexity is slowing the progress of AI, as it requires more data scientists to train and deploy the algorithms and to cleanse and handle the data. This is particularly true when the processes and datasets involve unstructured data such as text: in a recent survey carried out by Warwick Analytics on AI in text, surveys, social media and queries/complaints were identified as the most common datasets in use. However, most analysts using text analytics (53%) wanted more insight from it, and nearly half of those (23% out of the 53%) were not satisfied with the output of the analysis they were getting.
Customer expectations are changing, with more interactions across more channels, larger and richer datasets, and an increased need for personalisation and segmentation. Criminals are also getting more sophisticated in their own technology and organisation, and financial institutions need to move more swiftly to stay ahead. In the larger institutions there are still many humans interacting with customers and making operational decisions across the front, middle and back office, work which could be done more effectively by, or with the aid of, AI (sometimes called “Robotic Process Automation” or “RPA”).
These new challenges within the industry require more sophisticated technologies and solutions, and long-established business models are being disrupted by fintech newcomers that are creating new services, disintermediating the traditional value chain and driving down costs. 88% of incumbents are increasingly concerned that they are losing revenue to rival innovators (PwC Global Fintech Report 2017).
The good news is that AI solutions such as PrediCX are appearing which help to automate data science itself, minimising the input (both time and skill) required from humans to adopt and deploy them.
Warwick Analytics has been published in Applied Marketing Analytics. The paper ‘How machine learning is developing to get more insight from complex voice-of-customer data’ introduces a new type of machine learning for voice-of-customer data and discusses its advantages, use cases and implementation compared with previous machine learning methods and text analytics.
The Big Data revolution has meant that there are nuggets of insight within customer data everywhere: customer relationship management data, reviews, complaints, enquiries, surveys, social media etc. This applies to employees too, e.g. engineer logs, staff comments and forums etc. The ability to harvest and analyse such data in an automated way to provide predictive, actionable insight is a holy grail for marketers and customer experience professionals. It can also help to provide automation in the customer journey, for example, by improving the artificial intelligence of chatbots and customising the customer journey depending on what the customers say and how they act.
However, across all organisations, ever-expanding amounts of data remain unanalysed, primarily due to their growing size and complexity. Furthermore, most of these data are unstructured or raw. Unstructured data such as text, image, audio, video, machine and sensor information all present major issues for organisations and the data scientists they employ. It is estimated that over 90 per cent of the data in existence today are unstructured.1
But how fast are these complex data growing? To put this in perspective, it is estimated that 90 per cent of all data in existence today were generated during the last five years. The digital universe is doubling in size every 12 months. Indeed, it is expected to reach 44 zettabytes (44 trillion gigabytes) in size by 2020 and will contain nearly as many digital bits as there are stars in the universe.
Buried deep within this mass of complex raw data are insights critical to innovation, decision making, customer service and revenue — that is, if they can be extracted with the right analytical tools.
At the moment, however, organisations are simply unable to access this insight. Some commentators estimate that less than 0.5 per cent of data are currently analysed.4 Meanwhile, IDC estimates that by 2020, as much as 37 per cent of the digital universe will contain information that might be valuable if analysed.
Applied Marketing Analytics is a subscription only journal but if you would like to receive a copy of the paper/article please email email@example.com.
25 deep tech scaleups have been selected to drive Europe’s digital transformation for the benefit of European citizens and the economy.
Scaleups from 9 EU countries will compete in the EIT Digital Challenge 2017 for a €100,000 prize package.
Winners will receive their awards at final pitch events in Eindhoven, Berlin, Budapest, Madrid and Trento at the end of the year.
Warwick Analytics have been selected in the Digital Infrastructure category.
The contest aims to identify the best deep digital technology (or deep tech) scaleups that promise to improve the lives of Europe’s citizens and strengthen the economy. Deep tech innovations are complex and disruptive solutions built around unique and differentiated scientific or technological advances. They fuel the digital transformation and in many cases address the main societal and environmental challenges of today and tomorrow.
The EIT Digital Challenge will select the best scaleups in five categories and award each of them with a €100,000 prize package to accelerate their international growth. The prize includes a full year of dedicated support from the EIT Digital Accelerator and a cash prize of €50,000.
A total of 136 tech companies from 20 EU countries had applied.
© 2019 Warwick Analytics. All rights reserved. Registered in England & Wales. Number 07724630. Registered address 35 Kingsland Road, London, E2 8AA. VAT 120435168.