Read the first in our series of preview articles on Monte Carlo Rendez-Vous hot topics

Big data and the insurance market – everybody is talking about it but nobody is doing it, writes Xuber business development director Richard Clark. 

A lot of column inches have been dedicated to the subject, clever people have scratched their heads about it, and yet we seem no closer to finding a way through.

Last year saw big data make it onto the reinsurance agenda in a big way. It was a major theme at Monte Carlo in 2014, which produced countless articles on how reinsurance companies struggle to make the best use of the data they are accumulating – and how heavy data crunching should give reinsurers an advantage over rivals and boost their bottom line.

The problem is that most people don’t really know what big data is or what impact it will have on the way businesses operate and how we consume information. Additionally, there is the thorny issue of what it might mean for the future of our sector – Google, for example, has already started collating its huge and unstructured wealth of information and looking at how different insurance companies price the same risk. The likelihood is that the search engine could start to underwrite insurance itself in the future.

Let’s start at the beginning. Big data is a term for data sets so large or unstructured that traditional data processing applications cannot cope. This presents businesses with challenges that include analysis, capture, curation, search, sharing, storage, transfer, visualisation and information privacy.

Analysis of these data sets can unearth new correlations and hidden buying patterns, and help spot business trends. If you just think about the sheer amount of data that is collected every day through the electronic devices and computers we use, the question becomes how to harvest this information and how to apply it to doing business.

THE DRAWBACKS

Whilst thrilling from a business perspective, the use of big data has made the public wary, with many fearing that the swathes of personal information held about them could be used unethically.

The insurance sector has already been warned to use big data responsibly, with the Financial Conduct Authority (FCA) stating in March that it was to look at the way insurers use data to understand customer behaviours and trends.  

The watchdog’s annual report highlighted the risks to the entire financial sector as the pace of digital innovation quickens. FCA Chief Executive Martin Wheatley said the regulator would continue to monitor the growing use of digital technology, including a market study to investigate how insurance firms use web analytics, behavioural tools and social media analysis to gain insight into increasingly large volumes of data. 

When it comes to business-to-business, the path for big data is less clear still. In the global wholesale insurance and reinsurance markets, however, we are beginning to make inroads.

For example, the science of weather patterns can be used to help with lines such as crop insurance.

The potential applications for big data in insurance are boundless. For reinsurance, you need look no further than the London market. If the industry could harness all the data circulating in the market and collect it in a shared store, the analysis of historical information across different classes of business and geographies would surely lead to better underwriting and risk management.

But despite the obvious benefits, we seem to stop short of leveraging this wealth of information. Why? Is it just too daunting a task? Too big a chunk to bite off? 

Perhaps there is another way. We already have semi-big data at work in the market. Maybe we could have a ‘big data lite’? Target some of the low-hanging fruit to start with.

The underwriting and claims stages have always been particularly data rich, with insurers using modelling techniques, statistics and historical data for fraud prevention, marketing, claims management and pricing risk.  

There are no technological barriers to harvesting this data. What is stopping the market from moving forward is the need for a guarantee that the data is wholly anonymised, that proprietary information is not divulged and that competitors cannot discover what a firm knows about its clients.
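
As an illustration only, the sketch below shows one way a contribution to a shared market store could be pseudonymised before it leaves a firm. The field names, the keyed hashing scheme and the banding of premium figures are assumptions made for the example; they do not describe any existing market facility.

```python
# A minimal sketch of pseudonymising a record before contributing it to a
# shared market store. Field names and the salting scheme are illustrative
# assumptions, not a description of any existing facility.
import hashlib
import hmac

# A secret known only to the contributing firm, so tokens cannot be reversed
# or matched by other participants.
FIRM_SECRET = b"replace-with-a-private-key"

def pseudonymise(value: str) -> str:
    """Replace an identifying value with a keyed, one-way token."""
    return hmac.new(FIRM_SECRET, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_for_sharing(policy_record: dict) -> dict:
    """Keep only aggregate-friendly fields; token-ise or coarsen anything proprietary."""
    return {
        "client_token": pseudonymise(policy_record["client_name"]),
        "class_of_business": policy_record["class_of_business"],
        "geography": policy_record["geography"],
        "year": policy_record["inception_year"],
        # Round exact monetary figures into 10,000-unit bands.
        "gross_premium_band": round(policy_record["gross_premium"], -4),
        "claims_incurred_band": round(policy_record["claims_incurred"], -4),
    }

record = {
    "client_name": "Example Shipping Co",
    "class_of_business": "marine hull",
    "geography": "North Atlantic",
    "inception_year": 2014,
    "gross_premium": 1_234_567.0,
    "claims_incurred": 456_789.0,
}
print(prepare_for_sharing(record))
```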

To get past this, we need to look at the problem in a new way and kill some of the market’s ‘sacred cows’. We have to find a different way of looking at who the competition actually is. When we understand this, we may find the impetus to take those first steps.

The London market is facing competition from other shores – namely emerging markets, alternative capital sources and the entrance of non-traditional competitors. In the face of this competition, and in order to survive these choppy waters, London collectively needs to collaborate to differentiate itself from the rest of the world, whilst still retaining the ability to compete internally.

And the industry will continue to face competition from companies that have started to dabble in the ‘internet of things’ – a new technology phenomenon in which vast amounts of data are collected by network-connected sensor devices embedded in different types of industrial equipment.

All is not lost, though. What insurance and reinsurance companies can do is prepare. They can make sure they are ‘big data ready’ – ensuring that their systems and applications are capable of harvesting and storing data for analysis. They can also look at how their IT systems can be adapted to accommodate wider data sets, so that when the London market begins to share its data, they have the systems in place to handle it alongside their own information.

It will be this shared, anonymised data that keeps the London market ahead of the game, and starting with ‘big data lite’ might just be the way forward.

 

INSURANCE BIG DATA TIMELINE

It is safe to say that insurers and reinsurers spearheaded the concept of big data. Cuthbert Heath, an innovative underwriter at Lloyd’s of London in the 1870s, was the first Lloyd’s underwriter to gather detailed probability-of-loss statistics; he would pay for historic documents and maps detailing windstorms and earthquakes, leaving a legacy for the modern-day Lloyd’s underwriter.

Since then, it has become commonplace for actuaries to crunch data and for insurance and reinsurance firms to employ analysts and software providers to evaluate risk and even to try to predict future events.

It is the developments using social media that are the new phenomenon. Take the fascinating social media activity that occurred during and directly after the earthquake and tsunami in Japan in 2011. Twitter, Facebook and other social networking sites became an invaluable tool for millions of people caught up in the aftermath, and a lifeline for many when mobile phone networks and some telephone landlines collapsed in the hours following the magnitude-8.9 earthquake.

Since then, researchers at the University of Tokyo have said they can detect when earthquakes are occurring with 96% accuracy by filtering Twitter messages for certain keywords and their frequency. Meanwhile, Google, Twitter and other online companies have pledged to work more closely together in future disasters. In September last year, Google helped organise a big data workshop to analyse information from the 2011 earthquake: Google provided data on search trends, Twitter supplied a week of messages from after the disaster, and Honda supplied data such as car location information from its online navigation system.
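
As a rough illustration of the general idea – not the researchers’ actual method – the sketch below flags a possible event when quake-related keywords spike within a short time window. The keyword list, window length and alert threshold are assumptions chosen for the example.

```python
# A minimal sketch of keyword-and-frequency filtering over a message stream.
# Keywords, window and threshold are illustrative assumptions only.
from collections import deque

KEYWORDS = {"earthquake", "quake", "shaking", "tremor"}
WINDOW_SECONDS = 60      # how far back to count matching messages
ALERT_THRESHOLD = 50     # matches per window that suggest a real event

recent_matches = deque()  # timestamps of recent keyword hits

def process_message(timestamp: float, text: str) -> bool:
    """Return True when the rate of keyword matches suggests an earthquake."""
    if any(word in text.lower() for word in KEYWORDS):
        recent_matches.append(timestamp)
    # Drop matches that have fallen out of the time window.
    while recent_matches and timestamp - recent_matches[0] > WINDOW_SECONDS:
        recent_matches.popleft()
    return len(recent_matches) >= ALERT_THRESHOLD

# Example: feed messages as they arrive (timestamps in seconds).
print(process_message(0.0, "Did anyone else feel that shaking?"))
```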