Vast swathes of information – from sources including weather forecasting, mobile phone use and social media – could be used to challenge assumptions and mitigate risks
Harnessing the power of big data will give the insurance industry unparalleled insight into risk, but the sector is still discovering the best way to tap into it.
Big data refers to firms’ ability to collect and interpret the huge amounts of data generated in the modern world. IBM says that 2.5 quintillion bytes of information are created every day – so much that 90% of all data in the world was created in the past two years.
Here, GR speaks to big data experts to discover some of the opportunities the concept presents and a few practical examples of how it can give the insurance industry unique clarity about risks in emerging markets.
The main use of big data in the insurance sector is to improve catastrophe modelling software. IBM associate partner Alex Plenty says that catastrophe modelling inevitably relies heavily on assumptions, because of the relatively low frequency of actual events.
“You can bring in big data to refine, challenge and give a level of credibility to those assumptions by probing into underlying data sets,” he says.
Big data can also hone catastrophe modelling by improving how technology companies work out the probability and economic costs of a natural disaster, according to BNY Mellon head of insurance services for Europe, the Middle East and Africa Paul Traynor. “Big data helps you challenge the models that are out there,” he says.
Big data is particularly useful for (re)insurers in developing countries, where there is less risk information available than in more established insurance markets. In these places, big data can help improve underwriting accuracy and claims-handling efficiency.
One source of big data that could be valuable for the insurance sector is mobile phone use. For example, phone records and Twitter posts can be used to track how people behave after a disaster, and the insurance industry could use this information to update its catastrophe models.
“You may not have a good idea about how many people are in an area, but as these messages and calls are geocoded and are rooted in place as well as time, you can see what the impact of people-movement is in a given situation,” Plenty says.
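The idea Plenty describes – comparing geocoded, timestamped activity in an area before and after an event – can be sketched in a few lines. The records, grid cells and event time below are all illustrative assumptions, not a real telecoms feed.

```python
from collections import Counter
from datetime import datetime

# Hypothetical sample of geocoded, timestamped call/message records.
# Each record: (timestamp, area grid cell). In practice these would come
# from anonymised telecoms or social media data.
records = [
    (datetime(2014, 6, 1, 9, 0), "cell_A"),
    (datetime(2014, 6, 1, 9, 30), "cell_A"),
    (datetime(2014, 6, 1, 10, 0), "cell_B"),
    (datetime(2014, 6, 2, 9, 0), "cell_B"),   # activity after the event
    (datetime(2014, 6, 2, 9, 30), "cell_B"),
    (datetime(2014, 6, 2, 10, 0), "cell_B"),
]

EVENT_TIME = datetime(2014, 6, 1, 12, 0)  # assumed disaster timestamp

def movement_by_area(records, event_time):
    """Net change in activity per area, comparing before and after an event."""
    before = Counter(area for ts, area in records if ts < event_time)
    after = Counter(area for ts, area in records if ts >= event_time)
    areas = set(before) | set(after)
    return {area: after[area] - before[area] for area in areas}

print(movement_by_area(records, EVENT_TIME))
```

A negative value suggests people have left an area, a positive one that they have moved in – exactly the kind of signal a modeller could feed back into post-event loss estimates.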
Facebook can also be used to gather information about disasters. Plenty says the US Meteorological Society has analysed a Facebook page about the impact of a tornado strike on property.
The society homed in on information about how far objects had been thrown, then used that information to update its catastrophe models.
“We have a much better understanding of the effects than we had previously, using other, more scientific methods,” Plenty says.
Weather forecasting can also be used to help mitigate losses, Plenty says. For example, a severe hailstorm prediction could be relayed to policyholders in the path of a storm, to slash overall insured losses.
“That’s definitely something people are looking at closely, on quite a broad scale,” he says.
Big data could help clients with their risk mitigation, too. Plenty says IBM has built software that analyses firms’ disaster mitigation plans. If a disaster strikes, the software can analyse the plans and work out how effective they will be, given the level of the catastrophe.
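To make the idea concrete, a toy version of such an analysis might score a firm's mitigation measures against the severity of an event. The plan fields and scoring rule below are loose assumptions for illustration, not IBM's actual software.

```python
# Hypothetical mitigation plans: each measure a firm has in place.
plans = {
    "firm_A": {"flood_barriers": True, "backup_site": True},
    "firm_B": {"flood_barriers": False, "backup_site": False},
}

def residual_risk(plan, severity):
    """Crude score: each measure in place offsets one unit of event
    severity (assumed 1-5 scale); returns residual risk from 0 to 1."""
    measures = sum(1 for in_place in plan.values() if in_place)
    return max(0, min(5, severity - measures)) / 5

# Given a severity-4 event, the better-prepared firm scores lower.
for firm, plan in plans.items():
    print(firm, residual_risk(plan, severity=4))
```

Run across a portfolio as disasters unfold, scores like these are what would let the resilience picture of an affected area update in real time, as Plenty describes.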
“As more mitigation plans come into place, analysis of the resilience of the area affected is changing in real time,” he says.
Traynor says the big data intelligence gathered by the insurance sector could be bundled up and used to improve risks long before they are insured.
“A practical use would be to take those learnings and bring it into construction codes.”
The opportunities for big data are clear. The challenge for the (re)insurance sector will be to identify sources of big data, build the software to analyse them, and work out how to use the results to improve performance and service. It won’t be easy, but the companies that crack big data will have a definite competitive advantage.