Ask any insurance company, and they’ll tell you the industry is built on the back of data. For years, insurers have judiciously collected information on people – where they live, what they’ve claimed for – and the more information they have, the better they are at pricing insurance products. Actuarial science is legitimate and has worked for decades. Until now.
There’s a new player in the game that has the potential to blow historical data sets out of the water. Enter machine learning. ML techniques are opening up whole new ways for companies to use data to measure risk and price insurance products more accurately. For an industry that has built its business on historical data sets (information about the past), it’s understandable that its first inclination would be to apply machine learning to the data it already has. And that’s not entirely wrong. It will deliver a marginal improvement over existing analytics.
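As a rough illustration of that marginal gain, here is a minimal sketch – on synthetic data, with made-up column names rather than any real insurer’s schema – that models claim frequency first with a traditional Poisson GLM and then with a gradient-boosted model on exactly the same historical features:

```python
# Sketch: ML applied to the kind of historical data insurers already hold
# (policyholder attributes + past claims). All data here is synthetic and
# the features are illustrative assumptions, not a real rating schema.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 10_000
policies = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),
    "vehicle_age": rng.integers(0, 20, n),
    "region_density": rng.uniform(0, 1, n),  # proxy for urban vs. rural
})
# Synthetic claim frequency loosely tied to the features above
lam = np.exp(-2.0 + 0.02 * (40 - policies["driver_age"]).clip(0)
             + 0.8 * policies["region_density"])
policies["claim_count"] = rng.poisson(lam)

X = policies[["driver_age", "vehicle_age", "region_density"]]
y = policies["claim_count"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Traditional actuarial-style baseline: a Poisson GLM on claim frequency
glm = PoissonRegressor(max_iter=300).fit(X_train, y_train)

# ML alternative trained on exactly the same historical data
gbm = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

print("GLM MAE:", mean_absolute_error(y_test, glm.predict(X_test)))
print("GBM MAE:", mean_absolute_error(y_test, gbm.predict(X_test)))
# Any gain over the GLM on the same features is typically modest –
# the "marginal improvement" referred to above.
```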
Beyond applying machine learning to historical data sets, insurance companies need to break the mold in the way they think about data. Think of Amazon. The eCommerce giant has acquired hoards of information on customers’ spending habits. It knows where its customers live and how much they earn. The same goes for Google. By examining your search patterns, the behemoth platform can pinpoint not only exactly where you are in the world but also precisely which mechanical toothbrush you’re currently coveting. Those are some deep insights, and these companies are infinitely better equipped for artificial intelligence and machine learning than insurance companies are.
Today’s rapidly changing digital landscape has produced a wealth of data like never before, and it’s the application of these seemingly uncorrelated data sets that has the potential to generate superior pricing capabilities. There’s a reason why 95 percent of insurers have hastily invested in a life raft of machine learning tools: they’re on the right track. But what they should be doing is using these new tools both to procure and to analyze data that hasn’t yet been obtained.
At some point, one of these large tech companies, with their vast stores of data, will step easily into the space and start competing with insurance companies. Google or Amazon could turn around and leverage their data overnight; it’s just a question of whether they have the desire. The threat is real, and it’s up to insurance companies to evolve with the changing times, turning a potential threat into a tangible opportunity.
To combat this threat, insurers must change the way they think about data. It’s not all about historical data amassed over decades; it’s about proactively locating alternative forms of real-time data and applying new machine learning techniques to them. If insurers don’t, someone else will – and they’ll have a whole slew of new competitors on their hands.
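What that shift might look like in practice, sketched with a hypothetical telematics feed joined onto internal policy history before a pricing model is fit – every field, figure, and data source below is an illustrative assumption, not a real feed:

```python
# Sketch: blending an alternative real-time signal with historical policy
# data before pricing. The telematics feed and its fields are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Internal, historical view of each policy (the traditional data set)
historical = pd.DataFrame({
    "policy_id": [1, 2, 3, 4],
    "driver_age": [23, 41, 56, 34],
    "prior_claims": [1, 0, 2, 0],
})

# Hypothetical real-time/alternative feed, e.g. telematics aggregates
telematics = pd.DataFrame({
    "policy_id": [1, 2, 3, 4],
    "hard_brakes_per_100km": [4.2, 0.8, 1.5, 2.9],
    "night_driving_share": [0.35, 0.05, 0.10, 0.22],
})

# Join the two worlds into a single feature set for the pricing model
features = historical.merge(telematics, on="policy_id")
X = features.drop(columns=["policy_id"])

# Illustrative target: observed loss cost per policy (made up for the sketch)
y = [820.0, 110.0, 540.0, 390.0]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(model.predict(X.head(1)))  # risk-based price signal for policy 1
```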
Admittedly, every insurance company thinks it has the best data and pricing algorithms; both are a large part of its IP. As such, insurers have a history of being reluctant to work with external third parties, because doing so exposes one or both of those precious commodities to an outsider. They’re paranoid, and rightly so. Yet for insurance companies to succeed and stay current in a market rife with new technological threats, they’re going to have to be open to working with new partners.
Moving forward, it would behoove insurance companies to treat data sourcing as a partnership model rather than an in-house collection function. There’s a real need to create redundant sources of data flowing into their pricing environment, rather than relying solely on the proprietary data that has traditionally been their competitive advantage. Therein lies the key to staying competitive in the years ahead.
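One way to picture that redundancy, as a sketch with hypothetical partner feeds rather than real APIs: an enrichment signal is requested from an ordered list of providers, and pricing falls back gracefully when the preferred partner is unavailable.

```python
# Sketch of redundant data sourcing: pull the same enrichment signal from
# several partner feeds and fall back gracefully, so pricing never depends
# on a single pipeline. Provider names and response shapes are hypothetical.
from typing import Callable, Optional

def from_partner_a(policy_id: str) -> Optional[dict]:
    # In practice this would call a partner API; here it simulates an outage.
    return None

def from_partner_b(policy_id: str) -> Optional[dict]:
    return {"policy_id": policy_id, "flood_risk_score": 0.12}

def from_internal_archive(policy_id: str) -> Optional[dict]:
    return {"policy_id": policy_id, "flood_risk_score": 0.15}  # stale but safe

PROVIDERS: list[Callable[[str], Optional[dict]]] = [
    from_partner_a,         # preferred external partner
    from_partner_b,         # redundant external partner
    from_internal_archive,  # last-resort in-house data
]

def enrich(policy_id: str) -> dict:
    """Return the first successful result from the ordered provider list."""
    for provider in PROVIDERS:
        result = provider(policy_id)
        if result is not None:
            return result
    raise RuntimeError(f"no data source available for {policy_id}")

print(enrich("POL-001"))
```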