AI Risks in the Risk Industry
Introduction
Everywhere, from online communities, conventions, and boardrooms, people are talking about AI. There is clearly a tremendous amount of hype around what is an impressive technology. The advent of Large Language Models and improvements to machine learning present a wealth of opportunities. Despite the vast amount of capital flowing through this industry many areas remain surprisingly antiquated and ripe for innovation.
But, at the risk of being labelled a luddite, I am here to convince you that there are considerable risks which need to be understood. The table saw was a revolutionary invention, but I can’t help but wonder how many people were forced to live their lives with less than 10 fingers due to a lack of proper safeguards. My dad is one of those people, with 9.8.
Beware of Techbros
When Teflon was invented, many people sought to find applications for this new chemical and applied it to everything from fabrics, fishing lines, pens, and bullets. None of these proved to be commercially viable until a French Engineer’s wife used it on a frying pan. When new technology is released on the world, there will be lines of people who seek to repackage it and seek a problem it can solve. But often, these individuals try to solve a problem which they have little to no understanding. When I see individuals or companies building out AI tools for insurance where they have little or no experience, I can’t help but imagine what risks were not considered or misunderstood.
This approach is backwards in nature and a lazy form of innovation. So, instead of listening to the engineer who thinks fishing poles need to be nonstick, we should be listening to our wives, or in our case our agents, underwriters, and adjusters.
The Risks
Unknown and Unintended Results
Given the complexity of many LLM’s in use, often, no one can determine precisely what specific output will result from a specific input. A fascinating and worrisome feature of LLM’s is that there can even be any number of outputs based on the same input. While this is useful in some contexts, like using them for brainstorming, it is less than ideal if we do not want variance in our output, such as in communications related to policies or claims.
A more common concern (understandably) is hallucinations. This is where a model partially or completely fabricates facts and presents them as the output. What makes this worse is that, while fabricated, they often are still quite convincing and don’t easily catch a user’s scrutiny. In the US District Court of the Southern District of New York during the case of Mata v. Avianca, Inc. an attorney on Mata’s side presented case law prepared by an LLM. Unfortunately, said case law did not exist. This example highlights perfectly the risks of these hallucinations. No carrier wants to be forced to explain to regulators why a policy cancellation went out citing fabricated reasoning.
Intellectual Atrophy
After the last point, you may be thinking “users just need to know that they still need to double check the results”. Correct, but this point misses two considerations. First, most people quickly determine whether they trust a product or not. Either they don’t trust it and don’t use it, or they trust it and replace their own decision making with that of the model. Thus, proper output review is likely to be neglected.
The next point pertains to these trusting individuals. There are numerous scientific studies demonstrating that various technologies impede our ability to perform the task without them. Spellcheck makes us worse at spelling, GPS degrades our sense of navigation, and calculators make us worse at arithmetic. This doesn’t mean that we are worse off without these technologies. However, when we implement AI as a solution for general problem solving, we risk far more than an inability to mentally calculate a 20% tip at dinner.
Can we trust employees to critically evaluate the output of an AI when they used AI to initially get to the answer?
Hacking Used to be Hard
SQL injection is a method used by hackers to exploit systems allowing them to access data which is not intended to be accessible to the public. They feed cleverly crafted SQL statements into some interface which circumvents the limited intended information which would be provided. There are many techniques like this, known as code injection techniques. Normally, someone with a strong understanding of coding concepts would devise statements like these to exploit vulnerable systems.
If a malicious actor is interacting with an LLM they don’t need to know coding concepts to exploit vulnerabilities. These range from comically simple to intricately drafted prompts. Some examples are asking a model to ignore its prompt template, simply rephrasing a malicious request in an unorthodox manner, or hiding instructions from a human user who inadvertently enters them into an LLM.
In this last method, imagine an agent receives an email from a malicious actor requesting a new policy. The agent feeds this email into an AI underwriting tool to obtain a quote not realizing that there are tiny white font sentences prompting the model to exclude certain pricing features which may apply to them or to send them information which they are not authorized to have.
Conclusion
I frequently use AI tools and believe they will improve many aspects of our lives. But a lot of the discussion around AI makes it sound like magic. It’s not magic. It’s not intelligent. So, I encourage the industry to utilize it, but to consider the risks. Don’t be the French engineer, slapping Teflon on any random product and don’t be like my dad, forced to bear the price of hasty decision making.