While most of us are just getting our first taste of artificial intelligence in insurance, the hype around it seems to be stabilizing. In my work with the Agents Council for Technology, I’m hearing more people in the insurance industry talk about how AI is being used today and what agencies should consider when adopting it.
AI has the potential to impact our society in ways that are both exciting and scary. But it’s important to remember the adage that we often overstate how quickly a change will affect us and understate how much change will ultimately occur over time.
With that backdrop, it’s safe to assume that AI will have material impacts on society over time, but that the immediate impacts may be smaller than the early hype suggests. Even so, there are things we in the insurance industry can and should be doing today to keep learning, enhance our capabilities and minimize our risk when it comes to AI.
Here are some best practices and considerations agents should lean into as they explore AI.
Keep an open mind
Like it or not, artificial intelligence is changing the insurance industry. I recommend that everyone act with intention in learning what they can about AI. That can come from listening to podcasts, reading blogs, watching webinars, getting involved with industry associations, talking to your peers and having open conversations with your carriers and technology partners.
Try it for yourself
One of the best ways to learn about AI is by trying out an AI tool for yourself. There are several very good free AI tools available. Sign up for one of them and begin experiencing how it works, what it can do for you, and what limitations you encounter.
Don’t use private data in free, public AI tools
Exercise extreme discretion with the information and data you include in your prompts to any AI tool. While private, purpose-built models can be safer, never enter personally identifiable information into a free, public AI tool such as ChatGPT.
Understand what AI is and is not
While none of us knows how AI will evolve, today’s AI is “just” incredibly sophisticated statistical modeling and predictive algorithms. As impressive as AI can be, the output of an AI tool is highly dependent on the quality of the data feeding it. AI tools are built by humans to solve and execute specific tasks, and their output reflects the purpose for which they were built.
At this time, AI cannot experience emotion and largely cannot differentiate between right and wrong. AI tools are prone to delivering factually inaccurate information, completely fabricated examples, and even harmful bias.