This is arguably a golden age for leveraging technology in the insurance industry, with artificial intelligence use cases driving much of the current focus. AI is revolutionizing every step of the insurance process, from marketing to underwriting to claims to fraud detection, potentially transforming the way we work.

We’re still in the early stages of the AI boom and only beginning to grasp how it will be woven into the fabric of our lives. Use this FAQ/guide of essential AI terms and applications to gain a better understanding of the technologies you’ll likely run across in your AI journey.  

First off, what is AI? According to IBM, “Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.” 

There are many types of AI, but for the purposes of this article, we’ll focus on two of the more common ones. While so-called traditional AI has been in development since the 1950s, much of the current focus is on generative AI, which is becoming widely available.

What’s the difference between traditional and generative AI?  

Traditional AI is the more structured rule follower of the two, while generative AI is its creative cousin.  

Traditional AI shines at problem solving and well-defined tasks: following decision trees, analyzing data to reach conclusions, or even editing out the awkward pauses, “ahs,” and “ums” in that video you just recorded for a client proposal.

From a simple prompt, generative AI can create original content such as text, music, pictures, and video. Common text-based generative AI models include ChatGPT, Claude from Anthropic, and Microsoft Copilot. Frequently used image generators include DALL-E 2, Midjourney, and Vivid AI. There are many others out there, including a wide range of AI audio and video generators.

Today, many insurance agencies are using generative AI to create first drafts of marketing content, images for blog posts, and initial cuts at client email responses. Other AI tools summarize key aspects of policy documents in plain, non-insurance language to share with clients.

How does Natural Language Processing (NLP) fit into this? 

Natural Language Processing is a branch of artificial intelligence that enables computers to understand, interpret, and generate human language. NLP helps our new AI overlords sound more human when responding to typed text or spoken queries.

You may come across NLP in customer support systems. For instance, if a customer asks about the details of their insurance policy, an NLP-enabled AI can comprehend the question and respond with relevant policy information. 

Where does AI get its information from?  

Various AI models can analyze vast amounts of data, including text, images, audio, video, and programming code. They may pull it from the internet, from databases, or from that private customer information you’re about to feed into ChatGPT. NOTE: Don’t do that.

That data allows AI to provide detailed responses as text, voice, images, audio, or video.

Here’s an important caveat: AI may seem smart, but always check what it creates.  

AI can get it wrong?  

Yes. AI can produce inaccurate or misleading answers. The data used to train it might be incorrect or biased. Or it might make assumptions when asked a question lacking context and jump to a wrong conclusion (how human!).

AI aims to please and will try to provide an answer, even when that answer turns out to be wrong. Researchers call these mistakes hallucinations.

Hallucinations in AI are instances where responses are not grounded in actual data or reality. What appears credible may be fabricated or outright misinformation.

Real-world examples of AI hallucinations include lawyers who submitted fake legal cases invented by ChatGPT. And image-generating tools have famously struggled to get the number of fingers on a hand right.