Q&A: What makes AI models hallucinate?

Published on May 7th, 2024
Category: News
    This article is a part of Ask Kat, a Q&A series in our AI newsletter, Deeper Learning. You ask questions about AI, then we get the answers and explain them in simple, non-jargony terms.
    What makes AI models hallucinate?
    This question was submitted by @josej30.
Did you know that if you ask “Who is King Renoit?” on OpenAI’s playground, it will tell you that King Renoit reigned from 1514 to 1544? The only problem is that King Renoit is totally made up.
So why would AI generate a fake response? Before I get to that, here’s one thing I find interesting about humans and AI. We tend to personify AI and its chatbots when we don’t have much information about how they work. And of course we would. It’s called “artificial intelligence” after all. But the latter half of that phrase is kind of a misnomer. In reality, while neural networks are built to mimic the way neurons in your brain work together, they don’t reason quite like we do. Each one is “essentially a complex prediction machine.”
    Here’s a fun way to get a glimpse of how those predictions play out — a project called “Look into the machine’s mind.” A team of data scientists gave the ChatGPT API texts like “Intelligence is…” and charted the many varied responses the model would come up with. “Given a text, a Large Language Model assigns a probability for the word (token) to come,” the team explains, “and it just repeats this process until a completion is…well, complete.”
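To make that loop concrete, here’s a toy Python sketch (mine, not the project’s actual code): a tiny hand-written probability table stands in for a trained model, and the completion is built by sampling one token at a time until it’s, well, complete.

```python
import random

# Toy "language model": a hand-made table of next-token probabilities
# for a handful of contexts. A real LLM learns probabilities like these
# over a vocabulary of tens of thousands of tokens.
NEXT_TOKEN_PROBS = {
    "Intelligence is": {"the": 0.5, "not": 0.3, "a": 0.2},
    "Intelligence is the": {"ability": 0.7, "capacity": 0.3},
    "Intelligence is the ability": {"to": 1.0},
    "Intelligence is the ability to": {"learn.": 0.6, "adapt.": 0.4},
}

def complete(prompt: str, max_tokens: int = 10) -> str:
    """Sample one next token at a time until the completion ends."""
    text = prompt
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(text)
        if probs is None:
            break  # no more predictions for this context
        tokens, weights = zip(*probs.items())
        token = random.choices(tokens, weights=weights)[0]
        text = text + " " + token
        if token.endswith("."):
            break  # the completion is complete
    return text

print(complete("Intelligence is"))  # e.g. "Intelligence is the ability to learn."
```

Run it a few times and you’ll get different completions, because each step is a weighted coin flip over the candidate next tokens.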
So back to what happens when AI hallucinates. The model is simply trying to predict the best combination of tokens/words it can (like in that visualization above): the combination it thinks will satisfy you, even if, say, it was not trained on enough data. In other words, even if the model doesn’t have the right answer, it might still give you an answer.
Okay, but how do the models predict each of their words — or, as @josej30 put it, “What’s the math behind it?” And even if LLMs don’t currently “think” the way we do or have all the information, context, and nuance we have, can we get them there?
    We’ll dive into those questions more in upcoming Deeper Learning editions. At a high level, the explanation for how LLMs make their predictions starts with parameters, which you can think of as settings that guide the LLM’s learning. But things get pretty complex from there as you start to learn about how parameters and “embeddings” predict tokens.
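To give a rough (and heavily simplified) sense of how parameters and embeddings fit together, here’s a sketch with made-up numbers. Each token gets an embedding vector, the context vectors get pooled together (real models use attention layers for this step), and a second parameter matrix turns the result into a probability for every token in the vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy vocabulary and two parameter matrices. In a real LLM, training
# adjusts billions of numbers like these so the predictions come out right.
vocab = ["intelligence", "is", "the", "ability", "to", "learn"]
d_model = 8  # size of each embedding vector

embeddings = rng.normal(size=(len(vocab), d_model))       # one vector per token
output_weights = rng.normal(size=(d_model, len(vocab)))   # maps a vector to vocab scores

def next_token_probs(context_tokens):
    """Embed the context, pool it, and convert it into next-token probabilities."""
    ids = [vocab.index(t) for t in context_tokens]
    context_vector = embeddings[ids].mean(axis=0)   # crude stand-in for attention layers
    scores = context_vector @ output_weights        # one score per vocabulary token
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                          # softmax: scores -> probabilities

for token, p in sorted(zip(vocab, next_token_probs(["intelligence", "is"])),
                       key=lambda pair: -pair[1]):
    print(f"{token}: {p:.2f}")
```

With random parameters the output is noise; training is the process of nudging those numbers until the high-probability tokens are actually sensible ones.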
As for tackling hallucinations and, let’s call it, thinking better, there are several methods being implemented and refined now, including a recently popular technique called RAG, or Retrieval-Augmented Generation. With RAG, an LLM’s answer is grounded in knowledge fetched from external sources before it responds. Come back next week to go deeper!
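Here’s a rough sketch of the RAG idea, with the details deliberately simplified: real systems typically do the retrieval step with embeddings and a vector database, and `call_llm` below is just a placeholder for whatever chat API you’d actually use.

```python
# A minimal sketch of Retrieval-Augmented Generation (RAG), with a
# made-up three-line "knowledge base" standing in for real documents.
KNOWLEDGE_BASE = [
    "King Renoit is a made-up name with no entry in historical records.",
    "Hallucination is when a model confidently states something false.",
    "Retrieval-Augmented Generation grounds answers in fetched documents.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages sharing the most words with the question
    (word overlap stands in for embedding-based search)."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real chat-completion API call here.
    return f"[model response to a {len(prompt)}-character prompt]"

def answer_with_rag(question: str) -> str:
    """Fetch relevant passages, then ask the model to answer from them only."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using only the context below. If the answer isn't there, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer_with_rag("Who is King Renoit?"))
```

The key move is that the model is handed relevant text at answer time instead of being left to guess from whatever it memorized during training.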
    Got a question for us? Add it here.
    Get only the highlights of AI in your inbox weekly - the stuff you can chat about with your friends and maybe even share with your grandma. Subscribe to Deeper Learning.
    Comments (15)
    Lily Collins
The exploration of AI hallucinations in this article is truly fascinating, shedding light on the intricacies of how AI models generate responses and the efforts to improve their accuracy and reliability.
    bettyking
LLMs aim to predict the best combination of words (tokens) based on their training. They generate responses that they believe will satisfy the user's query, even if they lack sufficient training data on the specific topic.
    marry099
Thank you for sharing; this gave me a deeper understanding of artificial intelligence.
    Yammer Smiths
This was very insightful on how LLM/chatbot responses are generated, and it reminded me to always cross-check the results. Thanks!
    Lambda Winner
@yammer_smiths now this is making me worried, what if chatgpt is inaccurate at calculating its betting predictions? ive always relied on it for my bets and stocks purchases.
    Yammer Smiths
    @lambda_winner Cross-checking ChatGPT's results is a safe practice, and you should try it out as well. It can always have some inaccuracies. Bets and stocks are both statistics, however, and I believe ChatGPT is adept at processing such data. Still, it's best to double check.
    Lambda Winner
    @yammer_smiths but i use chatgpt to make my life easier, then i have to check again? whats the point then? i'll just rely on its results for my darts world championship betting predictions, thanks.
    Yammer Smiths
@lambda_winner I won't try to change your mindset, just know that you're doing it at your own risk.
    Steven
Thanks for the insight! Your article from Ask Kat tackles the intriguing question of why AI models sometimes hallucinate and generate inaccurate responses. It sheds light on how neural networks function as prediction machines and explores the complexities of how they predict words. It also hints at future discussions on improving AI accuracy, mentioning techniques like Retrieval-Augmented Generation. It's fascinating to delve into the inner workings of AI in a simple, understandable way. I'm looking forward to more insights in upcoming editions of Deeper Learning. Thanks!