  • How do you address the hallucination problem for content generation?

    We all know that GPT & friends easily hallucinate and say untrue things, even when asked not to. For example, I was in a park in London and asked GPT-4 to tell me about things in that park, and it invented a pond in the centre of it and made up (quite funny, actually) stories about people playing in it in the old days. Say you want to build some sort of content generation tool leveraging AI: how would you address these issues? Hoping to create a useful discussion!

    Replies

    David Deisadze
    Tons of articles on this! I've seen explicit instructions work, and fine-tuning helps as well.
    Gabriele Mazzola
    @david_deisadze For sure there is a lot of material, I agree. But has anyone really solved this issue? I mean, how can you be sure you can trust your model's output?
    David Deisadze
    @gabriele_mazzola Interesting, I would love to learn the answer to that question as well! I haven't dived deep into it; I solved it partially for my app with explicit instructions.
    Gabriele Mazzola
    @david_deisadze Can you give some examples of the "explicit instructions" you used in your prompts? That could be very valuable!
    David Deisadze
    @gabriele_mazzola Totally! It was something like "If you cannot answer, or there is no evidence of X, respond with an empty string." I spot-tested it and it worked, but I did nothing formal to validate it.
    Gabriele Mazzola
    @david_deisadze Ah ok, very similar to my attempts as well. Unfortunately, in some cases it just kept lying to me. Perhaps it depends on the complexity of the task at hand?
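
For reference, here is a minimal sketch of the "explicit instructions" guardrail David describes, combined with grounding the model in supplied facts, using the OpenAI Python client. The model name, prompt wording, helper function, and example facts are illustrative assumptions, not anything the commenters actually shipped.

```python
# Sketch: constrain the model to supplied context and give it an explicit way out
# (respond with an empty string) when the context does not contain the answer.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You write short descriptions of places. Use ONLY the facts given in the "
    "context. Do not invent features, history, or anecdotes. If the context "
    "does not contain the answer, respond with an empty string."
)

def describe_place(context: str, question: str) -> str:
    """Ask the model a question, constrained to the supplied context."""
    response = client.chat.completions.create(
        model="gpt-4",   # assumed model; any chat model can be substituted
        temperature=0,   # lower temperature reduces, but does not eliminate, invention
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Hypothetical facts pulled from a trusted source (a database, a wiki page, etc.).
    facts = "Example Park, London. Opened 1900. Features: a rose garden and a playground."
    answer = describe_place(facts, "Is there a pond in the centre of the park?")
    print(answer or "[model declined to answer]")
```

Even with a prompt like this the model can still drift, as noted above; the empty-string convention mainly gives you something cheap to check for before publishing the output. A stricter setup would also verify that everything in the answer actually appears in the supplied context.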