What if the hallucinations never go away?

    Mohammed Kheezar Hayat
    4 replies
    That Large Language Models (LLMs) are prone to making things up is now common, even mainstream, knowledge, and a fair source of concern. Considering that, for now at least, LLMs are the dominant kind of AI in use, it might be useful to think through a scenario where this tendency is never satisfactorily resolved and we have to solve it 'outside the AI box'. If it does come to that, what would the solution look like? Human checking? New kinds of user interfaces?
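
    To make 'human checking' concrete, here is a minimal sketch of one shape it could take: every factual claim in a model's draft is routed past a human reviewer before the text is shown. All of the names here (Draft, extract_claims, human_review) are hypothetical, and the claim 'extraction' is a naive sentence split standing in for a real component.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Draft:
        text: str
        claims: list[str]

    def extract_claims(text: str) -> list[str]:
        # Hypothetical stand-in: a naive sentence split; a real system
        # would need proper claim extraction.
        return [s.strip() for s in text.split(".") if s.strip()]

    def human_review(draft: Draft) -> str:
        # The gate: a human approves each claim before the text ships.
        approved = []
        for claim in draft.claims:
            if input(f"Keep this claim? [y/n] {claim!r} ").lower().startswith("y"):
                approved.append(claim)
        return ". ".join(approved) + ("." if approved else "")

    output = "Paris is the capital of France. The Eiffel Tower was built in 1850."
    print(human_review(Draft(text=output, claims=extract_claims(output))))
    ```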

    Replies

    Konrad S.
    We'd have to look 'outside the LLM box', not 'outside the AI box'; there are very different ways to build an AI. I still think symbolic AI will be the future, but progress there may of course be much slower.
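
    The appeal, roughly, is that a symbolic system only answers from facts it was explicitly given, so it can say 'unknown' instead of inventing something. A toy Python sketch of that contrast, with a made-up fact base:

    ```python
    # Toy knowledge base: answers come only from declared facts, so an
    # unanswerable query yields "unknown" rather than a fabrication.
    facts = {
        ("capital_of", "france"): "paris",
        ("capital_of", "japan"): "tokyo",
    }

    def query(relation: str, entity: str) -> str:
        return facts.get((relation, entity), "unknown")

    print(query("capital_of", "france"))   # paris
    print(query("capital_of", "wakanda"))  # unknown -- no hallucination
    ```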
    Mohammed Kheezar Hayat
    @konrad_sx Yup. I am keeping a close eye on the symbolic AI landscape. Might be time to dust off my old Prolog textbook.
    Gurkaran Singh
    If hallucinations linger in LLMs, we might need a human reality check feature or an AI therapist hotline on speed dial! How about AI with a touch of human intervention for some sanity seasoning?
    Ethan Young
    Hallucinations can be scary and disruptive, but you're not alone. There are resources available to help. Talking to a doctor or mental health professional is the best first step.