
The issue with LLM hallucinations: knowledge as a missing fundamental

Will we ever solve hallucinations in LLMs?

(I posted this on LinkedIn quite a while ago — see the original post. But the question keeps returning. And I should play timpani more often.)

I’m really curious: can we ever ‘solve’ hallucinations if we don’t change the underlying concept of operations of an LLM? I don’t think so, to be honest. In most of my test runs with LLMs, I found that as soon as a model runs short of data, it starts to confabulate stuff rather than saying it doesn’t know.

And I guess it can’t tell you ‘I don’t know’, because that would require an absolute TRUE/FALSE state, and an LLM can only approximate that point. Yes, we can train it to say ‘I don’t know’ whenever the probability falls below a certain level, but that’s just not the same as having it as an inherent capability, is it?
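To make that thresholding idea concrete, here’s a minimal sketch in Python. It isn’t any particular API or my actual setup: the function name, the threshold value, and the use of a geometric mean of token probabilities as a confidence score are all assumptions for illustration. The point is that the ‘I don’t know’ is bolted on from the outside, rather than being something the model knows about itself.

```python
import math

CONFIDENCE_THRESHOLD = 0.5  # arbitrary cut-off; would need tuning per model and task


def answer_or_abstain(answer_text: str, token_logprobs: list[float]) -> str:
    """Return the model's answer, or 'I don't know' when its own confidence is low.

    token_logprobs: per-token log-probabilities of the generated answer,
    as exposed by many LLM APIs. The geometric mean of the token
    probabilities is used here as a crude sequence-level confidence score.
    """
    avg_logprob = sum(token_logprobs) / max(len(token_logprobs), 1)
    confidence = math.exp(avg_logprob)
    if confidence < CONFIDENCE_THRESHOLD:
        return "I don't know."
    return answer_text


# Example: a fairly confident answer passes, a shaky one is replaced.
print(answer_or_abstain("Paris", [-0.05, -0.10]))          # -> "Paris"
print(answer_or_abstain("Atlantis", [-1.8, -2.3, -2.0]))   # -> "I don't know."
```

Which is exactly the distinction I’m pointing at: a post-hoc filter on probabilities, not knowledge of not knowing.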

Bit of a weird comparison, but for me, chasing truth in an LLM feels like having to tune a pair of timpani (kettle drums). Because of their parabolic shape, they don’t have a well-articulated fundamental pitch, so you have to tune them by their (inharmonic) overtones. Which only gives you a tuning by approximation: you suspect the fundamental is there, but you can never quite catch it. Knowledge as a missing fundamental.

I know, but hey, it’s Sunday :-) Got an opportunity to play timpani a while ago during an Easter concert. Nice bit of Mozart…pom-POM :)
