rnercle@sh.itjust.worksEnglish
1 year
Mistral's AI is shit. It's not hallucinating, it's trying to deceive.
zutto@lemmy.fedi.zutto.fiEnglish
1 year
All models hallucinate, it's just how language models work.
Do you have sources for this claim that Mistral’s models are trying to deceive anyone?
rnercle@sh.itjust.worksEnglish
1 year
> All models hallucinate, it's just how language models work.

Yes, I know. I'm OK with hallucinations.

> Do you have sources for this claim that Mistral's models are trying to deceive anyone?
The source is me and a chat I had with "le Chat" a couple of days ago. I wanted to test its capabilities, so I asked it to invent a joke. It copy-pasted from Reddit every time! I pointed that out and asked it to stop using Reddit as a source. It kept apologizing and then giving me more Reddit jokes while claiming they were genuine "never heard before" jokes. I call that "deception", not "hallucination".
zutto@lemmy.fedi.zutto.fiEnglish
1 year
Umm, what you are describing is quite literally hallucination? Am I missing something here?
rnercle@sh.itjust.worksEnglish
1 year
I would call "seeing what's not there" a hallucination. Copying jokes from Reddit is not hallucinating.
Good or bad, inventing new jokes would need the ability to "hallucinate".