…without informed consent.
Every now and then I see a guy barging into a topic bringing nothing more than "I asked [some AI service] and here's what it said", followed by three paragraphs of AI-generated gibberish. And then, when it's not well received, they just don't seem to understand why.
It's baffling to me. Anyone can ask an AI. A lot of people specifically don't, because they don't want to battle with its output for an hour trying to sort out where it got its information, whether it represented that information well, or whether it just hallucinated half of it.
And those guys come posting a wall of text they may or may not have read themselves, and then they have the gall to go “What’s the problem, is any of that wrong?”… Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up, and have only brought automated noise to the conversation.
Dude, the problem is you have no fucking idea if it’s wrong yourself, have nothing to back it up
That's not true. For starters, you can evaluate it on its own merits to see whether it makes logical sense: an AI can help solve a maths equation for you, and you can verify that the solution checks out without needing anything else to back it up (see the quick sketch below).
Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It's just a smarter search engine with no ads and better focus on the question asked.
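To make that first point concrete, here's a minimal sketch in Python of that kind of self-contained check; the equation and the "AI-claimed" roots are invented for illustration:

```python
# Suppose an AI claims that x = 2 and x = -5 are the roots of
# x^2 + 3x - 10 = 0. Verifying the claim needs no external source:
# substitute each value and confirm the equation holds.

def f(x):
    return x**2 + 3 * x - 10

claimed_roots = [2, -5]
for x in claimed_roots:
    assert f(x) == 0, f"{x} is not a root"

print("Both claimed roots check out on their own merits.")
```

The answer is verifiable independently of who, or what, produced it.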
Ok, I didn't need you to act as a middleman to tell me what the LLM just hallucinated; I can do that myself.
The point is that raw AI output provides absolutely no value to a conversation, and is thus noisy and rude.
When we ask questions on a public forum, we’re looking to talk to people about their own experience and research through the lens of their own being and expertise. We’re all capable of prompting an AI agent. If we wanted AI answers, we’d prompt an AI agent.
This is exactly the sort of thing that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of "I asked ChatGPT what it thinks will happen in the game this weekend, and here is what it said".
Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.
We've also learned nothing about the OP's opinion on the matter, other than maybe that they don't have one. And even more to the point, it's so intellectually lazy that it just feels like karma farming. "Ya, I have nothing to add, but I do love me them updoots."
I would rather someone posted saying they knew shit all about the sport but they were interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It's always the most milquetoast response possible, ironically adding less to the conversation than the question it's responding to.
But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.
Blindsight mentioned!
The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.
This has been my biggest problem with it. It places a cognitive load on me that wasn’t there before, having to cut through the noise.
I think sometimes when we ask people something we're not just seeking information. We're also engaging with other humans. We're connecting, signaling something, communicating something with the question, and so on. I use LLMs when I literally just want to know something, but I try to remember the value of talking to other human beings as well.
The worst is being in a technical role and having project managers and marketing people telling me how it is based on some ChatGPT output.
Like shut the fuck up please, you literally don’t know what you are talking about