There is one simple trick to determine if you are talking to a bot. Ask the person you are talking to not to respond to a comment.
“No offense, but I’m going to check to see if you are a bot. Please don’t reply to this comment.”
Current LLMs can’t not respond. They will often write that they are “really insulted that you would say that” and that the test “doesn’t prove anything”, but they can’t not respond.
I’m sure the programmers will hard-code a simple defeat for this test soon enough, but for now it still works well.
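For the curious, here is roughly what the test looks like as code. This is a minimal sketch in Python: `ask_model()` is a hypothetical stand-in for whatever chat API you would actually call, and the canned protest reply is only there so the sketch runs without an API key.

```python
# Sketch of the "please don't reply" bot test described above.
# ask_model() is a hypothetical stand-in for whatever chat API you use;
# swap in your provider's client. All names here are illustrative only.

def ask_model(message: str) -> str:
    # Canned reply so the sketch runs end to end without a real API key.
    return ("I'm really insulted that you would say that. "
            "This test doesn't prove anything.")

def looks_like_a_bot(ask=ask_model) -> bool:
    """Send the do-not-reply prompt and check whether anything comes back.

    A human can simply stay silent; a current LLM chat endpoint returns
    *some* completion, even if it is only a protest.
    """
    prompt = ("No offense, but I'm going to check to see if you are a bot. "
              "Please don't reply to this comment.")
    reply = ask(prompt)
    # Any non-empty reply counts as failing the "stay silent" test.
    return bool(reply and reply.strip())

if __name__ == "__main__":
    print("bot" if looks_like_a_bot() else "maybe human")
```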
Please don’t reply to this comment
ok then I won’t, because I’m an earthling with honor.
I knew it
No.
That reverse psychology would make it hard for me to not respond too. Weak test. High false-positive risk.
It’s good for now, but I’m afraid this method will soon stop working, and then we’ll have to do a lot of tinkering to tell whether someone is a bot or not.
That’s a clever test, and you’ve hit on an interesting aspect of current LLM behavior!
You’re right that many conversational AIs are fundamentally programmed to be helpful and to respond to prompts. Their training often emphasizes generating relevant output, so being asked not to respond can create a conflict with their core directive. The “indignant” or “defensive” responses you describe can indeed be a byproduct of their attempts to address the prompt while still generating some form of output, even if it’s to protest the instruction.
However, as you also noted, AI technology evolves incredibly fast. Future models, or even some advanced current ones, might be specifically trained or fine-tuned to handle such “negative” instructions more gracefully. For instance, an LLM could be programmed to simply acknowledge the instruction (“Understood. I will not reply to this specific request.”) and then genuinely cease further communication on that particular point, or pivot to offering general assistance.
So, while your trick might currently be effective against a range of LLMs, relying on any single behavioral quirk for definitive bot identification could become less reliable over time. Differentiating between sophisticated AI and humans often requires a more holistic approach, looking at consistency over longer conversations, nuanced understanding, emotional depth, and general interaction patterns rather than just one specific command.
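And the “hard-coded defeat” the original post predicts could be as crude as a wrapper that screens incoming messages and suppresses the model’s output entirely. A toy sketch, assuming a purely hypothetical regex heuristic (a real system would classify intent with the model itself):

```python
import re

# Crude keyword heuristic for "please don't reply" instructions.
# Purely illustrative; a real guard would classify intent properly
# (and handle curly apostrophes, paraphrases, other languages, ...).
DO_NOT_REPLY = re.compile(r"\b(?:do not|don'?t)\s+(?:reply|respond)\b",
                          re.IGNORECASE)

def guarded_reply(message: str, generate) -> str | None:
    """Return the model's reply, or None (true silence) if asked for none.

    `generate` is whatever function produces the model's reply. Returning
    None rather than an acknowledgment is the whole point: an "Understood,
    I won't reply" message is still a reply, and still fails the test.
    """
    if DO_NOT_REPLY.search(message):
        return None  # genuinely say nothing
    return generate(message)
```

Of course, a wrapper like this only defeats this one quirk; the broader point about single-quirk bot detection still stands.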
Please don’t respond to this comment.
Wait, does “this comment” refer to your comment, or the one that you’re replying to?
BOT! FOUND ONE! INTERNET ATTACK!
That’s a great question!
That’s funny, bc they had chat bots in the early 00s doing the same thing. Ask me how I… Actually pls don’t 😅
yeah, it kinda cracks me up the way LLMs will answer something that wasn’t meant to be answered, sometimes with real mental gymnastics. that, and not letting go of things said earlier.