To clarify: I’m not talking about the popular conception of the Turing test as something like the Voight-Kampff test, meant to catch rogue AIs—but Turing’s original test, meant for AI designers to evaluate their own machines. In particular, I’m assuming the designers know their machine well enough to distinguish between a true inability and a feigned one (or to construct the test in a way that motivates the machine to make a genuine attempt).
And examples of human inabilities might be learning a language that violates the patterns of all natural human languages, or engaging in reflexive group behavior the way starling flocks or schools of fish do.


That doesn’t make any logical sense because even very young children are adept at pretending not to be human…
Like, I know what you’re trying to say, I’m just struggling to understand how you think it’s sensical
I would assume that, since humans sometimes pretend not to be human, that would simply be a subset of human behavior, so what would make the comment make the most sense wouldn’t be “looking for behavior atypical for humans” but rather “looking for behavior that humans aren’t able to engage in no matter how hard they try.” What that would even be in a text-based system, though, I’m not sure. Typing impossibly fast, maybe?
I was thinking of the example of syntax: the ability of LLMs to produce syntactically well-formed sentences is taken as evidence that they’re producing sentences the same way humans do, but LLMs can also (with training) produce sentences in artificial languages whose syntax is totally unnatural to humans.
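Roughly the kind of thing I mean (the transform and the marker token here are invented purely for illustration, not taken from any actual experiment): a “language” whose grammar depends on global reversal and word counting, patterns no natural human language uses, but which a sequence model can be trained to reproduce.

```python
# Toy "impossible language" generator (rules invented for illustration):
# every sentence is globally reversed, and a marker token is inserted after
# every third word -- a counting-based rule no natural human language uses.
def to_impossible_language(sentence: str) -> str:
    words = sentence.split()[::-1]   # reverse the whole sentence
    out = []
    for i, word in enumerate(words, start=1):
        out.append(word)
        if i % 3 == 0:               # marker placement depends on counting
            out.append("<MARK>")
    return " ".join(out)

print(to_impossible_language("the cat sat on the mat"))
# -> mat the on <MARK> sat cat the <MARK>
```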
Or take Ptolemy’s epicycles: their ability to imitate periodic motion that violates Kepler’s laws indicates that they don’t actually capture accurate physics.
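(To spell the epicycle point out a bit, using the standard Fourier reading of epicycles: a stack of uniformly rotating circles is just a truncated Fourier sum, so with enough circles it can trace essentially any closed periodic path, Keplerian or not. That’s exactly why a good fit tells you nothing about whether the physics is right.)

```latex
% A deferent plus epicycles: the planet's position as a sum of uniformly
% rotating circles in the complex plane. This is a truncated Fourier series,
% so with enough terms it can approximate essentially any closed periodic
% path, whether or not that path obeys Kepler's laws.
\[
  z(t) \;=\; \sum_{k=1}^{N} c_k \, e^{\,i \omega_k t},
  \qquad c_k \in \mathbb{C}, \quad \omega_k \in \mathbb{R}.
\]
```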
I see what you’re saying but I think the problem is that you would need to test an AI while it’s unaware of being tested, or use a novel trick that it’s unaware of, to try and catch it producing non-human output.
If it’s aware that it’s being tested, then presumably it will try to pass the test and try to limit itself to human cognition to do so.
I.e., it’s possible that an AI’s intelligence includes enough human-like intelligence to completely mimic a human and pass a Turing test, but not enough to know to keep within those boundaries; but it’s also possible that it knows both enough to mimic us and enough to keep to our bounds, in which case the test needs to be done in secret.
In the original Turing test, the black box isn’t the machine; it’s the human. The test is to see whether a (known) machine is an accurate model of an unknown system.
While the tester is blind as to which is which, the experimenter knows the construction of the machine and can presumably tell if it’s artificially constraining itself. When I say “the inability to act otherwise”, I’m assuming the experimenter can distinguish a true inability from an induced one (even if the tester can’t).
In the case of intelligences and neural networks, that is not so straightforward. The humans and machines behind the curtain have to be motivated to try to replicate a human, or the test would fail, whether because a human control is being unhelpful or because the machine isn’t bothering to try to replicate a human.
In a Turing test, yes. What I’m suggesting is to change the motivation, to see if the machine fails like a human even when motivated not to.
deleted by creator
Literally the opposite of a Turing test… which it’s pretty clear you don’t understand to begin with…
And has nothing to do with your post
Why are people upvoting that gibberish? Do they just not understand it and upvote blindly?
The problem with the Turing test (like Ptolemy’s epicycles) is that the real unknown isn’t the machine being tested, but the system it’s supposed to be a model of.
A machine whose behavior is a superset of the target system isn’t a true model of the system.
You’ve edited this comment at least 3 times since I replied, each time with more random shit that doesn’t make any sense. You just keep thumbing through a thesaurus and replacing words with bigger words you clearly don’t understand.
This is probably why your posts/comments don’t make sense. Stop trying to sound intelligent and focus on communicating your point. But I don’t have the patience to ever try to explain anything to you again.
Best of luck.