To clarify: I’m not talking about the popular conception of the Turing test as something like the Voight-Kampff test, meant to catch rogue AIs—but Turing’s original test, meant for AI designers to evaluate their own machines. In particular, I’m assuming the designers know their machine well enough to distinguish between a true inability and a feigned one (or to construct the test in a way that motivates the machine to make a genuine attempt).
Examples of human inabilities might include learning a language that violates the patterns of natural human languages, or engaging in reflexive group behavior the way flocking starlings or schooling fish do.


I would assume that, since humans sometimes pretend not to be human, that would simply be a subset of human behavior. So the reading that makes the comment make the most sense isn’t “looking for behavior atypical for humans”, but rather “looking for behavior that humans aren’t able to engage in no matter how hard they try”. What that would even be in a text-based system, though, I’m not sure. Typing impossibly fast, maybe?
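
For what it’s worth, here’s a minimal sketch of what detecting that might look like, assuming the text interface records a timestamp per keystroke. The ~40 ms floor (roughly 300 words per minute, faster than any sustained human typing) and the function name are my own assumptions for illustration, not established constants:

```python
from statistics import median

# Hypothetical heuristic: sustained inter-keystroke intervals much below
# ~40 ms are outside the human envelope. The threshold is an assumption,
# not an established constant.
HUMAN_FLOOR_SECONDS = 0.04

def looks_superhuman(keystroke_times: list[float]) -> bool:
    """Flag a typing burst whose typical keystroke interval is faster
    than a human could plausibly sustain.

    keystroke_times: monotonically increasing timestamps (in seconds),
    one per keystroke, as the interface might record them.
    """
    if len(keystroke_times) < 20:  # too little data to judge
        return False
    intervals = [b - a for a, b in zip(keystroke_times, keystroke_times[1:])]
    # Use the median so occasional pauses don't mask a superhuman burst.
    return median(intervals) < HUMAN_FLOOR_SECONDS
```

Of course, a machine trying to pass would just insert delays, which is exactly why the clarification above matters: the test only works if the designers can tell a true inability from a feigned one.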