To clarify: I’m not talking about the popular conception of the Turing test as something like the Voight-Kampff test, meant to catch rogue AIs—but Turing’s original test, meant for AI designers to evaluate their own machines. In particular, I’m assuming the designers know their machine well enough to distinguish between a true inability and a feigned one (or to construct the test in a way that motivates the machine to make a genuine attempt).

And examples of human inabilities might be learning a language that violates the patterns of natural human languages, or engaging in reflexive group behavior the way starling flocks or fish schools do.

  • givesomefucks@lemmy.world · 2 days ago

    Literally the opposite of a Turing test… which it’s pretty clear you don’t understand to begin with…

    And it has nothing to do with your post.

    Why are people upvoting that gibberish? Do they just not understand it and upvote blindly?

    • AbouBenAdhem@lemmy.world (OP) · 2 days ago

      The problem with the Turing test (as with Ptolemy’s epicycles) is that the real unknown isn’t the machine being tested, but the system it’s supposed to be a model of.

      A machine whose behavior is a superset of the target system’s isn’t a true model of that system: it can pass every test drawn from the target’s repertoire while still doing things the target cannot.
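
      To make that concrete, here’s a minimal sketch in Python, treating each system’s repertoire as a set of observable behaviors (the labels are hypothetical placeholders, not real test items):

      ```python
      # Illustrative sketch: model each system's repertoire as a set
      # of observable behaviors. Labels are hypothetical placeholders.

      human = {"natural-language learning", "small talk", "arithmetic"}

      # A machine that does everything the human does, plus something
      # humans can't (e.g. learning an "impossible" language):
      machine = human | {"impossible-language learning"}

      print(machine >= human)  # True: passes any test drawn from human behavior
      print(machine == human)  # False: extra behavior means it isn't a true model
      ```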