• REDACTED@infosec.pub · edited · 2 months ago

    Fancy autocorrect? Bro lives in 2022

    EDIT: For the ignorant: AI has been in rapid development for the past three years. For those who are unaware, it can now also generate images and videos, so calling it autocorrect is factually wrong. There are still people here who base their knowledge on 2022-era AIs and constantly say ignorant stuff like “they can’t reason”, while geniuses out there are doing stuff like this: https://xcancel.com/ErnestRyu/status/1958408925864403068

    EDIT2: Seems like every AI thread gets flooded with people showing their age, who keep talking about outdated definitions without knowing which systems fit the definition of reasoning or how that term is used in the modern age.

    I already linked this below, but for those who want to educate themselves on more up-to-date terminology and the different reasoning systems used in the IT and tech world, take a deeper look at this: https://en.m.wikipedia.org/wiki/Reasoning_system

    I even loved how one argument went: “if you change the underlying names, the model will fail more often, meaning it can’t reason”. No, if a model still manages to show some success rate, then the reasoning system literally works; otherwise it would fail 100% of the time… Use your heads when arguing.

    As another example of language reasoning and pattern recognition (which is also a reasoning system): https://i.imgur.com/SrLX6cW.jpeg answer: https://i.imgur.com/0sTtwzM.jpeg

    Note that the term is used differently outside information technology, but we’re quite clearly talking about tech and IT here, not neuroscience, which would be a quite different kind of reasoning. These systems used in AI are, by modern definitions, reasoning systems, literally meaning they reason. Think of it like artificial intelligence versus intelligence.

    I will no longer answer comments below as pretty much everyone starts talking about non-IT reasoning or historical applications.

    • WhatAmLemmy@lemmy.world · 2 months ago

      You do realise that everyone actually educated in statistical modeling knows that you have no idea what you’re talking about, right?

        • Traister101@lemmy.today · 2 months ago

          They can’t reason. LLMs, which is what all the latest and greatest models like GPT-5 still are, generate output by taking every previous token (simplified) and using them to predict the most likely next token. Thanks to their training this results in pretty good human-looking language, among other things like somewhat effective code output (thanks to sites like Stack Overflow being included in the training data).
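The generation loop described above can be sketched in a few lines. This is a toy illustration only: the bigram table and its probabilities are made up, and a real LLM conditions on the whole preceding context with a neural network rather than a lookup table, but the autoregressive shape of the loop is the same.

```python
# Hypothetical bigram "model": previous token -> {next token: probability}.
# A real LLM computes these probabilities with a neural network over the
# entire context; here a fixed table stands in for that.
BIGRAM_PROBS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "a":   {"dog": 0.7, "cat": 0.3},
    "cat": {"sat": 0.8, "<end>": 0.2},
    "dog": {"sat": 0.6, "<end>": 0.4},
    "sat": {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> list[str]:
    tokens = ["<s>"]
    for _ in range(max_tokens):
        candidates = BIGRAM_PROBS[tokens[-1]]
        # Greedy decoding: always take the single most likely next token.
        next_token = max(candidates, key=candidates.get)
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens[1:]  # drop the start-of-sequence marker
```

With this table, greedy decoding deterministically produces "the cat sat"; real systems usually sample from the distribution instead of always taking the maximum, which is why the same prompt can give different outputs.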

          Generating images works essentially the same way, but is more easily described as reverse JPEG compression. You think I’m joking? No, really: they start out with static and then transform that static using a bunch of wave functions they came up with during training. LLMs and the image generation stuff are equally able to reason, which is to say not at all.
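The start-from-static process the comment describes can be sketched as a toy one-dimensional denoiser. Everything here is a stand-in: the fixed `target` value replaces what a trained network would predict at each step, and real diffusion models work on whole image tensors over a learned noise schedule, but the iterative noise-to-signal refinement is the same idea.

```python
import random

def toy_denoise(steps: int = 50, seed: int = 0) -> float:
    """Start from pure noise and iteratively nudge it toward a target.

    In a real diffusion model, a neural network predicts the correction
    at every step; here a fixed rule stands in for that prediction.
    """
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)  # the "static": a pure-noise starting point
    target = 3.0             # stand-in for the model's learned prediction
    for _ in range(steps):
        x = x + 0.2 * (target - x)  # move a fraction of the way toward it
    return x
```

After 50 steps the remaining distance to the target has shrunk by a factor of 0.8 per step, so the output sits essentially on the target regardless of the random starting noise, which mirrors how diffusion turns arbitrary static into a coherent result.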