• WhatAmLemmy@lemmy.world · 2 months ago

    You do realise that everyone actually educated in statistical modeling knows that you have no idea what you’re talking about, right?

      • Traister101@lemmy.today · 2 months ago

        They can’t reason. LLMs, which is what all the latest and greatest models like GPT-5 or whatever still are underneath, generate output by taking every previous token (simplified) and using them to predict the most likely next token. Thanks to their training, this results in pretty good human-looking language, among other things like somewhat effective code output (thanks to sites like Stack Overflow being included in the training data).
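        To make that "predict the next token, append it, repeat" loop concrete, here's a toy sketch in Python. The TOY_MODEL table and generate function are made up purely for illustration; a real LLM computes the next-token distribution with a neural network conditioned on the entire previous sequence rather than a lookup table, but the outer generation loop is the same idea.

        ```python
        import random

        # Toy "language model": maps the previous token to a next-token
        # probability distribution. A real LLM computes this distribution
        # with a neural network conditioned on *all* previous tokens.
        TOY_MODEL = {
            "the":  {"cat": 0.6, "dog": 0.4},
            "cat":  {"sat": 0.7, "ran": 0.3},
            "dog":  {"ran": 0.8, "sat": 0.2},
            "sat":  {"down": 0.9, "<end>": 0.1},
            "ran":  {"away": 0.9, "<end>": 0.1},
            "down": {"<end>": 1.0},
            "away": {"<end>": 1.0},
        }

        def generate(prompt, max_tokens=10):
            """Autoregressive generation: repeatedly sample a likely next
            token given the context and append it, until an end token."""
            tokens = list(prompt)
            for _ in range(max_tokens):
                dist = TOY_MODEL.get(tokens[-1], {"<end>": 1.0})
                next_token = random.choices(list(dist), weights=list(dist.values()))[0]
                if next_token == "<end>":
                    break
                tokens.append(next_token)
            return tokens

        print(" ".join(generate(["the"])))  # e.g. "the cat sat down"
        ```

        There's no reasoning step anywhere in that loop, just repeated next-token prediction; a bigger model makes the predictions better, it doesn't change the mechanism.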

        Generating images works essentially the same way, though it's more easily described as reverse JPEG compression. You think I’m joking? No, really: they start out with static and then transform that static using a bunch of wave functions they came up with during training until an image falls out. LLMs and the image generation stuff are equally able to reason, which is to say not at all.
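        The "start from static and repeatedly transform it" part can be sketched the same way. This is a heavily simplified toy, not any real model's pipeline; predicted_noise here is a fake stand-in for the trained network, invented just to show the shape of the loop.

        ```python
        import random

        def predicted_noise(pixels, t):
            """Stand-in for the trained network. In a real diffusion model a
            neural net estimates the noise in the image at step t; this fake
            just nudges every pixel toward flat grey so the loop converges."""
            return [(p - 0.5) * 0.1 for p in pixels]

        def generate_image(num_pixels=16, steps=50):
            # Start from pure static: one random value per pixel.
            pixels = [random.random() for _ in range(num_pixels)]
            # Repeatedly subtract the model's noise estimate, gradually
            # turning static into whatever the learned functions steer it toward.
            for t in reversed(range(steps)):
                estimate = predicted_noise(pixels, t)
                pixels = [p - e for p, e in zip(pixels, estimate)]
            return pixels

        print([round(p, 2) for p in generate_image()])
        ```

        Same story as the text side: the whole thing is a numeric transformation applied over and over, with nothing in the loop that resembles reasoning.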