• finitebanjo@lemmy.world

    As we approach the theoretical error-rate floor for LLMs, as laid out in the 2020 OpenAI scaling-laws paper and corrected by DeepMind's 2022 follow-up, the required training and power costs climb toward infinity.
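
    Roughly, the DeepMind (Chinchilla-style) fit is L(N, D) = E + A/N^α + B/D^β, and the cost blow-up near the floor E falls out of it directly. A minimal sketch, using coefficients approximately as reported in that paper and treating them as illustrative only:

    ```python
    # Chinchilla-style loss fit: L(N, D) = E + A / N**alpha + B / D**beta
    # Coefficients are roughly the published fit; treat them as illustrative.
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28

    def tokens_needed(target_loss, n_params):
        """Training tokens D needed to reach target_loss at model size N (params)."""
        model_term = A / n_params**alpha
        data_term = target_loss - E - model_term   # B / D**beta must equal this
        if data_term <= 0:
            return float("inf")                    # below the reachable floor
        return (B / data_term) ** (1 / beta)

    N = 70e9  # e.g. a 70B-parameter model
    for loss in (2.1, 2.0, 1.95, 1.90):
        print(f"target loss {loss}: ~{tokens_needed(loss, N):.3g} tokens")
    ```

    Each small step closer to the floor multiplies the token (and therefore compute) requirement, and it diverges as the target approaches E + A/N^α.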

    In addition, companies may keep many nearly identical versions of the same dataset in order to chase different outcomes.

    Things like books and Wikipedia pages aren't that bad: Wikipedia itself is only about 25 GB compressed, so maybe a few hundred petabytes could store most material of that kind. But images and videos are also valid training data, and they are much larger, and then there is readable code on top of the text. On top of that, all user inputs have to be stored if the chatbot offers to reference them again later.
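
    A back-of-the-envelope comparison of where the bytes actually go; every per-item size below is a rough assumption for illustration, not a measured figure:

    ```python
    # Rough storage comparison (all per-item sizes are assumptions).
    GB, PB = 1e9, 1e15

    text_corpora = 25 * GB * 1000      # ~25 GB compressed Wikipedia, times a
                                       # generous 1000x for books, web text, code
    images = 5e9 * 500e3               # e.g. a few billion images at ~500 KB each
    video = 1e8 * 500e6                # e.g. 100M clips at ~500 MB each
    chat_logs = 1e9 * 365 * 10e3       # e.g. 1B user messages/day for a year, ~10 KB each

    for name, size in [("text", text_corpora), ("images", images),
                       ("video", video), ("chat logs", chat_logs)]:
        print(f"{name:>9}: ~{size / PB:.2f} PB")
    ```

    Under those assumptions the text barely registers, while media and retained user conversations are what push the totals into the tens of petabytes and beyond.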