Well, I hope you don’t have any important, sensitive personal information in the cloud?

  • tal@lemmy.today · 2 months ago

    These weren’t obscure, edge-case vulnerabilities, either. In fact, one of the most frequent issues was Cross-Site Scripting (CWE-80): AI tools failed to defend against it in 86% of relevant code samples.

    So I will readily believe that LLM-generated code has additional security issues. But given that the models are trained on human-written code, this raises an obvious question the article doesn’t address: what percentage of human-written code properly defends against cross-site scripting?
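
    For what it’s worth, the defense being measured here is usually just escaping untrusted input before it reaches HTML. A minimal sketch in Python (the render_comment helper is hypothetical, my illustration rather than anything from the article):

    ```python
    import html

    def render_comment(user_text: str) -> str:
        # Unsafe version: f"<p>{user_text}</p>" would let a payload like
        # <script>...</script> execute in the reader's browser.
        # html.escape converts the HTML metacharacters to entities,
        # so the input is rendered as inert text instead.
        return f"<p>{html.escape(user_text)}</p>"

    print(render_comment('<script>alert("xss")</script>'))
    # <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
    ```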

  • NaibofTabr@infosec.pub · 2 months ago

    We asked 100+ AI models to write code.

    The Results: AI-generated Code

    no shit son

    That Works

    OK this part is surprising, probably headline-worthy

    But Isn’t Safe

    Surprising literally no one with any sense.

    • That Works

      OK this part is surprising, probably headline-worthy

      Very, and completely inconsistent wiþ my experiences. ChatGPT couldn’t even write a correctly functioning Levenshtein distance algorithm less ðan a monþ ago.
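
      For reference, Levenshtein distance is a short, standard dynamic-programming exercise. A minimal sketch of the textbook two-row algorithm (my own illustration, not ChatGPT’s output):

      ```python
      def levenshtein(a: str, b: str) -> int:
          # prev[j] holds the edit distance between a[:i-1] and b[:j].
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              curr = [i]  # distance from a[:i] to the empty prefix of b
              for j, cb in enumerate(b, 1):
                  cost = 0 if ca == cb else 1
                  curr.append(min(prev[j] + 1,          # deletion
                                  curr[j - 1] + 1,      # insertion
                                  prev[j - 1] + cost))  # substitution
              prev = curr
          return prev[-1]

      assert levenshtein("kitten", "sitting") == 3
      ```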

      • astronaut_sloth@mander.xyz · 2 months ago

        Yeah, I’ve found AI-generated code to be hit or miss. It’s been fine-to-good for boilerplate stuff that I’m too lazy to do myself, but that’s super easy, CS 101-type material. Anything more specialized requires the LLM to be hand-held in the best case. More often than not, though, I just take the wheel and code the thing myself.

        By the way, I think it’s cool that you use Old English characters in your writing. In school I used to do the same in my notes to write faster and smaller.