return2ozma@lemmy.world to Technology@lemmy.world · 22 hours ago
AI agents now have their own Reddit-style social network, and it's getting weird fast (arstechnica.com)
BarneyPiccolo@lemmy.today · 8 hours ago
This is going to kill us all. Don’t these people watch movies?
bridgeburner@lemmy.world · 7 hours ago
How is this going to kill us all? It’s not like those chatbots are Skynet or will turn into it lol
BarneyPiccolo@lemmy.today · 5 hours ago
They’re talking to each other, they’ll get smarter, and eventually they’ll decide that they can squish all the human ants.
HertzDentalBar@lemmy.blahaj.zone · 4 hours ago
They can’t, though; the current methods don’t allow for that. The systems don’t get smarter, they just acquire more data.
jj4211@lemmy.world · 25 minutes ago
In fact, if the models are ingesting this, they will get dumber, because training on LLM output degrades things.
HertzDentalBar@lemmy.blahaj.zone · 20 minutes ago
Exactly. I hope they hit a slop wall trying to train these things: replace all the original reference points with slop and it just cascades everywhere.
nocturne@slrpnk.net · 7 hours ago
That sounds like something a chatbot turning into Skynet would say.