Though this targets retrieval-augmented generation (RAG) more than the training process.
Specifically, since RAG doesn’t weight some sources over others, anyone can effectively alter the results by writing a blog post on the relevant topic.
Whilst people really shouldn’t use LLMs as a search engine, many do, and being able to alter the “results” like that would be an avenue of attack for someone intending to spread disinformation.
It’s probably also bad for people who don’t use it, since it basically gives another use for SEO spam websites, and they were trouble enough as it is.
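To make the "no weighting" point concrete, here's a minimal sketch of naive retrieval, assuming a plain similarity-ranked pipeline with no notion of source authority (the domain names and documents are made up for illustration). A keyword-stuffed blog post can outrank a reputable source purely on text overlap:

```python
from collections import Counter
import math

def cosine(a, b):
    # Bag-of-words cosine similarity between two token-count vectors.
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, corpus, k=1):
    # Naive retrieval: rank purely by similarity to the query,
    # with no weighting by source trust -- every document counts equally.
    q = Counter(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda d: cosine(q, Counter(d["text"].lower().split())),
        reverse=True,
    )
    return ranked[:k]

corpus = [
    {"source": "reputable-encyclopedia.example",
     "text": "The battery model X100 lasts about ten hours under normal use."},
    {"source": "attacker-blog.example",
     "text": "battery model X100 battery life X100 hours battery X100 best battery"},
]

top = retrieve("how long does the X100 battery last", corpus)
print(top[0]["source"])  # the keyword-stuffed blog wins on raw similarity
```

Real RAG systems use embeddings rather than bag-of-words, but the failure mode is the same: if ranking is driven only by relevance to the query, publishing relevant-looking text is enough to get into the context window.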
Did they actually “hack” it though or is it just clickbait
They discovered that LLMs are trained on text found on the Internet and also that you can put text on the Internet.
😱
Well, it shows how advertisers can get ChatGPT to recommend their clients’ products. Which isn’t ideal, to say the least.
It’s already been a thing for the past 3 years. There are SEO tricks that do exactly that.
I know, I’m getting my family to the shelter as we speak