cross-posted from: https://infosec.pub/post/24994013
CJR study shows AI search services misinform users and ignore publisher exclusion requests.
Only yesterday, I searched for a very simple figure: the number of public service agents in a specific administrative region. This is, obviously, public information; there is a government site where you can get it. However, I didn't know the exact site, so I searched for it on Google.
Of course, the AI summary shows up first and gives me a confident answer, accurately mirroring my exact request. However, the number seems way too low to me, so I go check the first actual search result, the aforementioned official site. Google's shitty assistant took a sentence about a subgroup of agents and presented it as the total. The real number was clearly stated just before, and was about 4 times higher.
This is just a tidbit of information that any human with the source would have identified in a second. How the hell are we supposed to trust AI for complex stuff after that?
The AI models can be hilariously bad even on their own terms.
Yesterday I asked Gemini for a population figure (because I was too lazy to look it up myself). First I asked it:
It answered:
On a whim, I asked it again as:
And then it gave me the answer sweet as a nut.
Apparently I was being too polite with it?