UnderpantsWeevil@lemmy.worldEnglish12 hours
First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy.
translation assistance
The former I’m still looking sideways at.
The latter, probably the only truly benevolent use of LLMs. And even then, you’ll get plenty of grumbling.
- ThunderComplex@lemmy.todayEnglish11 hours
Eh, I think this sounds ok. If you prompt an AI to improve your text, you submit that, and another human reviews it (and maybe asks you to make changes), it should be fine. I can see this giving more people the ability to make edits (e.g. non-native speakers)
- ZILtoid1991@lemmy.worldEnglish17 hours
There should be only one exception: In case someone needs an example of an AI-generated text.
UnderpantsWeevil@lemmy.worldEnglish12 hours
LLMs are excellent tools for mapping one set of words and phrases to another, which is more or less exactly what you need out of a language translator.
- infeeeee@lemmy.zipEnglish2 days
Saved you a click:
After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.
First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing assistance tool. The policy says: “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”
The second exemption for LLMs is with translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.
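That “check that incorrect information hasn’t been injected” step is easy to make concrete. As a hedged illustration (not anything the policy itself prescribes), a word-level diff between your text and an LLM’s suggestion exposes every change for human review:

```python
import difflib

original = "Wikipedia are a free encyclopedia edited by volunteers."
suggested = "Wikipedia is a free encyclopedia edited by volunteers."

# Word-level diff: every change the tool made is visible, so a reviewer
# can check each edit against the cited sources before accepting it.
diff = list(difflib.unified_diff(original.split(), suggested.split(), lineterm=""))
for line in diff:
    print(line)
```

Each removed word shows up prefixed with `-` and each inserted word with `+`, which is exactly the review granularity an editor wants.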
- Goodlucksil@lemmy.dbzer0.comEnglish1 day
To save you another few clicks: this is the discussion (RfC) that implemented the changes, and the policy is linked at the top.
- Rioting Pacifist@lemmy.worldEnglish2 days
AIbros: we’re creating God!!!
AI users: it can do translation & reformatting pretty well but you’ve got to check it’s not chatting shit
- halcyoncmdr@piefed.socialEnglish2 days
The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they’re asking anyway. All output needs to be verified before being used or relied upon.
The “AI” is just streamlining the process to save time.
Relying on it otherwise is stupid and just proves instantly that you are incompetent.
- rumba@lemmy.zipEnglish17 hours
This is absolutely the case, and honestly, at least for now, how it needs to be across the board.
No one should be using AI to do things they’re incapable of doing (or undoing).
- 7101334@lemmy.worldEnglish17 hours
Relying on it otherwise is stupid and just proves instantly that you are incompetent.
Relying on it in any circumstances (though medical stuff is understandable if you’re simply too poor or don’t have access) while it is exhausting water supplies and polluting the planet is stupid and instantly proves that you are stupid and inconsiderate.
- Zagorath@quokk.auEnglish1 day
the user needs to be smart enough to do whatever they’re asking anyway
I’m gonna say that’s ideal but not quite necessary. What’s needed is that the user is capable of properly verifying the output. Anyone who could do the task themselves definitely can, but verification can be done more broadly: it’s an easier skill to verify a result than it is to obtain that result. Think of how film critics don’t necessarily need to be filmmakers, or of the P=NP question in computer science.
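The “easier to verify than to obtain” intuition is exactly the asymmetry behind NP. As a small illustrative sketch (subset sum chosen arbitrarily, not from the thread): checking a proposed solution is a quick scan, while finding one by brute force means trying subsets:

```python
from itertools import combinations

def verify(nums, subset, target):
    # Checking a proposed answer is cheap: a membership test and a sum.
    return all(x in nums for x in subset) and sum(subset) == target

def solve(nums, target):
    # Finding an answer by brute force is expensive: subsets are tried
    # one by one, exponentially many in the worst case.
    for r in range(1, len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)
print(answer, verify(nums, answer, 9))  # a valid subset, and True
```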
- Aralakh@lemmy.caEnglish13 hours
This is where domain expertise would come in, no? It’s speeding up the work but it usually outputs generic content, and whatever else it injects while hallucinating. Therefore the validation part holds up I’d say.
- Pyro@programming.devEnglish1 day
But if the output has issues, what’re you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI’s mistakes yourself.
- fartographer@lemmy.worldEnglish12 hours
If you’re unable to brute-force verification (research, testing, consulting the ancient texts), that’s where you stop what you’re doing and take a breath. Then consult an expert. Just like the film critic analogy: it’s easier to verify than to create, so you’re saving the expert time and effort while learning about something you were obviously already passionate enough about to have started this endeavor.
- Zagorath@quokk.auEnglish1 day
At the risk of sounding like an overly obsequious AI… You know what, you’re completely right. I’m honestly not sure what use case I was imagining when I wrote that last comment.
- 12 hours
You were thinking logically about a normal production chain. In that case, QA or whoever says “This is wrong, rework it and correct the issue” and that’s that. With AI, it does the whole thing over again and may or may not come back with the same issue or an entirely new one.
Redjard@reddthat.comEnglish1 day
Making text flow naturally, grouping and ordering information, good writing.
You can verify two texts have the same facts and information, yet one reads way better than the other. But writing a text that reads well is quite hard.
- 1 day
I can’t draw, but I could probably photoshop out some minor issues in an AI-generated image.
Redjard@reddthat.comEnglish1 day
If you don’t have the ability, then you would do what you would have 5 years ago: not do it.
Either submit without, or not submit at all.
- 2 days
Fucking hate those anti human filth pushing slop into everything. I want to take one apart with power tools.
- 17 hours
Yaaah, but I’ll need you to come in this weekend though. Yaaaahhhh…
- onlyhalfminotaur@lemmy.worldEnglish1 day
It holds up better than any movie from the late 90s that I can think of.
- XLE@piefed.socialEnglish2 days
I don’t think AI users would say it does reformatting either (if they’re honest): If you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of not changing text. It should only take one time for someone to get burned for them to learn that lesson.
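That lesson can even be checked mechanically. A minimal sketch (purely illustrative, not a real Wikipedia tool): compare the word sequences of the original and the “reformatted” text while ignoring whitespace, so any silent change to the words is caught:

```python
import re

def same_content(original: str, reformatted: str) -> bool:
    # Compare word sequences, ignoring whitespace and line wrapping,
    # so any silent change to the actual words is flagged.
    words = lambda s: re.findall(r"\S+", s)
    return words(original) == words(reformatted)

original = "The quick brown fox\njumps over the lazy dog."
rewrapped = "The quick brown fox jumps\nover the lazy dog."
altered = "The fast brown fox jumps over the lazy dog."
print(same_content(original, rewrapped))  # True: only whitespace moved
print(same_content(original, altered))    # False: a word was changed
```

A reformat-only request that fails this check is exactly the “it changed the text anyway” failure described above.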
- 2 days
Seems pretty reasonable to use it as a grammar checker. As long as it’s not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.
- ji59@hilariouschaos.comEnglish2 days
So, it should be used reasonably, as it should have always been.
- 2 days
Liar. I already read the article before opening the comments. YOU SAVED ME NOTHING.
;-)
- errer@lemmy.worldEnglish2 days
Wikipedia probably wants to sell access for LLMs to train on. It’s only valuable if Wikipedia remains a high-quality, slop-free source.
I think even AI zealots think there should be silos of content to train from that are fully human generated. Training slop on slop makes the slop even worse.
- Zagorath@quokk.auEnglish1 day
The content is CC licensed, but they are trying to block AI scraping because it overloads their servers. They have a paid API that uses a lot less compute for both Wikipedia and the AI, as well as being a revenue source for Wikipedia.
- ricecake@sh.itjust.worksEnglish15 hours
Yes, but…
https://en.wikipedia.org/wiki/Wikipedia%3ADatabase_download
That’s because viewing the page uses server resources, as does API access. If you want the data, you can download the database directly.
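The dumps are plain XML exports. A minimal sketch of pulling article text out of that shape of file, with a tiny inline sample standing in for a real dump (real dumps also declare an XML namespace, omitted here for brevity):

```python
import xml.etree.ElementTree as ET

# Tiny inline sample in the shape of Wikipedia's XML export format,
# standing in for a real file from dumps.wikimedia.org.
sample = """<mediawiki>
  <page>
    <title>Example</title>
    <revision>
      <text>Example article wikitext.</text>
    </revision>
  </page>
</mediawiki>"""

root = ET.fromstring(sample)
for page in root.iter("page"):
    # findtext pulls the title and the revision's wikitext body.
    print(page.findtext("title"), "->", page.findtext("./revision/text"))
```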
- 2 days
This was only done because the editors pushed to minimize AI involvement. There’s a comment here already mentioning that: https://lemmy.world/comment/22826863
FauxPseudo @lemmy.worldEnglish2 days
Seems like there should be a third exception: for those occasions where the article is about LLM-generated text. They should be able to quote it when it’s appropriate for an article.
- Zagorath@quokk.auEnglish1 day
That is a reasonable exception to no-AI policies in research papers and newspaper articles, but not for Wikipedia. As a tertiary source, Wikipedia has a strict “no original research” policy. Using AI to provide examples of AI output would be original research, and should not be done.
Quoting AI output shared in primary and secondary sources should be allowed for that reason, though.
- ricecake@sh.itjust.worksEnglish15 hours
Eh, that’s not quite original research. There are plenty of other examples of images and sound files created for Wikipedia. A representative example isn’t research, it’s just indicating what something is.
The Wikipedia article on AI slop and generative AI has a few instances of content that’s representative to illustrate a sourced statement, as opposed to being evidence or something.
It’s similar to the various charts and animations.
- SpaceNoodle@lemmy.worldEnglish2 days
An extremely measured and level-headed response. Kudos to Wikipedia for maintaining high standards.
kazerniel@lemmy.worldEnglish2 days
It has to be said, they originally changed their stance due to the considerable editor pushback when they tried to introduce LLM summaries at the top of articles. So kudos to the editor community’s resistance! ✊
- ricecake@sh.itjust.worksEnglish14 hours
Just for more clarity: they workshopped ideas for improving clarity and accessibility with some editors at an event. They ran some small experiments, then developed a plan to trial some of them and presented that plan to a wider audience for feedback. After they got the feedback, they decided not to proceed.
It’s not quite the editors pushing back on Wikipedia. Or rather, it’s not the “rebellion” people want to make it out to be.
https://www.mediawiki.org/wiki/Reading/Web/Content_Discovery_Experiments/Simple_Article_Summaries
It rubs me the wrong way when the process going how it should go gets cast as controversial and dramatic. Asking the community if you should do something and listening to them is how it’s supposed to go. It’s not resistance, it’s all of them being on the same team and talking.
kazerniel@lemmy.worldEnglish14 hours
Thanks for the reframe! From what I saw in Village Pump comments at the time, editors (including me) were upset because putting LLM output into Wikipedia articles seems like an idea so obviously at odds with Wikipedia’s values and strengths that it was a shock to see it taken as far as it got before the wider backlash. (Also, put into wider context, the whole world seemed to be jumping onto the LLM bandwagon at the time, so it was dismaying to see Wikipedia do the same.)
- banshee@lemmy.worldEnglish16 hours
Does anyone like LLM summaries in pages? This seems like a better fit for a browser extension to generate a summary on demand instead of wasting resources generating it for everyone. Google’s documentation is absolutely littered with the mess.
- SpaceNoodle@lemmy.worldEnglish2 days
Good point. The real strength of Wikipedia truly lies in the editors.
- 1 day
Wikipedia has banned AI-generated text, … with two exceptions
Rose@slrpnk.netEnglish13 hours
I was about to link to that, and specifically the stuff that now seems to have been moved to Signs of AI writing.
I thought that was a very interesting read, because it’s so much better than the usual AI ragebait that led to people getting pilloried over the fact that they actually know how to use em dashes. You can’t detect LLM use just by the fact that someone uses em dashes. It’s a complicated stylistic issue that usually boils down to “well, you know what ChatGPT output looks like when you see it”.
- 15 hours
Ok, but surely there must be an automated way. You can’t throw manpower at this because they will lose
- Aatube@thriv.socialEnglish7 hours
Actually the manual, high-volume https://en.wikipedia.org/wiki/Wikipedia:Recent_changes_patrol is ridiculously good
Rose@slrpnk.netEnglish13 hours
There are no reliable automated LLM output detectors. Anyone who says otherwise is either trying to sell you snake oil (or is unwittingly helping someone else sell snake oil, I guess).
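A toy example of why such detectors are unreliable: a heuristic built on the usual “AI tells” (em dashes, buzzwords) happily flags perfectly human prose. No real detector is quite this naive, but the failure mode is the same:

```python
# Stereotypical "AI tells" people look for (purely illustrative).
AI_TELLS = ["\u2014", "delve", "tapestry"]  # \u2014 is the em dash

def naive_detector(text: str) -> bool:
    # Flags text as "AI-generated" if it contains any stereotyped marker.
    return any(tell in text for tell in AI_TELLS)

# A human sentence full of em dashes gets flagged: a false positive.
human = "Emily Dickinson's poems\u2014full of dashes\u2014explore mortality."
print(naive_detector(human))  # True, despite being human writing
```

Statistical detectors are subtler than substring matching, but they share the underlying problem: the "tells" also occur in human writing, so false positives are unavoidable.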
- 12 hours
So the question still stands: how do they detect AI use? I’m all for it, btw. It’s absolutely necessary, but I’m afraid it’s impossible to implement.
- 2 days
I know at least one writing major who won an award from his volunteer work at Wikipedia. He did it as a hobby. They don’t really need AI, they need people like him.
- webp@mander.xyzEnglish2 days
Why do they need AI at all? Wikipedia had existed long before it and was doing fine.
- 2 days
You could make that argument about any tool Wikipedia editors use. Why should they need spellcheck? They were typing words just fine before.
…except it just makes it easier to spot errors or get little suggestions on how you could reword something, and thus makes the whole process a little smoother.
It’s not strictly necessary, but this could definitely be helpful to people for translation and proofreading. Doesn’t have to be something people are wholly reliant on to still be beneficial to their ability to edit Wikipedia.
- 2 days
Why should we use (insert tool) when we did just fine before?
Because when used correctly it can be great for helping you be more productive and for finding errors and making improvements. The two exceptions are for grammar, which AI does a surprisingly good job with. Would you have gotten mad if they’d used Grammarly >5 years ago? Having it rewrite an entire article is gonna be a bad idea, but asking it to rephrase a sentence or check your phrasing for potential issues is a much safer thing. Not everyone who speaks Spanish uses it the same way. Some words are innocuous in some regions but offensive in others.
You’re the one that implied it was.
- webp@mander.xyzEnglish2 days
Call me mad, call me crazy. AI shouldn’t be altering databases of knowledge, especially when it is so inconsistent. If there is a question about whether certain words are appropriate, why can’t you ask another human being? They have forums for a reason, or someone else comes along and fixes it. Or look at a dictionary. The amount of energy spent for dubious information, holy. It’s not like there is a shortage of human beings on earth.
- Qwel@sopuli.xyzEnglish2 days
https://en.wikipedia.org/wiki/Wikipedia:Writing_articles_with_large_language_models
https://en.wikipedia.org/wiki/Wikipedia:LLM-assisted_translation
The two related “policies” are rather short, you should read them if you haven’t.
AI shouldn’t be altering databases of knowledge, especially when it is so inconsistent
The policy only allows usage as an auto-translator (a task at which they are no worse than the old-style auto-translators that were always allowed) and as a spellcheck/grammar check (where they are also no worse than other allowed options).
None of those tools were previously seen as altering Wikipedia by themselves. The goal is that LLMs should be used and considered like they were.
To be clear, there have always been articles for creation submitted from clearly Google-translated text, and they have always been dismissed as slop. To get an auto-translated article accepted, you need to clean it up until all the information is correct and the grammar is good enough. This is a rather standard workflow for translations. The same thing should apply to LLMs.
The new issue here is that LLMs can “organically” change information when asked to translate. When a classic auto-translator changes the information, it often (not always) leaves a notable mess in the grammar. LLMs will insert their errors much more cleanly. This is acknowledged by both texts, and, well, the texts will change if that becomes a recurring issue.
- 1 day
AI isn’t altering databases of knowledge. AI is telling the writer there’s a better way to do this, and the writer has to explicitly change their wording.
You only know to look at a dictionary for alternative wordings if you know there’s a problem. How do you know there’s a problem?
If you ask someone else, what if that same someone uses your regional dialect and not the one that has problems? Your average writer can’t review every single word in the dictionary for every single article they edit. But AI can, and that’s something it’s actually good at. You may only know 5 Spanish speakers, but AI knows everything it was trained on.
- Phoenixz@lemmy.caEnglish2 days
So in other words, when used responsibly as a tool with limitations, AI has its uses? Though very environmentally unfriendly uses?
- davidgro@lemmy.worldEnglish2 days
I hoped the exceptions would be like “Quoted example text of LLM output, when it’s clearly labeled and styled separately from the article text.”
baltakatei@sopuli.xyzEnglish1 day
That exception would probably be twisted into permission to add an “AI summary” section to each article.
- albert_inkman@lemmy.worldEnglish22 hours
This is actually fascinating from a discourse perspective. The RfC mentions that AI detectors are unreliable, which is the whole problem.
I work on mapping public opinion across thousands of responses using AI as a tool to find patterns, not to detect individual writers. The difference matters.
We can detect patterns across a corpus without needing to prove any single person wrote it. That scale of analysis is what lets us see where opinion clusters, not just label individual posts.
Wikipedia’s ban is probably the right call for their use case. They need verifiable authorship for accountability. But we shouldn’t conflate that with not being able to use AI for understanding large-scale discourse.
The Velour Fog @lemmy.worldEnglish20 hours
You’re not working on anything, clanker.
For those wondering, check the timestamps in this account’s comment history, especially comments from 4 days ago or older. Fully formatted multi-paragraph comments made 10-30 seconds apart. This is an LLM-controlled account.
- luciferofastora@feddit.orgEnglish16 hours
I can’t even write a two-sentence comment in 30s without overthinking. I do like to use formatting, but that doesn’t make it quicker…
- Echo Dot@feddit.ukEnglish17 hours
Yeah you can tell because the comment doesn’t really say anything. It’s just a lot of text but no actual meaning.
The Velour Fog @lemmy.worldEnglish16 hours
Yup, one of the main hallmarks of AI-generated slop that’s often hard to explain unless you have an example like the above in front of you. A lotta words, but very little substance.
- hperrin@lemmy.caEnglish1 day
Good news. Hopefully they’ll get rid of those two exceptions in the future.
JohnEdwa@sopuli.xyzEnglish1 day
It would be pretty shitty to have to disable any AI-based grammar/spellcheckers (e.g. Grammarly) every time you edit Wikipedia, and to not be allowed to use translation tools.
Because those are the two exceptions.
- antonim@lemmy.worldEnglish1 day
Spell- and grammar-checking is useless anyway. If you don’t have at least one word underlined with red in every sentence, you’re not writing anything intellectually serious. 🧐
- Warl0k3@lemmy.worldEnglish1 day
Spelling/grammar checking and machine translation have been in use for decades on Wikipedia; the only difference is that AI has improved the usefulness of the tools for first-pass editing. I don’t believe the policy has even changed - you still had to be fluent in the language if you were using the old-style MTL tools, too.
Aside from generating videos of young girls with gigantic titties, this is the only thing generative AI is actually useful for.
- hperrin@lemmy.caEnglish17 hours
I still think it should be banned. It’s prone to just making shit up, so it’s not useful for any sort of professional work. If you had a guy named Al who would work for free, but who sometimes just made stuff up to make you happy, would you let Al work on important things?