Summary:
- Wikipedia tightens rules on AI, banning AI-generated content due to concerns about incorrect facts and fake references.
- Editors cannot use large language models like ChatGPT to write or rewrite articles, but exceptions allow for translation and non-substantive edits.
- Human editors still verify all changes, as AI is limited to support functions to prevent AI-generated content issues.
Wikipedia has officially tightened its rules on artificial intelligence, banning AI-generated content from being used to write or rewrite encyclopedia articles. The ruling reflects growing concern among editors that AI-generated content tends to contain incorrect facts, fabricated references, and language that does not meet Wikipedia's standards of neutrality and verifiability. The ban does, however, include two limited exceptions.
The New Ban
Under the new policy, editors cannot use large language models like ChatGPT or other AI-based models to write article text for Wikipedia entries. This covers both producing entirely new articles and rephrasing existing ones with AI-generated language, no matter how polished the resulting text may appear.
Why Wikipedia Acted
Wikipedia's volunteer editor community reports that AI-generated writing often introduces problems, since language models can hallucinate facts, invent references, or subtly alter meaning. Because Wikipedia is built on verifiable sourcing, even minor errors can undermine the platform's credibility.
Exception One: Translation
The first exception allows AI to help translate articles from one Wikipedia edition to another. For example, an editor may use AI to assist in translating a French Wikipedia article into English, but a human editor must confirm that the translation is accurate and faithful to the source material.
Exception Two: Non-Substantive Copy Edits
The second exception allows AI to assist with limited copyediting, such as fixing grammar, smoothing sentences, or correcting spelling. Such edits are permitted only where the AI introduces no new factual content, and any suggestions must be vetted by a human editor before publication.
Human Review Still Required
In both approved cases, AI plays only a supporting role. Human editors remain responsible for verifying every change, ensuring that no misleading wording, distorted meaning, or unsupported claims make their way into articles.
Editors Voted for the Change
The policy shift came after months of discussion in the Wikipedia editing community, prompted by a rise in low-quality AI-generated submissions, and a large majority of editors supported the change. Many argued that AI-generated articles were becoming harder to detect and increasingly harmful to article quality.
Part of a Broader Push Against AI Slop
The ban is part of Wikipedia's wider effort to combat what its editors call "AI slop": mass-produced, low-quality AI text flooding the site. Over the past few months, Wikipedia has also sped up the deletion of dubious AI-written pages.
Not an Outright Repudiation of AI
Wikipedia is not renouncing AI altogether. The platform continues to permit AI in strictly regulated support functions where humans remain in command. The policy specifically targets AI acting as the author of encyclopedic knowledge.
Why It Matters
Wikipedia's decision, as one of the most widely used reference sources in the world, may influence how other knowledge platforms treat AI-generated content. When it comes to factual public information, Wikipedia has made clear that it believes human judgment should remain at the center of the process.