Wikipedia talk:WikiProject AI Cleanup

Source: Wikipedia, the free encyclopedia.
This page is within the scope of WikiProject AI Cleanup, a collaborative effort to clean up artificial intelligence-generated content on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

Suggest "must-visit" as an AI catchphrase

Hello! The AI catchphrases list is a great idea, and based on the article it just drew my attention to I'd like to suggest putting "must-visit" and "must-see" on your list too. AI seems to love those and they're definitely not encyclopedic. Thanks for the useful work you're doing! ~ L 🌸 (talk) 05:18, 5 December 2023 (UTC)[reply]

Agree w/ @LEvalyn: That's how I found Hamsaladeevi [1]. Est. 2021 (talk · contribs) 13:05, 5 December 2023 (UTC)[reply]
Added both, thanks! ARandomName123 (talk)Ping me! 13:35, 5 December 2023 (UTC)[reply]
I've also found "stunning natural beauty" to be quite a common tell. It really does like sounding like a bad travel blog... Andrew Gray (talk) 23:12, 5 December 2023 (UTC)[reply]
Added as well, thanks! I've noticed they seem to use "stunning" a lot when describing places, but that by itself contains too many false positives. ARandomName123 (talk)Ping me! 14:56, 6 December 2023 (UTC)[reply]
Agreed. "In conclusion..." is a similar tell to this I feel - lots of false positives for the phrase, but when a GPTed section appears, it really sticks out like a sore thumb. Andrew Gray (talk) 01:15, 7 December 2023 (UTC)[reply]
Yes, I think the tell for the final paragraph isn't any particular phrase so much as it is "Conclusion phrase, followed by a brief paragraph." You know it when you see it. Looks like an undergraduate exam paper. -- asilvering (talk) 02:09, 8 December 2023 (UTC)[reply]

Model for Emulating Wikipedia Articles.

 – fixed link -- Maddy from Celeste (WAVEDASH) 19:26, 6 December 2023 (UTC)[reply]
Thank you! Terribilis11 (talk) 19:37, 6 December 2023 (UTC)[reply]

Hello, I'm part of a research project at Stanford's OVAL. We are studying how to build tools that are factually grounded, which, as you can imagine, is quite a challenge. We have built a model that appears to be relatively accurate and are hoping for Wikipedia collaborators to participate in evaluation. We have built a UI tool that displays a human-written article alongside an article from our model, and participants would score both. The UI tool has been built to streamline the evaluation process, even including the relevant snippets of cited sources. We have monetary compensation available for participants.

While none of the articles produced by our model are intended to be published, there is potential for the tool to be integrated into Wikipedia:New Pages Patrol efforts, perhaps as a comparison between draft articles and our model's outputs to see where improvement could be necessary. There is more information on our m:Research:Wikipedia type Articles Generated by LLM (Not for Publication on Wikipedia) talk page.

If you are interested, please fill out this form: https://docs.google.com/forms/d/e/1FAIpQLSfaivclenvs9pdnW7cFcsTyvYy-wSCR_Vr_oYzJx_2bm-ZAqA/viewform?usp=sf_link

We are currently beginning evaluation, so potentially only earlier responders will be able to participate, as funding is limited.

Thank you Terribilis11 (talk) 19:13, 6 December 2023 (UTC)[reply]

Thanks a lot for this project! This sounds very interesting indeed, and we would be glad to collaborate with your project if needed. ChaotıċEnby(t · c) 20:47, 9 December 2023 (UTC)[reply]

Past AI-generated content debacle in Wikiproject Video games

Back in August, there was an event where an editor over at WP:VG generated 24 articles entirely with AI. Some of these were deleted entirely, but the majority were redirected with still-accessible page histories, and around two articles still stand now (though trimmed). Only one article has been completely rewritten and repaired, and that's Cybermania '94. The editor in question was also blocked.

This incident may be something worth noting somewhere in this project, whether to have more examples of AI generated content, to reconstruct articles that formerly used AI from the ground up, or whatever other reason. NegativeMP1 01:58, 7 December 2023 (UTC)[reply]

Update: Make that two, Stick Shift (video game) just got recreated without the usage of AI. NegativeMP1 17:04, 7 December 2023 (UTC)[reply]

User warnings

If you find an AI-using editor, make sure to warn them with {{subst:uw-ai1}}, which should be coming to Twinkle soon. Ca talk to me! 00:05, 13 December 2023 (UTC)[reply]

 You are invited to join the discussion at Wikipedia talk:Large language model policy#RFC, which is within the scope of this WikiProject. Queen of Hearts ❤️ (no relation) 22:31, 13 December 2023 (UTC)[reply]

Untitled

Moved from Wikipedia talk:WikiProject AI Cleanup/AI images in non-AI contexts – QueenofHearts 23:02, 15 January 2024 (UTC)[reply]

Currently, the page Artificial planet uses an AI image.

(By the way, if there's a better place to bring things like this to attention, please let me know; this is the first WikiProject I've been a part of and I am inexperienced.) EspWikiped (talk) 15:44, 20 December 2023 (UTC)[reply]

Thanks, updated! 3df (talk) 19:32, 20 December 2023 (UTC)[reply]

Was this article created by AI?

https://en.wikipedia.org/w/index.php?title=Poverty_in_Turkey&oldid=986832491

I am suspicious of the many offline references and further reading. But the author has been blocked so I suppose no point asking them. I don’t know much about Chat GPT etc. Is there a formal investigation process to look at all the other stuff created by User:Torshavn1337 and their sockpuppets? I only intend to fix Poverty in Turkey (no need to delete article as subject is notable) not any other articles such as Foreign relations of Turkey. Wikipedia:WikiProject Turkey seems pretty moribund so I think I would be wasting my time asking them anything. Any ideas? Chidgk1 (talk) 11:14, 24 December 2023 (UTC)[reply]

Driveby comments: The tone of this article strikes me as awkward, but not AI-generated; if it was AI-generated, it wasn't by a major LLM. Courtesy ping: 3df, who is more experienced on this. I don't have time to check the references. There also isn't an official investigation process (yet), but here works fine. Queen of Hearts ❤️ (she/they 🎄 🏳️‍⚧️) 23:33, 24 December 2023 (UTC)[reply]
I think this was actually originally a copyright violation of this report, but with the sources scrambled in some random order. The article is likely too early to be AI, which wouldn't have been that coherent at the time. 3df (talk) 01:41, 25 December 2023 (UTC)[reply]
This is a great point & checks out for why there were so many completely unlinked sources. talk 01:52, 25 December 2023 (UTC)[reply]
Ah I see thanks. I was wondering why all the sources were from 2016 and before when the article was created in 2020. Chidgk1 (talk) 12:57, 26 December 2023 (UTC)[reply]
I'm inclined to agree with QoH, but I agree that the article is suspicious nonetheless. The sources certainly need to be checked. talk 23:53, 24 December 2023 (UTC)[reply]
I don't think so, it strikes me more as poorly written. It can be cleaned up in due time. TheBritinator (talk) 00:02, 25 December 2023 (UTC)[reply]

Templates for discussion

The templates … talk 02:06, 25 December 2023 (UTC)[reply]

Can these phrases really be used to identify AI-generated content?

I have some doubts that most of the phrases at Wikipedia:WikiProject_AI_Cleanup/AI_Catchphrases are useful for identifying AI-generated content. As a test, I clicked on the first link (stand as a testament) and opened the first 3 pages (Domenico Selvo, Chifley Research Centre, and Apollo (dog)). In each case, the catchphrase was already present in 2021 (see [2], [3], and [4]), i.e. before the official release of all the main LLMs today. So it is very unlikely that the phrases in these articles were created using AI.

Another reason for doubt is that AI output is based on the frequency of formulations used in the training set. Since Wikipedia is a big part of the training set, any phrases that are frequently used on Wikipedia may also be frequently used in AI output.

There may be some rather obvious phrases useful for identifying AI content, such as "As a large language model, I...", "As an AI language model, I...", and the like. But most of the phrases listed here do not fall into that category. Phlsph7 (talk) 08:28, 25 December 2023 (UTC)[reply]
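The dating argument above (a phrase already present before the main LLMs were released cannot be an AI tell in that article) can be sketched in a few lines of Python. This is purely an illustrative sketch by way of example; the cutoff date and function name are assumptions, not an established detection method:

```python
from datetime import date

# ChatGPT's public release date; any catchphrase occurrence that predates it
# is a guaranteed false positive for "this text was written by ChatGPT".
CHATGPT_RELEASE = date(2022, 11, 30)

def is_plausible_ai_tell(phrase_added_on: date) -> bool:
    """Return True only if the phrase entered the article after LLMs were widely available."""
    return phrase_added_on >= CHATGPT_RELEASE

# The articles checked above all contained "stand as a testament" by 2021:
assert not is_plausible_ai_tell(date(2021, 6, 1))   # pre-LLM, so a false positive
assert is_plausible_ai_tell(date(2023, 3, 15))      # post-release, worth a closer look
```

Even then, a post-2022 date only makes AI involvement possible, not proven, which is consistent with the frequency argument in the next paragraph.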

There were far more good examples in these search results a month ago, but everyone's been doing a great job of cleaning it all up and leaving the acceptable stuff. Those searches might not have any problematic results left. 3df (talk) 16:49, 25 December 2023 (UTC)[reply]
In that case, it might be best to remove the phrases. The page gives the impression that these phrases can be used as an easy and reliable way to identify AI-generated content. Since the great majority of the search results are false positives, this is likely to do more harm than good. Except for the obvious phrases mentioned before, I don't think there are any catchphrases that could be used to reliably identify AI-generated content. Phlsph7 (talk) 17:03, 25 December 2023 (UTC)[reply]
Yes, I think it's time to put these away. A written guide to finding AI content would be better. I'll get a start on it. 3df (talk) 20:04, 25 December 2023 (UTC)[reply]
That sounds like a good idea. You should probably mention made-up references and obvious hallucinations, like events that never took place. Editor behavior could be another factor, such as when a high number of substantial content additions are made in significantly less time than it would take to type them. But generally speaking, I think AI involvement is very difficult to detect, and online detectors are far too unreliable to be of use. Phlsph7 (talk) 21:01, 25 December 2023 (UTC)[reply]

I appreciate the effort in trying to help editors identify ChatGPT responses but I'm not sure that the recent adjustments solve the problem. Depending on the prompt used, the responses can have all kinds of linguistic problems or none at all. For example, I used the prompt write a wikipedia article on the topic "Metaphysics" and got the following result:

ChatGPT response

Metaphysics is a branch of philosophy that explores the fundamental nature of reality, including the relationship between mind and matter, substance and attribute, potentiality and actuality. The word "metaphysics" comes from two Greek words that, together, literally mean "after physics". The name was given c.70 B.C.E. by Andronicus Rhodus, the editor of the works of Aristotle, because in his list of Aristotle's works, the Physics comes before the works dealing with metaphysics.

Overview

Metaphysics attempts to answer two basic questions in the broadest possible terms:

1. "What is there?" 2. "What is it like?"

A person who studies metaphysics is called a metaphysicist or a metaphysician. The metaphysician tries to clarify the fundamental notions by which people understand the world, including existence, objects and their properties, space and time, cause and effect, and possibility.

A central branch of metaphysics is ontology, the investigation into what types of things there are in the world and what relations these things bear to one another. The metaphysician also attempts to clarify the notions by which people understand the world, including existence, objecthood, property, space, time, causality,

After an initial look at the response, I don't think it has any of the "typical" problems discussed here. My suggestion would be to be very careful with any concrete guides on how to identify AI output. It might also be a good idea to follow reliable sources concerning how to identify it rather than presenting our personal research as a definite guide. I assume many editors have very little background knowledge on LLMs, so we should not give them the false impression that there are generally accepted methods for identifying LLM output. Phlsph7 (talk) 08:57, 26 December 2023 (UTC)[reply]

Yeah, there aren't any definite methods to identify LLM output, and the best detectors will always lag months or years behind the LLMs themselves (in a very crude way, it can be seen as similar to how GANs work). Of course, there are a few words that make it 100% certain that an LLM wrote it (e.g. As of my last knowledge update in January 2022), but there isn't any criterion or tool that can reliably decide both ways (and, since LLMs can get closer to human speech than the variance inside each group, and text can't be easily watermarked like images, it's likely there won't be anytime soon). ChaotıċEnby(t · c) 10:22, 26 December 2023 (UTC)[reply]
The stuff I've written about so far covers problems we keep seeing exhaustively in practice. The list is turning out more like a "what do AI edits usually do incorrectly that needs to be fixed" guide than a "how can you tell if text was written by AI" guide. I can add wording to clarify that, and also that we can't trust those detectors. Several examples for each section would be very helpful, but I'm really not looking forward to sifting through the hundreds of AI diffs for them. 3df (talk) 20:41, 26 December 2023 (UTC)[reply]
I think it's a good idea to have a guide on what editors are supposed to do once they have identified AI-generated text even if the instructions cannot be used to identify whether a text is AI-generated.
By the way, I added a brief explanation of some of the points discussed here to the project page. Phlsph7 (talk) 12:48, 27 December 2023 (UTC)[reply]

Proposal: adopting WP:ADVICEPAGE
This would entail a move to Wikipedia:WikiProject AI Cleanup/Large language models. The page would be tagged with Template:WikiProject advice. It would be, in some way, prominently linked from the project's main page. I further suggest some rearrangement of content on that page and the project's main page, namely, the section Wikipedia:Large language models § Handling suspected LLM-generated content could be merged with the related content on the project's main page (Wikipedia:WikiProject AI Cleanup#Editing advice and most of the templates listed in the "Templates" section). The "See also" section could be combined with Wikipedia:WikiProject AI Cleanup § Resources on the main page. The advice page would therefore consist of the first two sections of WP:LLM: "Risks and relevant policies" and "Usage".

The motive behind this proposal is keeping things coherent and avoiding duplication. —Alalch E. 00:40, 8 January 2024 (UTC)[reply]

I like the idea of keeping things coherent and avoiding duplication. One possible concern would be that the purposes of WP:LLM and WikiProject AI Cleanup are not identical. The purpose of the cleanup project is narrower, since it is mainly concerned with cleaning up problems created by AI-assisted contributions. The purpose of the essay is wider since, in addition to that, it contains advice on how LLMs can be used productively and how to avoid some of their pitfalls in the process. Phlsph7 (talk) 09:41, 10 January 2024 (UTC)[reply]
I am concerned that some things like "Every edit that incorporates LLM output should be marked as LLM-assisted by identifying the name and, if possible, version of the AI in the edit summary. This applies to all namespaces." are worded as if they were policy, but they are not. And "In biographies of living persons, such content should be removed immediately—without waiting for discussion, or for someone else to resolve the tagged issue." is actually not supported by policy. If you are reverting content exclusively because you think it is AI-generated and you have no specific concern about accuracy, sourcing, or copyright violations, then that revert goes against policy. MarioGom (talk) 11:12, 11 January 2024 (UTC)[reply]
Yes, actually, that paragraph was intended to mean that non-policy compliant LLM-generated BLP content should be removed, specifically, not just any LLM-originated content, which I have clarified in this edit.—Alalch E. 17:54, 12 January 2024 (UTC)[reply]

"Conclusion" sections in AI generated content - one caught in the wild here?

Hi all,

First of all: I am waaaaay out of my depth here, and my apologies if this goes nowhere - fine with that. Please see pretty much any of my contributions, where I poke fun at myself for being a "Sysop" who doesn't actually understand how the internet works.

It would appear to me that there are any number of AI "conclusions" or "summary" generators out there in the wild.

Please see this for context.

Shirt58 (talk) 🦘 09:55, 15 January 2024 (UTC)[reply]

Yep, the whole draft you point to appears to be very ChatGPT-like. The key things are the "Book Title: Subtitle" style in the first section, which ChatGPT nearly invariably generates, but also having a plan-like structure with many short subsections restating their title in one or two fluffy sentences (a product of formatting to Wikipedia the bullet lists of "key points" that ChatGPT generates), and of course the "Conclusion: blahblah" last part which you aptly found. Unfortunately, tools to detect whether a text is AI or not are often less than reliable (if not completely unreliable), as they lag months or even years behind the generative LLMs themselves. ChaotıċEnby(talk · contribs) 10:13, 15 January 2024 (UTC)[reply]
Would be great if there were a reliable tool to check these with; I use this GPT-2 Output Detector Demo, and it must be an AI shill because it always thinks everything is fine and nothing is AI-generated.
Would be even better if such a tool were easily accessible via one of the common toolsets, for use in AfC/NPP work. -- DoubleGrazing (talk) 11:18, 15 January 2024 (UTC)[reply]
Unfortunately, GPT-2 tools aren't too reliable given that most stuff generated from GPT today is from GPT-3.5 (including ChatGPT) or even GPT-4 (a completely different model). The sad reality is that, for now, LLM detectors have had to play catch-up with generative LLMs, in a way reminiscent of what happens inside generative adversarial networks (although I don't think generative LLMs use LLM detectors in their training, their rate of improvement is nonetheless high enough for the effect to be similar).
And this is one of the reasons we're here as a project – to build such a tool where none existed before (at least in the more specific, and likely much easier, Wikipedia use case), to assist us with this in the future! ChaotıċEnby(talk · contribs) 12:07, 15 January 2024 (UTC)[reply]

AI-generated imagery

This might be me, but should we be using AI-generated imagery in articles unrelated to artificial intelligence? — talk) 21:53, 15 January 2024 (UTC)[reply]

We shouldn't, no. On top of the ethical concerns, there's the issue that AI art is often pretty inaccurate, while misleading the user into thinking it is a real photograph or illustration. We have Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts to deal with these cases. ChaotıċEnby(talk · contribs) 22:09, 15 January 2024 (UTC)[reply]
It depends on the case: there are lots of articles where an illustration made using AI could be very valuable and appropriate, given that it doesn't have misgeneration issues and is clearly labeled as made using AI.
Once there is a better image, it can still be replaced; it shouldn't replace but complement existing images. If there was no image showing what the art style cubism looked like, an AI-made image would be useful and better than no image. It's a tool, and people also sometimes add images made or modified using the tool Photoshop to articles when that's due. Prototyperspective (talk) 17:32, 21 January 2024 (UTC)[reply]
talk) 17:40, 21 January 2024 (UTC)[reply]
I know that very well since I even created the Wikimedia Commons category for that. Illustrations and artworks are very much missing. Those two things are not mutually exclusive. I would very much support and welcome better interfacing between editors who know which images are missing and people who have the artistic skills to implement any of the requested illustrations. I have tried to do so earlier, listing many science-related images that are missing even in very popular articles on major subjects. AI software is a very useful tool to close visualization gaps, and its outputs can be replaced with better ones. They can also serve to make people aware of which images are currently missing, so they see an AI image and think "conceptually that image was missing, but it isn't as good an illustration as it could or should be, so I'll replace it". There could be a project that seeks to replace AI images with better images made manually (or to add missing illustrations), such as by asking artists via mail to license an identified relevant image under CC BY. Human artists are also inspired by and learn from copyrighted works, which they usually can't and don't all list. I'm interested in how things are and can be done in the real world in practice. If you have an idea of how to get more illustrators on board or how to better engage artists, please go ahead, and if possible let me know about it, since I always come across lots of articles in need of illustrations (often where a visualization/illustration would be particularly useful). Prototyperspective (talk) 17:55, 21 January 2024 (UTC)[reply]

AI-upscaling image cleanup template

Should there be an equivalent of {{…}} for images that should, per MOS:IMAGES, be replaced with their originals? Either a separate template or an option on {{AI-generated}} that changes the message.

I'm thinking of articles I've seen like A Stranger from Somewhere where an editor has, with good but misplaced intentions, fed a lot of old film stills and 1910s publicity photos through an AI upscaler. Belbury (talk) 16:17, 16 January 2024 (UTC)[reply]

Yep, that would be a good idea for a template. There is already {{AI upscaled}} on Commons, but a tag (whether at the top of the article or inline) could be a good addition. It's better to have it be a separate template as {{AI-generated}} categorizes the article into Category:Articles containing suspected AI-generated texts, we could have an equivalent category for articles containing these images then. ChaotıċEnby(talk · contribs) 16:26, 16 January 2024 (UTC)[reply]
Template (and corresponding scaffolding) created at {{Upscaled images}}. Belbury (talk) 16:01, 23 January 2024 (UTC)[reply]

Wikimedia Commons AI

I would like to hear your opinions about my proposal for a new Wikimedia project called Wikimedia Commons AI. I'm looking forward to hearing your thoughts! S. Perquin (talk) (discover the power of thankfulness!) – 09:13, 20 January 2024 (UTC)[reply]

One issue I can think of is that of the edge cases, like human-generated images that are later enhanced by AI tools. What do you propose for these? To look at the much bigger picture, a strong categorization of human vs AI images on Commons could achieve the same results as what you suggest without the need for a redundant project, and better handle edge cases than having the whole thing divided into two different projects. We already have various kinds of media (images, sounds, videos, etc.) on Commons, why can't we deal with having both human and AI-generated media if they are explicitly distinguished as such?
Another (small) issue: I don't think you can have a domain name in .ai.org as the second-level domain appears to have already been registered. ChaotıċEnby(talk · contribs) 09:41, 20 January 2024 (UTC)[reply]

Collaborating with WikiProject Unreferenced articles

I think that this project and WP:WikiProject Unreferenced articles have a lot in common and we should collaborate with each other, because both deal with article reliability. But I don't really know what exactly both projects could collaborate on... CactiStaccingCrane (talk) 14:46, 20 January 2024 (UTC)[reply]

Idk what we'd do either, but yeah, I'd support in theory. QueenofHearts 04:47, 1 February 2024 (UTC)[reply]

User:SheriffIsInTown

WP:LLM to quickly generate Wikipedia articles and even using it to generate robotic rationales to nominate Wikipedia articles (i.e. Wikipedia:Articles for deletion/Sher Afzal Marwat (2nd nomination)). Please take a look at their recent articles and fix the tone or tag accordingly. 59.103.110.154 (talk) 23:01, 22 January 2024 (UTC)[reply]

Looking at their articles, the "is clearly using low-quality WP:LLM to quickly generate Wikipedia articles" claim seems false to me; they had only created six articles (although I might be missing some articles created from redirects) in January before this post, none of which look like AI. Now, the "and even using to generate [sic] robotic rationales to nominate Wikipedia articles (i.e. Wikipedia:Articles for deletion/Sher Afzal Marwat (2nd nomination))" claim: the AfD you linked does read AI, but their articles do not, and either way, we can't really do anything about behavioral issues. The accused also has not nominated an AfD since, so I'd just drop it. Queen of Hearts (chatstalk • they/she) 01:47, 16 February 2024 (UTC)[reply]

This WikiProject's bottom marquee

I spent the last 15 minutes or so trying to figure out how to boldly reintroduce the collapsible feature of the marquee that was removed in this edit in December, but I couldn't figure out a way that preserved its "look". I'm bringing this up rather than just abandoning the idea of it being (re)hidden because it seems to just be present for "fun" (i.e. unless I'm missing something it doesn't seem to serve a clear or unique purpose in the context of the WikiProject) and something about it caused some rather immediate nausea for me (maybe the way it's moving, but I usually need more like 15 to 30 minutes for that kind of motion sensitivity, not three seconds :-/). Is there any way for collapsibility to be reintroduced by someone who has more of an idea of what could be done to collapse the marquee without compromising the way it looks when unhidden (or compromising the ability to re-hide the content, as {{show}} would do)? Or no, and then my recourse is to hide it in my own user CSS? - Purplewowies (talk) 23:21, 8 February 2024 (UTC)[reply]

Sorry for that, unfortunately the collapsible feature broke the marquee on some devices. I'm thinking of ways to have it work while being able to hide it, I'll update you! (I'll remove it in the meanwhile as accessibility is more of a priority than marquees) Chaotıċ Enby (talk · contribs) 23:24, 8 February 2024 (UTC)[reply]
Wow, that was fast! I had considered just removing it myself, honestly, but that solution felt too no-fun-allowed for me to do boldly instead of asking about what to do instead. :P Thanks for the quick response, and I hope you manage to find a way for it to work! - Purplewowies (talk) 23:42, 8 February 2024 (UTC)[reply]

Reporting page?

There's a bit of a discussion on Bluesky of statements in Wikipedia being sourced to LLMs. One reader asks for "Advice on how to report AI-Generated rubbish to Wikipedia so it can be purged."

I've said to just edit it, noting that you removed a claim sourced to LLM output. But unsure not-yet-editors are perennial.

So is there anywhere that readers can report possible or likely LLM citation? - David Gerard (talk) 12:44, 24 February 2024 (UTC)[reply]

 You are invited to join the discussion at Wikipedia:Village pump (idea lab)#Have a way to prevent "hallucinated" AI-generated citations in articles, which is within the scope of this WikiProject. Chaotıċ Enby (talk · contribs) 01:35, 27 February 2024 (UTC)[reply]

Use of AI-generated news sites as sources

This is a bit of a related topic that I haven't seen many people touch on so far. There's been a rise in websites like BNN Breaking (which is on the WP spam list) that simply reword existing news articles or make up fake news entirely (as opposed to established sources like CNET that have some articles written by AI). Some cases even involve cybersquatting on domains owned by defunct news sources. Should we keep track of the use of these sources in articles (likely by good faith editors who believe the site is legitimate)?

Some articles about this phenomenon:

wizzito | say hello! 06:53, 1 March 2024 (UTC)[reply]
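The tracking suggested above could be sketched roughly as follows. This is a hypothetical illustration in Python, not an existing project tool; the domain list and function name are assumptions, and a real list would come from WP:RSN discussions and the spam blacklist:

```python
import re

# Illustrative list of AI content-farm domains; BNN Breaking is the one
# named in the discussion above as being on the spam list.
SUSPECT_DOMAINS = {"bnnbreaking.com"}

def find_suspect_citations(wikitext: str) -> list[str]:
    """Return URLs in a page's wikitext that point at suspect domains."""
    urls = re.findall(r"https?://[^\s|\]}<]+", wikitext)
    return [u for u in urls
            if any(d in u.lower() for d in SUSPECT_DOMAINS)]

sample = ("Some claim.<ref>https://bnnbreaking.com/story-123</ref> "
          "Another.<ref>https://example.org/a</ref>")
assert find_suspect_citations(sample) == ["https://bnnbreaking.com/story-123"]
```

In practice this kind of scan would run over a database dump or via the search API rather than one page at a time, but the matching logic would be much the same.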

The consensus on WP:RSN has been to blacklist these things as soon as they show up - but a list sounds like a good idea - David Gerard (talk) 12:20, 2 March 2024 (UTC)[reply]

Regarding more information about it

Hello there,

I was looking through the noticeboard and saw this project, and I was a bit interested in joining. Can you give a bit of an introduction, like: what are the criteria to be a participant, what do you expect a participant to know or be good at, is there any fixed goal to stay in the project, and am I eligible? I have gone through the page lightly, but I wanted some basic understanding so I can decide whether to join or not.

Thanks

Yamantakks (talk) 10:39, 15 March 2024 (UTC)[reply]

Hello! Like any WikiProject, there are no eligibility criteria for participants, you are free to participate whether or not you put your name on the list :). Cheers! Remsense 13:27, 15 March 2024 (UTC)[reply]
@Remsense,
Thanks for replying. My main question was: if I become a participant, what am I supposed to do, and what is the aim of this?
I don't mean to put down WikiProjects, but I am rather new to these, so I am confused and asking for clarity.
Waiting for a reply,
Yamantakks (talk) 08:53, 17 March 2024 (UTC)[reply]
The goal is to help spot articles that have been generated by AI without human verification, and verify if they are accurate and conform to our policies (which they very, very often don't—you'll likely see peacock words and other non-encyclopedic language sprinkled around ChatGPT-made "articles"). Chaotıċ Enby (talk · contribs) 13:30, 17 March 2024 (UTC)[reply]
@Chaotic Enby,
Ok, thank you for the information. I think I am interested.
Yamantakks (talk) 03:25, 19 March 2024 (UTC)[reply]

Tangential but amusing case

See Talk:Ideogram and the associated pages' revision history, thanks to @Malerisch for pointing out why this page was attracting graffito after graffito. Remsense 14:19, 18 March 2024 (UTC)[reply]

"Unsupervised" AI-generated image?

Hiya! I got pointed toward this project when I asked about declaration of AI-generated media in an external group. I noticed that the article for Kemonā uses a Stable Diffusion-generated image, which has not been declared. I noticed it, as the file has previously been up for deletion-discussion on Commons, but was kept as it was "in use". If used, shouldn't AI-generated media be declared in its description / image legend? EdoAug (talk) 23:24, 10 April 2024 (UTC)[reply]

@EdoAug I don't know that there's a guideline about this in specific but I'd say so. The copyright of Stable Diffusion images is still in the courts afaik, so we might end up having to remove all of those images in the future. -- asilvering (talk) 02:50, 28 April 2024 (UTC)[reply]


Possible use of AI to engage in Wikipedia content dispute discussions

It was suggested to me that this may be a good place to ask. A response seemed particularly hollow at Talk:Canadian_AIDS_Society, so I checked it on GPTZero and ZeroGPT. The first says 100% AI, and the latter says about 25% likely. Quillbot says ~75% likely. So, the results vary widely based on the checker used. Is it actually likely that certain 100% manually written content would get tagged as 100% AI on GPTZero? Do any of the human observers here feel the response in question could be 100% human-written? Graywalls (talk) 00:19, 28 April 2024 (UTC)[reply]

These detectors are really unreliable, but from looking at the linked comment (and only this comment), I'm certain that it is AI generated. 3df (talk) 02:47, 28 April 2024 (UTC)[reply]
You mean the one that starts "I appreciate your third-party perspective and the insights you provided...", right? There's almost no way an actual human wrote that. -- asilvering (talk) 02:49, 28 April 2024 (UTC)[reply]
That one came up as 100%. Then, another one of that user's response came up as 80% or so AI in GPTZero. Graywalls (talk) 09:52, 28 April 2024 (UTC)[reply]
I really recommend not caring about the detectors. A broken clock saying it's midnight isn't any more convincing to me than one saying it's 4:30. Remsense 16:12, 28 April 2024 (UTC)[reply]
Yeah, I recommend just eschewing the detectors entirely. Point being, "if it quacks like a duck", and all that. Remsense 03:34, 28 April 2024 (UTC)[reply]

By the way, that Canadian AIDS Society's Establishment section returns 100% AI on GPTZero as well and sure looks pretty hollow to me. Graywalls (talk) 23:14, 28 April 2024 (UTC)[reply]

There are quite a lot of citations on that section, though, so the best action here is simply to see if they verify the text. -- asilvering (talk) 23:17, 28 April 2024 (UTC)[reply]

Wikipedia policy on AI generated images

I found an article about a historical individual that contained a fully AI-generated image. I mentioned this on the Teahouse page, and the image eventually got removed because it was original research. I tried to find some Wikipedia guideline or rule about the use of AI images, but I couldn't find any. Since this WikiProject is about AI content, I came here to ask about the official Wikipedia policy on AI images, if there is any. Are AI images supposed to be removed simply because they're original research, or is there something specific regarding AI images that warrants their removal? I'm looking for details regarding the use of AI images on Wikipedia and when AI images are acceptable to use. Thank you all in advance for your responses.

Broadhead Arrow (talk) 15:19, 5 May 2024 (UTC)[reply]

Hi! You can put it on the noticeboard at Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts. I don't think there is a specific policy about images, but they are usually only vaguely accurate and/or relevant, and nearly always original research. A few, like that on Listenbourg, are kept specifically because they were used in reliable sources talking about the topic and have encyclopedic value on their own. Chaotıċ Enby (talk · contribs) 16:04, 5 May 2024 (UTC)[reply]
The most relevant links I can come up with: There was this addition to the image use policy: special:permalink/1178613191#AI-generated images, which was reverted. See also c:Commons:AI-generated media. See also this user talk discussion (some examples have survived) and the Commons deletion discussions that deleted most of the concerned images.—Alalch E. 18:24, 5 May 2024 (UTC)[reply]