Commons talk:Project scope
WikiJournal of Humanities
In the section on excluded content, should we mention that original academic papers may be submitted to Wikiversity:WikiJournal of Humanities? Pinging @Bluerasberry for his thoughts. - Jmabel ! talk 02:41, 12 November 2025 (UTC)
- Probably not. Judging by v:WikiJournal Preprints#Articles currently in review (which includes some unacknowledged submissions as much as five or six years old), the project is largely inactive, and directing users there would not be helpful. Omphalographer (talk) 03:01, 12 November 2025 (UTC)
- @Omphalographer and Jmabel: Despite the unprocessed older submissions, the project is active and considering incoming submissions.
- @Marshallsumter and OhanaUnited: can you speak more about WikiJournal? How much capacity is there to review incoming preprint submissions? Bluerasberry (talk) 03:20, 12 November 2025 (UTC)
- Regarding specifically the WikiJournal of Science, the capacity is small but close to the submission rate. For example, the latest is Diffeology and I am currently looking for reviewers. --Marshallsumter (talk) 08:07, 12 November 2025 (UTC)
should we mention that original academic papers may be submitted to Wikiversity:WikiJournal of Humanities?
No, I don't think so. Those are only a very few files, and this would make the page unnecessarily long and more complicated. It doesn't matter whether users submit, say, 5 such PDF files per year to Commons or not, nor whether they also submitted them to, or would have done better to submit them to, the WikiJournal of Humanities. Prototyperspective (talk) 22:40, 12 November 2025 (UTC)
Thanks for the ping. I would say it's not necessary at the current stage to explicitly state it in the Commons scope. Our current preprint processing instructs authors of original academic papers to create a preprint page directly on the wiki or to submit by email. OhanaUnitedTalk page 20:44, 13 November 2025 (UTC)
Certain content is excluded from Commons
This is confusing: "Certain content is excluded from Commons … Files that contain nothing educational other than raw text. Purely textual material such as plain-text versions of recipes, lists of instructions, poetry, fiction, quotations, dictionary definitions, lesson plans or classroom material, and the like are better hosted elsewhere, for example at Wikibooks, Wikiquote, Wiktionary, Wikiversity or Wikisource." We are required to host the original document at Commons to be used in the other projects, so why are we saying they must be deleted? It sounds like newspaper articles and books without illustrations must be deleted because they are raw text. RAN (talk) 04:13, 9 December 2025 (UTC)
- Not sure how to word it better, and it might be made clearer that there are certain exceptions (copies of legitimately published books or peer-reviewed academic papers, for example). It's basically meant to say, "No, you don't get to use Commons as a way to publish your original writing just because it is arguably educational," and "No, you don't get to write your own divergent version of a Wikipedia article and publish it here," etc. - Jmabel ! talk 04:42, 9 December 2025 (UTC)
- Source documents may be in scope; content created by Wikimedia users is generally not. The overall intent is that Commons uploads shouldn't be used to bypass the wiki editing process (e.g. writing an encyclopedia article and publishing it to Commons as a PDF), or as a back-door means of publishing content which would otherwise not be in scope on any Wikimedia project (like works of fiction). We adjusted this wording a few years ago at /Archive 2#Proposed change in wording; if you can come up with a better way to explain the distinction, we'd be interested to hear it. Omphalographer (talk) 04:55, 9 December 2025 (UTC)
I'd like to point out a potential conflict between current COM:INUSE rules and the proposed COM:AIP guideline. Specifically, certain images that are currently 'in use' on other projects may fail to meet the criteria set forth in the COM:AIP proposal, creating a contradiction in our deletion process.
Therefore, I propose appending the following rule to the COM:INUSE section to exclude those images once that guideline is ratified.
- Images of people created by Generative AI that do not comply with the relevant guideline.
0x0a (talk) 09:51, 29 December 2025 (UTC)
- Is scope the right place or should that note be on COM:DIGNITY? GPSLeo (talk) 10:47, 29 December 2025 (UTC)
- Yes, of course; this revision is intended to patch INUSE, and in fact we have been patching it. 0x0a (talk) 07:49, 3 January 2026 (UTC)
Oppose COM:INUSE applies regardless of the production method. Furthermore, COM:DIGNITY does not say anything about, for example, neutral depictions of ancient famous people. COM:AIP does not overrule COM:INUSE. The policy section that needs a change is COM:NOTCENSORED. Prototyperspective (talk) 10:08, 4 January 2026 (UTC)
Strong support - Of course, that's the whole point and likely main purpose of COM:AIP. Regards, Grand-Duc (talk) 23:43, 6 January 2026 (UTC)
- You would then need to change COM:NOTCENSORED even more clearly, because Commons would then be making decisions for other projects, whereas so far one of the most important principles of Commons has been that if a file is used in a Wikimedia project, it's considered within scope, and Commons users don't make editorial decisions for other projects. Prototyperspective (talk) 00:22, 7 January 2026 (UTC)
- Nope. Stating and enforcing that some kind of media doesn't fall within the hosting purview, the scope of Commons is not censorship. NOTCENSORED already has a fitting line: "However, the statement "Wikimedia Commons is not censored" is not a valid argument for keeping a file that falls outside the normal permitted Wikimedia Commons scope." AIP only clarifies that AI-generated media of real people does fall "outside the normal permitted Wikimedia Commons scope". Regards, Grand-Duc (talk) 01:10, 7 January 2026 (UTC)
- The line you cited is not about what I wrote about above. I'm not arguing files should be kept because Commons is not censored. I was saying one can't have a title "Wikimedia Commons is not censored" and then have lots of policies under which people carry out extensive, indiscriminate deletion of swaths of objectionable content. Thus the section title needs to be changed for accuracy [in my view] so as to not be false or misleading [in my view].
Maybe we have different understandings of some concepts or terms, which is not a large problem; people can sometimes disagree. I was reading the recommendable Wikipedia article about the subject, so I was relating to the definition in that article's lead. Whether or how that policy page is changed is not the main subject here though – as said, Commons so far had a policy pillar that basically said Commons users don't get to editorialize other Wikimedia projects and that files in use in other projects are kept. This policy is important for many reasons, for example because if it's not upheld, users from other projects may stop uploading files here and instead upload them locally, because they can't feel sure anymore that the files will remain here. This principle and policy is very important, and whether or not it remains standing relates to COM:NOTCENSORED, as deleting in-use files goes a step further than deleting a whole type of content: the practice of users making this or that specific exception to it – exceptions for media types/contents, not for principles like e.g. low quality – starting with the AIP one, is of major relevance. Prototyperspective (talk) 01:45, 7 January 2026 (UTC)
- Well, in time of a single-user login over all projects (and file renaming tools that make your account edit in lots of individual projects), the premise of a different set of Commons users who "don't get to editorialize other Wikimedia projects" is, in my opinion, flawed anyway. We're all Wikimedians with the standing authorisation to participate in any project. Regards, Grand-Duc (talk) 02:41, 7 January 2026 (UTC)
- +1. It should not be controversial to suggest that content on any Wikimedia project - including files on Commons - should meet a certain basic level of educational value and factual accuracy. AI-generated images frequently fail to meet this standard - particularly ones which appear to be a factual depiction of something, but which are actually not. Omphalographer (talk) 00:12, 23 February 2026 (UTC)
- Paintings also frequently fail to meet various standards but we still have paintings. Whether or not this or that applies often doesn't matter that much because nobody is arguing we should indiscriminately keep all of them and DRs are always an option.
particularly ones which appear to be a factual depiction of something, but which are actually not
agree ("appear to be" I think means 'claim to be or are presented as if…'). Prototyperspective (talk) 00:19, 23 February 2026 (UTC)
- We do actually delete paintings and other pieces of artwork as out of scope on a regular basis, particularly when it's amateur artwork created by users. (See e.g. Commons:Deletion requests/Files uploaded by Brandymodel, Commons:Deletion requests/Files uploaded by Sravanthi kokkula, Commons:Deletion requests/Files uploaded by Jonathan Garrett, etc.) It's not the exception you think it is. Omphalographer (talk) 00:51, 23 February 2026 (UTC)
- I was illustrating that just because some type of media in your opinion often fails some standards doesn't mean it always does. From watching hundreds of art images and creating categories for user-made art etc., I know well that paintings are only rarely deleted, and when they are, it's useless ones. But let's take another example:
- It should not be controversial to suggest that content on any Wikimedia project - including files on Commons - should meet a certain basic level of educational value and factual accuracy. Amateur photos frequently fail to meet this standard. [insert missing reasoning here] Additionally, artistic files which appear to be a factual depiction of something, but which are actually not, are not useful. Prototyperspective (talk) 01:34, 23 February 2026 (UTC)
- Ok, explain how. Mr.Besya (talk) 01:16, 23 February 2026 (UTC)
Support per Grand-Duc. It's moon (talk) 01:51, 7 January 2026 (UTC)
- I think this is premature. What if we want to modify the current AIP proposal and then pass it? Won't we have to re-do this vote after that? (And conversely, aren't we going to be biased towards the binary choice of accepting or rejecting it, hindering the possibility of a potentially better, modified proposal?) whym (talk) 10:44, 7 January 2026 (UTC)
- I find this a tricky question. I've argued in the past that there's room for Commons' users to evaluate what this guideline refers to (ambiguously) as "legitimately in use". Given the thorny issues addressed by the ai images of people guideline, maybe it's reasonable to ask whether the in-use project in question has relevant guidelines for such images at all, and consider the use legitimate if there's demonstrable consensus that such images are permissible. It's unfortunate but not unrealistic to think that some projects may willingly embrace slop, but I'd feel more comfortable if that were demonstrated first. There's an old argument about whether Commons should be in the habit of questioning any use at all, but we've seen in the past examples of inuse files being deleted for various non-copyright reasons. I don't know the right answer. — Rhododendrites talk | 21:21, 22 February 2026 (UTC)
- "some projects may willingly embrace slop" Why "slop"? A project may be open to having some AI images of identifiable people such as long-dead people and that's not something indiscriminately unreasonable. Prototyperspective (talk) 21:25, 22 February 2026 (UTC)
- Using an extreme position for the sake of argument. Some projects may embrace all (or nearly all) AI-generated content, and we can't control that. Others may have more nuanced rules. — Rhododendrites talk | 14:19, 24 February 2026 (UTC)
- Has anybody ever pulled together that information, on the stances that different projects take on AI content? Outside of enwiki it's never been clear to me whether INUSE cases of AI images are a sign that the wiki endorses them, or just hasn't noticed them yet. Belbury (talk) 18:50, 28 February 2026 (UTC)
- Inclusion of an AI image does not imply or require the wiki to endorse it – it may mean that they don't reject them all on that basis. One can let relevant editors of the project know about the image, which is especially reasonable when it's an article with few views/watchers. This can be done by pinging article authors and/or making a talk page post or by asking about it at a discussion site of that platform. Commons so far hasn't really interfered with editorial decisions of other projects and I don't think doing so without at least sufficiently involving relevant participants of these projects in an inclusive manner is a good road to take. Prototyperspective (talk) 18:58, 28 February 2026 (UTC)
- It might be good to have a table somewhere with an overview for all projects, like we have a table for FoP regulations for all countries. I know that ruwiki also doesn't accept them: there's no explicit guideline but I recently asked about AI images on the ruwiki equivalent of the Village Pump, which started a long discussion with a pretty clear consensus against AI images (with maybe only 1-2 outliers who saw some potential use cases, but everyone else disagreed with them even on those use cases). Nakonana (talk) 10:44, 1 March 2026 (UTC)
Support per proposal. Redmin (talk) 12:43, 23 February 2026 (UTC)
- This proposal targets AI slop, but is missing the core issue. Granted, the core issue was hardly ever a problem before AI slop, because nobody was fooled by a painting and convincing CGI was time-consuming.
If any project wants to create/use a painting, cartoon or w:cosplay depiction of an ancient famous person and the media is clearly identifiable as such without reading the description, meh. That's their business. May work if done tastefully, or for a children's history book or something. The infobox image for w:Cleopatra is a sculpture. That's fine. And if some project would wish to use AI-generated photorealistic Cleopatra in front of a server rack, meh. It's obviously not real.
Plausible photorealistic hallucinations on the other hand? No thank you. Those don't just fail COM:EDUCATIONAL, they actively corrupt knowledge. This is true regardless of whether a person is depicted or not. Imagine an AI-generated image of the w:Kallanai Dam as it looked right after it was built about 1900 years ago; you'd face the same problems. @0x0a, would you mind creating a proposal that targets all media, AI-generated or not, regardless of what it depicts, that is misleading and actively corrupting knowledge? They m:vanished. - Alexis Jazz ping plz 15:12, 25 February 2026 (UTC)
May work if done tastefully, or for a children's history book or something. The infobox image for w:Cleopatra is a sculpture. That's fine. And if some project would wish to use AI-generated photorealistic Cleopatra in front of a server rack, meh. It's obviously not real.
Exactly.
Plausible photorealistic hallucinations on the other hand?
Hallucinations are obviously not useful. It gets more useful when carefully prompted to look exactly like one wanted it to, especially if e.g. forensic studies or lots of sculptures from the time are available to use for that. A plausible photo-realistic image of Cleopatra is obviously not real, since there were no photo cameras back then. The production method could also be in the file title and caption. Please read COM:EDUCATIONAL and see educational documentaries and podcasts that already show nonreal imagery of ancient people to better understand that this is not just a realistic educational use case but an already-realized educational use case. Prototyperspective (talk) 13:15, 26 February 2026 (UTC)
- Prototyperspective, there are various sculptures of Cleopatra, and big AI has no doubt trained on them. When I asked ChatGPT, it gave me two versions of Cleopatra and asked me to select the best one. In the other one, her head was closer to a sphere.
It gets more useful when carefully prompted to look exactly like one wanted it to
When I asked Google's banana for photorealistic Cleopatra, it drew a Fortnite character. When I explained that's not photorealism, it added some details and shadows, making it look like a Fortnite character but with "RTX ON". Hallucinating is what AI does. Any detail you didn't describe is hallucinated. If you can describe it in sufficient detail to get almost-acceptable output, you can probably draw her yourself. Use an AI-generated sketch (or a sculpture) as an outline if you're having trouble with perspective.
The production method could also be in the file title and caption.
These are frequently lost when files are re-used. - Alexis Jazz ping plz 01:48, 27 February 2026 (UTC)
- Yes, describing in detail and/or using a sketch and/or using input image(s) is needed to get a good-quality output; I never said anything else.
you can probably draw her yourself
1. False. 2. Speculation. 3. It's not about whether one could, but whether people a) did and b) licensed it in a compatible way. But most importantly, it's false and irrelevant.
These are frequently lost when files are re-used.
so people think the image is a photo when it's a thousands-of-years-old person? Are there any other media on Commons that may get re-used in ways you don't like, say video of sexual intercourse, murder, anime, and other content that is available here more plentifully than the few educationally valuable images of a type that's already used in educational podcasts and documentaries? With educational innocuous media censored, there is no way in 30 years we'll still be relatively free of censorship; COM:NOTCENSORED is already written inaccurately now. Prototyperspective (talk) 11:39, 27 February 2026 (UTC)
- Prototyperspective,
false
When you are forced to describe something in excruciating detail, drawing it yourself may really be easier. Even if you suck at drawing. At least humans understand context.
so people think the image is a photo when it's a thousands-of-years-old person?
Who knows what people think when provenance is lost? What if it's w:Elephant man? Do you expect teenagers to know when color photography was invented?
Are there any other media on Commons that may get re-used in ways you don't like, say video of sexual intercourse, murder, anime
This has nothing to do with what I like. If educational media of sexual intercourse gets distributed on Pornhub it doesn't suddenly start to corrupt knowledge. If media of a murder is posted on social media for clout and clicks it doesn't corrupt knowledge.
In such cases, corrupting knowledge requires malice. Adding false captions or context. Merely losing context doesn't corrupt knowledge. With AI slop, knowledge can easily be corrupted and no malice is required. Simply losing the context is enough. We delete an image, for whatever reason. A copy survives on Pinterest. No caption, no filename, no templates. Just the image. It gets copied to Fandom. Some local media or blog publishes it, and we copy it from them. Laundered. - Alexis Jazz ping plz 00:20, 1 March 2026 (UTC)
may really be easier
false in many or most cases
excruciating detail
if you don't know much about prompting, I recommend not being very involved in enforcing your views onto the world. This is not how prompting works, it's not "excruciating", and it's in any case easier to do.
At least humans understand context.
same for this. If you don't understand how something works and is used, then please don't act like everybody has to be ruled by your restrictive rules. Understand it first and then bring nuanced, informed suggestions to a debate. Humans use AIs. Humans understand context. Hopefully this is clear enough.
Do you expect teenagers to know when color photography was invented?
this is absurd; they know it was not there thousands of years ago, or 300 years ago.
If educational media of sexual intercourse gets distributed on Pornhub it doesn't suddenly start to corrupt knowledge. If media of a murder is posted on social media for clout and clicks it doesn't corrupt knowledge.
I was talking exclusively about files on Commons.
Simply losing the context is enough
people put the info into file titles, file descriptions, and file captions where it's used; if they don't, then that's either a violation of policy or could be required.
With AI slop, knowledge can easily be corrupted and no malice is required.
With overly undifferentiated and heavy-handed knee-jerk reactions to a novel kind of media production method, people are corrupting free knowledge: they are dismantling core principles and rejecting a novel type of production that gets increasingly used throughout society, particularly by people and organizations that are not in the top 1% of privilege and that use budgets efficiently. The free knowledge ecosystem thus doesn't get any of the large benefits and only experiences the downsides, further entrenching echo chambers and confirmation bias.
Some local media or blog publishes it, and we copy it from them. Laundered.
Not sure what you're talking about. For example, let's talk about works prompt-engineered by the uploaders. Also, the context is not lost, and such copied files are very rare and can still be deleted on the grounds of not being useful etc. Prototyperspective (talk) 12:22, 1 March 2026 (UTC)
Support per Grand-Duc. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:07, 26 February 2026 (UTC)
- Is there an example of an existing file that would be deleted due to this, or, similarly, an already-deleted file? What does a borderline case that would not be deleted look like? Those examples will help us discuss more concretely. It appears that 0x0a (the proposer) vanished. Anyone else? whym (talk) 01:03, 28 February 2026 (UTC)
- One example would be File:Vladimir Putin with monkey (1173814355247235082).png. This image is currently in use on Wikibooks, but clearly violates COM:AIIP and is probably a COM:DIGNITY violation as well. Omphalographer (talk) 01:27, 28 February 2026 (UTC)
- We don't consider the dignity of the monkey. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:50, 28 February 2026 (UTC)
Support per proposal. --ReneeWrites (talk) 15:33, 1 March 2026 (UTC)
Just a note that it looks like we had two opposite DRs closed today, relevant to this discussion. The Squirrel Conspiracy closed Commons:Deletion_requests/Files_in_Category:AI_artwork_of_historical_figures_by_Netha_Hussain as "AIP explicitly overrules INUSE", and Abzeronow closed Commons:Deletion_requests/Files_on_AI_art_caricatures_and_public_characters_in_AI_art as "AIP doesn't currently override the IN USE policy". :) This is not a challenge to either one, but a suggestion that perhaps they should be reopened until this discussion plays out. — Rhododendrites talk | 15:18, 1 March 2026 (UTC)
- Oh, so now these deletion requests are already closed; I intended to point them out here as current examples of why this needs clarification. Note that Commons:Deletion requests/Files in Category:AI artwork of historical figures by Netha Hussain covers two DRs: the original one filed by me in April 2025, when I excluded any file that was in use at the time in order to honor COM:INUSE, and as we didn't have the AIP guideline back then; and the recent one by Dronebogus, who nominated the remaining files in the category "AI artwork of historical figures by Netha Hussain" although most of them were still in use at the point of nomination. An example of a file that was still quite widely in use before deletion was File:Alan Turing in watercolour.png, in Spanish Wikipedia and in other language versions, specifically as an example of AI-generated art in appropriate articles. Although I'm very skeptical regarding AI-generated art and would consider most of it slop that should be deleted, I never had anything against using a small selection of such files for purposes such as articles about AI. The only argument against hosting the Turing image in this case is AIP, for it was in legitimate encyclopedic use otherwise. My stance on the proposal is, I think, Neutral, as I see a strong argument from both sides. On the one hand, I don't want heaps of AI-generated images of real people here, and would also frown upon projects that use them liberally in contexts that have nothing to do with AI. On the other hand, I think that the INUSE policy is very important and we basically should never overrule other projects, also from a practical point of view: if we start overruling other projects on a larger scale due to Commons-local policies such as AIP, they will start hosting more and more images locally in their projects, which defies one important purpose of Commons and its original goal, to serve as a common media repository for Wikimedia projects. Gestumblindi (talk) 16:38, 1 March 2026 (UTC)
- I completely agree, there has to be some consistency on how policy is enforced. It's moon (talk) 16:38, 1 March 2026 (UTC)
Support per proposal. If other projects don’t like it they can host this AI slop locally. In fact, why not just make this a speedy deletion rationale? --Dronebogus (talk) 20:17, 1 March 2026 (UTC)
- Some projects do not host files locally (see m:List of Wikipedias having zero local media files); this would essentially impose that on them. Abzeronow (talk) 04:30, 2 March 2026 (UTC)
- Yes, I can see that there is a conflict. It doesn't help that some of the same files were nominated in both DRs, so both TSC and I could also only see half of the discussion around the affected files (consensus in mine was to keep, and Dronebogus's nomination felt like a test case to me). TSC's opinion on this matter may prevail via this current discussion, and I personally don't disagree much with TSC on AI. But COM:INUSE is an important policy, and behind it is a principle that Commons supports other projects; we don't dictate. Only for legal matters such as copyright and country-specific consent laws on photography do we use the power to delete. I can see a use case for a guideline or policy in which photorealistic depictions of real people are treated the same as photography. I ruled the way I did because I followed the consensus and felt on balance that COM:INUSE at this moment in time outweighs COM:AIP, as we are still figuring out the exact contours of the guideline. But I will follow our community's wishes, and if the community says that AIP can overrule INUSE, then that is what I shall follow. Abzeronow (talk) 04:10, 2 March 2026 (UTC)
- To add a third comment-not-a-vote, while I think the AIP carve-out would need to be added to this page at some point, it's worth resolving the open questions at AIP first. For example, using Glamorous to see which AI-generated images of people are actually in use anywhere, I see a lot of historical figures. That was one of the areas of the guideline proposal that some folks wanted to exempt from the guideline. There wasn't enough discussion of it there to justify delaying promotion to guideline status as-is, but it is something worth resolving one way or the other. In other words, anyone who wants to exempt long-dead figures may want to get that proposal underway before this concludes. — Rhododendrites talk | 15:25, 2 March 2026 (UTC)