Wikipedia talk:Image use policy


 You are invited to join the discussion at WT:CSD § F8 and keep local. -- Marchjuly (talk) 03:02, 12 July 2024 (UTC)[reply]

Privacy protection


Hi, I have a question. Is there any policy that protects a person's photo privacy (for an image found in their article/page)? I.e. Elizabeth is a former actress who worked in the adult industry. On her article, the infobox has an image showing cleavage. Elizabeth, however, no longer works in the adult industry and regrets having done so, and finds the image inappropriate; considering that many users read her wiki page every day, she believes it's a breach of her privacy. She wants to replace it with another picture, or to have no picture at all. Is there any Wikipedia policy/guideline that allows replacing the image and protects her rights? 2A00:23EE:10B0:7735:A55E:46DC:804A:949F (talk) 21:26, 24 July 2024 (UTC)[reply]

WP:Not censored and WP:Outing would be relevant. Wikipedia is timeless, so if something is of encyclopedic interest then it remains so. But if a better image is available then that could be the lead image. Images can only be used if appropriate permission was supplied, and that means that the right to "privacy" is forgone. Some rights may be retained by law, e.g. the right not to be used in an advertisement. Graeme Bartlett (talk) 07:45, 25 July 2024 (UTC)[reply]

Donald J. Harris


Donald J. Harris is the father of Kamala Harris. Pictures of him seem hard to find. There is one proposed in this discussion on his article's talk page. It is from the Stanford Economics Department. I'm guessing Stanford has the copyright. How does one verify this? If it is copyrighted, what are our options? How long can fair use be employed? Any help with these questions or with finding a public domain image of Donald Harris will be greatly appreciated. Fowler&fowler«Talk» 02:19, 2 August 2024 (UTC)[reply]

The page on which that image appears has a Stanford University copyright notice, so lacking any other information we assume that to apply to the image - you could certainly ask them directly to verify, or check whether the image has been published elsewhere with other details. Generally speaking, fair use wouldn't be possible for an image of a living person since (in theory) a free image could be created at any time. (Given that his daughter is a US federal officeholder, possibly he appeared at an event where he might have been photographed by a federal employee?) Nikkimaria (talk) 02:33, 2 August 2024 (UTC)[reply]
Thanks for this great reply Nikkimaria. Sadly, I don't think father and daughter are close. They might even be estranged. He seems to have a low public profile, even though he was a Stanford University professor. But maybe if she triumphs in November, they'll make up and the possibilities thereafter will skyrocket. Thanks again. Fowler&fowler«Talk» 02:54, 2 August 2024 (UTC)[reply]
PS I take back some of what I said above. Donald J. Harris has written a warm family memoir here, which has more recent family pictures, but sadly all copyrighted. Fowler&fowler«Talk» 03:17, 2 August 2024 (UTC)[reply]
Hello again, @Nikkimaria: The Lawrence Berkeley National Lab is a Federal lab. Is the picture in this newsletter public domain? Thanks. Fowler&fowler«Talk» 17:06, 6 August 2024 (UTC)[reply]
Not KH's father, but a better picture of her mother. Fowler&fowler«Talk» 17:07, 6 August 2024 (UTC)[reply]
The lab is federally funded but is not a federal agency. The photo is copyrighted and the credit specifically indicates a copyright held by the university regents. Whpq (talk) 18:29, 6 August 2024 (UTC)[reply]
Sometimes, if you contact the person or employer, they will provide a suitably licensed photo. WhatamIdoing (talk) 18:36, 6 August 2024 (UTC)[reply]

AI-generated images of people


I came across this 2023 archived discussion about AI-generated images, but it doesn't appear to have reached a clear-cut conclusion, and it also appears to have mainly focused on the copyright status of such works. There are encyclopedic concerns as well, particularly when it comes to AI images of people. I occasionally come across what seem to be AI-generated images of people while looking at Special:NewFiles: some recent examples are File:Bafaki Tangal.jpg, File:P. M. S. A. Pukkoya Tangal.jpg, File:Km sahib.jpeg, File:Sayyid ummar bafaqi.jpeg and File:K. Uppi Saheb 1.jpg. To be honest, I don't know for sure whether these are AI images, but they don't seem like paintings, drawings or photographs. From a copyright standpoint, these could be problems per c:COM:BASEDONPHOTO if they were created based on an old photo or something; however, even if they're 100% original, they might be too original per WP:IMAGEOR. The question I have is whether such images are encyclopedically OK to use even if their copyright is not a problem. FWIW, there is a little on AI images in WP:AI#Images and Commons and much more in c:COM:AI, but these too seem more focused on copyright-related issues than encyclopedic use; of course, copyright is what Commons is more concerned with, which is why it might be a good idea to have encyclopedic concerns covered a bit more locally here on Wikipedia. The two images I recently came across are of deceased persons, and it's possible non-free images could be used per WP:NFCCP; that, however, could be affected if freely licensed AI-generated images are considered to be a reasonable free alternative to non-free images. Furthermore, freely licensed AI-generated images could possibly be argued to even be OK to use in BLPs, but that might cause issues with WP:BLPIMAGE. Pinging Masem and SMcCandlish since they participated in the archived discussion mentioned above, but feedback from others would be appreciated too.
-- Marchjuly (talk) 01:24, 8 October 2024 (UTC)[reply]

I think it's trivially obvious that no AI-generated image has any encyclopedic value whatsoever. This should be enshrined in policy. Encyclopedic value is a function of human discernment in collation, preparation, and representation. AI is fundamentally incapable of such discernment; as AI is functionally a black box, humans purporting to mediate its output are also incapable of such discernment. Remsense ‥  01:25, 8 October 2024 (UTC)[reply]
I know the scope of this post is intentionally narrower, pertaining to images of people. However, I do not see a distinction worth making here. Remsense ‥  01:43, 8 October 2024 (UTC)[reply]
I could see some value in using AI-generated images in articles about AI-generated images or maybe in articles about art, but I'm not so sure there's much value in using them in biographies, except perhaps as an example of someone's art or perhaps in cases where the image is controversial and the subject of critical commentary in reliable sources. Given that so many biographies (not only BLPs) seem to lack even an image for primary identification purposes, the temptation to create one using AI could be too much for some to resist. Moreover, some non-free images might be of such poor quality (File:K. Uppi Saheb 1.jpg was actually overwriting File:K. Uppi Saheb.jpg and needed to be split, and File:Bafaki Tangal.jpg is another overwritten file in need of a split) that it's tempting to replace them with "better"-looking AI images. The question is whether such a thing is good from an encyclopedic standpoint. If the consensus is that it's not, then I agree such a thing should be clearly stated in relevant policy pages. -- Marchjuly (talk) 01:47, 8 October 2024 (UTC)[reply]
If we have free non-generated images we should use those. If we do not, then works derived from non-free images, whether created by AI or by a human artist, are likely problematic with respect to the copyright of the images they were derived from. —David Eppstein (talk) 01:54, 8 October 2024 (UTC)[reply]
Yes, I wasn't sure how to articulate this while coming off with adequate clarity as to my position, but it's clearly reasonable to use AI-generated images as primary illustrations of the generation itself. Remsense ‥  01:58, 8 October 2024 (UTC)[reply]
Zero allowance for AI-generated images that are meant to depict people, living or dead. Within contexts where AI images would be allowed and where the image needs to show one or more humans, they should be the clearly generic human figures that AI is known to generate, and if the image edges on recognizability, an alternative image should be sought. — Masem (t) 02:16, 8 October 2024 (UTC)[reply]
@Masem: Does it already say that somewhere in IUP or some other policy page? Is there a fairly accurate way of determining whether an image is AI-generated? I'm a total newbie when it comes to them and only notice when the image seems unnatural for some reason. I'd imagine that some are quite skilled at creating such images so that they can be really hard to detect, unlike the ones I mentioned above (which really seemed odd to me). -- Marchjuly (talk) 02:25, 8 October 2024 (UTC)[reply]
Sorry, I should have said this is my opinion, if we don't have advice anywhere on this. — Masem (t) 02:30, 8 October 2024 (UTC)[reply]
I think this is an argument for expecting the sourcing and processing to be clearly stated on the file page for media used in articles. I've seen people point to the frequent absence of this necessary documentation on Commons as a pragmatic argument that we can't truly expect WP:V to always apply. I personally won't apologize if I'm doing GAN or peer review for an article where I have to insist on removing media with incomplete or unclear documentation. Remsense ‥  02:32, 8 October 2024 (UTC)[reply]
Yeah, I think if we are allowing AI images (and in such cases, starting with how Commons does it), it should be where the uploader has otherwise been in control of the image generation route so we know what the prompts were, what AI engine was used, etc.
But again, that's for sufficiently generic images. When it comes to any AI image that tries to produce images of known persons, that should be an area we avoid with a ten-foot pole due to the potential issues with accuracy, representation, etc. Masem (t) 03:04, 8 October 2024 (UTC)[reply]
I find myself in concurrence with all of the concerns raised above about use of AI-generated (or significantly AI-altered) images of people, outside the context of encyclopedic coverage of what AI imagery is and what controversies surround it. If we're using AI fakery to represent biographical subjects, then we are making a mistake. As for identifying them, there are sites now that analyze images and can identify AI-generated ones with a 95%+ accuracy rate, so I'm told, though this is not something I have looked at closely.  — SMcCandlish ¢ 😼  05:53, 8 October 2024 (UTC)[reply]
Anyone uploading an image like that needs one warning and then an indefinite block. At the moment, the couple of samples I've seen are hideous (example from above). However, even if that problem were overcome, the idea that faked photos could be used because someone thinks they are ok shows a clear WP:CIR problem. Johnuniq (talk) 06:13, 8 October 2024 (UTC)[reply]
That's what makes me confident that expecting clear documentation for images used in articles is adequate to address this problem on the article front, or at least to address it as well as we address copyvio in prose. Editors who do this are almost always inexperienced or incompetent, and scandals where a regular contributor in good standing is found to be fabricating this documentation are likely to be exceedingly rare. Remsense ‥  06:18, 8 October 2024 (UTC)[reply]
FWIW, I didn't start this discussion in the hope that someone would drop the hammer on one particular uploader, but rather to see whether this kind of thing had been discussed before. I'm assuming the uploader of the files I used as reference is just unfamiliar with such things and was acting in good faith. They were overwriting existing files with some of their uploads, which is also a problem, but again I think this was just a new user not familiar with such stuff. I used their uploads for reference because they're ones I recently came across in Special:NewFiles, but I've seen such images before and always wondered about them. Such images looked odd to me, but I wasn't sure how to tell whether they were AI-generated. I imagine these types of images are going to become more and more common on Wikipedia since they seem to have become more and more common out in the real world. So, if there's no existing policy like IUP or BLP that specifically deals with them, then perhaps it would be a good idea to discuss one. If a consensus is established and there's a way to technically detect them, then existing images could be taken care of and perhaps steps could be implemented to prevent/discourage future uploads. -- Marchjuly (talk) 07:38, 8 October 2024 (UTC)[reply]
I think BLP at least in spirit covers it. Fundamentally, an AI-generated image misrepresents a real, living person, showing them in a way they never actually looked like. If it's not acceptable to make misrepresentations about living people in text, it certainly should not be acceptable to do it in images either. Seraphimblade Talk to me 11:09, 8 October 2024 (UTC)[reply]