- Meta’s new AI-generated photo labels are misidentifying real photos, causing frustration among photographers.
- The confusion stems from photos with minor edits made in standard photo tools being flagged as AI-generated content.
- Meta is reassessing its labeling process to improve accuracy and better reflect the actual use of AI in photos.
Meta’s new policy of labeling AI-generated photos has sparked controversy as many real photos are being incorrectly tagged.
Since February, Meta has implemented labels on Facebook, Instagram, and Threads to identify photos made with AI tools.
However, the system is mislabeling many genuine photos, leading to widespread frustration among photographers.
For instance, a photo of the Kolkata Knight Riders celebrating their IPL win was wrongly tagged as “Made with AI” on Instagram.
The label appears only in the mobile apps, not on the web version. Numerous photographers have voiced concerns, arguing that minor edits made with standard photo editing tools should not cause an image to be labeled as AI-generated.
Former White House photographer Pete Souza shared his frustration after one of his photos was mislabeled.
According to Souza, simple actions such as cropping an image and flattening it in Adobe software appear to trigger Meta's algorithm to flag it as AI-created. Despite his efforts to remove the label, it remained attached to his photo.
Meta has not provided clear answers regarding these incidents. However, the company has stated that they are reassessing their labeling process to better reflect the actual use of AI in images.
A Meta spokesperson emphasized that their goal is to help users identify AI-generated content accurately and that they are taking feedback seriously to improve the system.
The confusion arises from Meta’s reliance on image metadata to apply these labels. The company uses metadata standards like C2PA and IPTC to identify AI-generated content from tools provided by major companies such as Google, Adobe, and Microsoft.
According to reports, even minor uses of AI tools, such as Adobe's Generative Fill to remove objects, can lead to photos being mislabeled.
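Those standards work by embedding provenance signals in a file's metadata. As a rough illustration only (Meta has not published its actual implementation), a checker might scan a photo's embedded XMP for the IPTC "Digital Source Type" values that generative tools write, such as trainedAlgorithmicMedia for fully generated images and compositeWithTrainedAlgorithmicMedia for photos touched up with generative features. The Python sketch below assumes exactly that and simply searches the raw bytes rather than parsing XMP or verifying C2PA manifests.

```python
import sys

# IPTC "Digital Source Type" values that generative tools can embed in XMP metadata.
# The URIs are real IPTC NewsCodes; whether and how Meta keys on them is an
# assumption made here purely for illustration.
AI_EDITED = b"http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia"
FULLY_GENERATED = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"


def classify(path: str) -> str:
    """Crude provenance check: scan the raw file for IPTC digital-source-type
    markers. A production system would parse the XMP packet properly and
    validate C2PA manifests instead of searching bytes."""
    with open(path, "rb") as f:
        data = f.read()
    if AI_EDITED in data:        # e.g. a photo retouched with a generative fill tool
        return "AI-edited composite"
    if FULLY_GENERATED in data:  # e.g. an image produced entirely by a model
        return "fully AI-generated"
    return "no AI provenance marker found"


if __name__ == "__main__":
    for photo in sys.argv[1:]:
        print(photo, "->", classify(photo))
```

Under that assumption, a photo that was only lightly retouched with a generative tool could carry the same kind of marker as a purely synthetic image, which is consistent with the blanket "Made with AI" labels photographers are describing.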
While Meta hasn’t clarified the exact criteria for applying the “Made with AI” label, some photographers support the initiative, arguing that any AI involvement should be disclosed.
Meta says it is working with other companies to refine its labeling process so that the labels align with its intent.
Currently, Meta does not differentiate between photos slightly edited with AI tools and those entirely created by AI. This can confuse users trying to understand the extent of AI usage in a photo.
The label on Meta’s platforms vaguely states that “Generative AI may have been used to create or edit content in this post,” wording that only appears when the label is tapped.
Despite these efforts, many photos that are clearly AI-generated remain unlabeled on Meta’s platforms. With the U.S. elections approaching, handling AI-generated content accurately is more critical than ever for social media companies.