Google is adding new disclosures to its AI photos, but they’re still not clear at first glance

Starting next week, Google Photos will add a new disclosure when a photo is edited using an AI feature, such as Magic Editor, Magic Eraser, and Zoom Enhance. When you click on a photo in Google Photos, there will now be a disclosure when you scroll to the bottom of the Details section, indicating when the photo was “edited” with Google AI.

Google says it’s making this change to “further improve transparency,” but it’s still not obvious at a glance when an image has been edited by AI. There are still no visible watermarks within the photo frame indicating that a photo was altered with artificial intelligence. If someone sees a photo edited by Google’s AI on social media, in a text message, or even while scrolling through their own Photos app, they won’t immediately see that the photo is fake.

Google’s new AI edit disclosures in Google Photos. (Image credit: Google)

Google announced the new AI image disclosures in a blog post on Thursday, just over two months after it unveiled the new Pixel 9 phones, packed with AI-powered photo editing features. The disclosures appear to be a response to the backlash Google received for widely shipping these AI tools without any visible watermarks that humans can easily read.

As for Best Take and Add Me — Google’s other new photo editing features, which don’t use generative AI — Google Photos will now also note in metadata that those photos have been edited, though not under the Details tab. These features combine multiple images so they appear as one clean shot.

These new tags don’t exactly solve the main problem people have with Google’s AI editing features: the lack of visible watermarks within the photo frame, at least ones you can see at a glance. Visible watermarks might keep people from feeling deceived, but Google isn’t adding them.

Every photo edited by Google AI already discloses the AI edit in the image’s metadata. Now, there’s also an easy-to-find disclosure under the Details tab in Google Photos. The problem is that most people don’t check the metadata or the Details tab of photos they see online. They just glance and move on, without investigating further.

To be fair, watermarks visible within the AI image frame aren’t an ideal solution either. People can easily crop or edit out these watermarks, and then we’re back to square one. We reached out to Google to ask whether it’s doing anything to help people instantly identify photos edited by Google AI, but we didn’t immediately hear back.

The proliferation of Google’s AI-powered photo tools could increase the amount of synthetic content people see online, making it harder to distinguish what’s real from what’s fake. Google’s approach, using metadata watermarks, relies on platforms to indicate to users that they’re viewing AI-generated content. Meta is already doing this on Facebook and Instagram, and Google says it plans to flag AI images in Search later this year. But other platforms have been slower to catch up.
