Is it real or is it AI? For photographers, it’s nebulous

Meta and others are researching methods sophisticated enough to detect when AI has been used by just analyzing the image itself.

Last Updated : 08 June 2024, 05:20 IST
By Dave Lee

On the internet, some questions are easy to answer. “Is it cake?” — slice through it with a knife. “Will it blend?” — stick it in a machine and find out.

But: “Is this AI?” That’s a harder one.

You might not think so. Clearly, something generated using a tool such as Midjourney or OpenAI’s DALL-E should be described as “Made with AI.” In these cases, the only human effort required is dreaming up a text prompt.

But here’s a more nuanced example I’ve been thinking about. After setting off at 3:30 am one recent day, veteran photographer Matt Suess got into position, ahead of the tourist crowd, to capture the sun rising over Utah’s Canyonlands National Park. In post-production, he blended several frames to achieve the ideal level of exposure. Then, he used Adobe Photoshop’s “generative fill” function to fix a small but unseemly dust spot.

The end result was gorgeous, as his followers attested. But soon after posting on both Instagram and Threads, the image received an automated “Made with AI” label because of his use of generative fill. Suess called it frustrating. “I think that gives the casual user, a regular person, the impression the whole thing was a prompt,” he told me.

Meta Platforms Inc. put its AI labeling policy in place after its oversight board advised that users should be better informed about possibly manipulated content, even when the goal wasn't necessarily to deceive. Meta consulted "over 120 stakeholders in 34 countries in every major region of the world," wrote Meta's head of content policy, Monika Bickert. The system is primitive: it relies on self-reporting, or on metadata that photo-editing software attaches when AI is used.

Now that the system has been rolled out, some in the professional photography business feel it is heavy-handed. “The fact that Instagram, arguably photography’s most important platform, is weakening photographers’ authenticity by attaching AI tags willy-nilly is insulting and outrageous,” wrote Matt Growcoot for PetaPixel, a leading independent photography news site.

But you might be thinking: Suess did use AI to alter his image, so the label is fair game. If Suess doesn’t want the label, he shouldn’t use AI. But then, why not have a label for any kind of editing in Photoshop? Techniques to enhance, improve or otherwise fix photographs have been used for almost two centuries. It’s widely accepted that stylistic tweaks are appropriate, with the exception of most photojournalism. With AI, a fresh conversation emerges on where the line should be drawn. Suess feels the blunt label — “Made with AI” — was aggressive and misleading. I agree with him. After all, no AI yet offers Suess the capability of hiking through a national park at the crack of dawn on his behalf.

Meta is fine with its heavy hand — for now. A blog post from head of global affairs Nick Clegg earlier this year noted that the company's current approach would remain in place "through the next year, during which a number of important elections are taking place around the world." In other words, over-labeling and upsetting photographers is the lesser of two evils — the other being the risk of allowing fake images to influence elections worldwide. The mistakes made in the run-up to the 2016 US presidential election still hurt the company to this day.

In 2025, Meta can be expected to revisit and make changes to its policy, though users shouldn’t get their hopes up for a solution that pleases everyone or even works. Today’s detection methods are trivially easy to circumvent by those looking to deceive, though there is still value in providing a better way for good actors to be transparent with their audience. One solution to Suess’ complaint would be a sliding scale of disclosure, whereby some AI edits (such as removing a minor flaw) are added to metadata for those who care to investigate but do not trigger the “Made with AI” badge of shame.

A better way, one that doesn’t rely on the honesty of people on the internet (ha!), is hopefully not too far away. Meta and others are researching methods sophisticated enough to detect when AI has been used by just analyzing the image itself. Such a system could detect whether something is fully generated by AI or only partially altered, like Suess’ sunrise.

Meta is “working hard” on that approach, Clegg wrote, though it’s a cat-and-mouse game. As AI becomes smarter at creating images, detecting when AI has been used will become even more difficult. Answering the fast-evolving ethical questions won’t be much easier, either.
