Photoshop’s answer to Dall-E hints at the future of photo editing

This year’s Adobe Max 2022 went big on 3D design and mixed-reality headsets, but the AI-generated elephant in the room was the emergence of text-to-image generators like Dall-E. How does Adobe plan to respond to these revolutionary tools? Slowly and cautiously, according to the main keynote – but an important feature buried in the new version of Photoshop shows that the process has already started.

Near the end of the release notes for the latest version of Photoshop (v24.0) is an experimental feature called the “Neural Background Filter”. What does it do? Like Dall-E and Midjourney, it lets you “create a unique background based on a description”. Simply type in a background description, select “Generate” and choose your preferred result.

This is far from being an Adobe Dall-E competitor. It’s only available in Photoshop Beta, a separate test bed from the main app, and you’re currently limited to typing in colors to produce different image backgrounds, rather than conjuring weird creations from the darkest corners of your imagination.

But the ‘neural background filter’ is clear evidence that Adobe, while cautious, is dipping its toes further into AI image generation. And its keynote at Adobe Max shows that it believes this frictionless way of creating visuals is undoubtedly the future of Photoshop and Lightroom – once the small matter of copyright and ethical standards is resolved.

Creative co-pilots

Adobe didn’t actually announce the ‘neural background filter’ at Adobe Max 2022, but it did indicate where the technology will eventually end up.

David Wadhwani, Adobe’s head of digital media, said the company has the same technology as Dall-E, Stable Diffusion, and Midjourney; it has simply opted not to implement it in its applications yet. “Over the past few years, we have invested more and more in Adobe Sensei, our artificial intelligence engine. I like to refer to Sensei as your creative assistant,” Wadhwani said.

“We are working on new capabilities that can take our major flagship applications to whole new levels. Imagine being able to ask your creative assistant in Photoshop to add an object to the scene simply by describing what you want, or ask your creative co-pilot to give you an alternative idea based on what you’ve already built. It’s like magic.” That would certainly go a few steps further than the Sky Replacement tool in Photoshop.

(Image credit: Adobe)

He said this while standing in front of a mockup of what a Photoshop with Dall-E-style powers (above) would look like. The message was clear – Adobe could build text-to-image generation at this scale right now, but has chosen not to.

But it was Wadhwani’s Lightroom example that showed how this type of technology could be more logically integrated into Adobe’s creative applications.

“Imagine if you could combine generative tech and Lightroom. You could ask Sensei to turn night into day, or a sunny photo into a beautiful sunset. Move shadows or change the weather. All of this is possible today with the latest advances in generative technology,” he explained, in a pointed reference to Adobe’s new competitors.

So why hold back while others are eating its AI-generated lunch? The official reason, which certainly has some merit, is that Adobe has a responsibility to make sure this new power isn’t used recklessly.

“For those unfamiliar with generative AI, it can simply conjure up an image from a text description,” Wadhwani explained. “We’re really excited about what this can do for all of you, but we also want to do it carefully. We want to do it in a way that protects and supports the needs of creators.”

What does this mean in practice? Although it’s still a bit vague, Adobe will be moving slower and more carefully than the likes of Dall-E. “This is our commitment to you,” Wadhwani told the Adobe Max audience. “We approach generative technology from a creator-focused perspective. We believe AI should enhance human creativity, not replace it, and should benefit creators, not displace them.”

This partly explains why Adobe has, so far, gone no further than the ‘neural background filter’ in Photoshop. But that’s also only part of the story.

The long game

Despite being a giant of creative apps, Adobe is still highly innovative – just check out some of the projects at Adobe Labs, especially those that can turn real-world objects into 3D digital assets.

But Adobe is also vulnerable to being blindsided by fast-moving competitors. The likes of Photoshop and Lightroom were designed as desktop-first tools, which is how Canva stole a march with its easy-to-use, cloud-based design tools. That’s why Adobe paid $20 billion for Figma last month – more than Facebook paid for WhatsApp in 2014.

Laptop screen showing various AI-generated images with Dall-E 2

(Image credit: Microsoft)

Could the same thing happen with the likes of Dall-E and Midjourney? Quite possibly: Microsoft just announced that Dall-E 2 will be integrated into its new graphic design app (above), part of its Microsoft 365 productivity suite. AI image generators are heading into the mainstream, whatever Adobe’s reservations about the speed at which that’s happening.

However, Adobe also has a point about the ethical issues surrounding this powerful new technology. A growing body of copyright questions has arrived with AI image generation – and it’s understandable that a founder of the Content Authenticity Initiative (CAI), designed to tackle deepfakes and other manipulated content, might be reluctant to go all-in on generative AI.

Still, Adobe Max 2022 and the arrival of the ‘neural background filter’ show that AI image generation will undoubtedly be a huge part of Photoshop, Lightroom, and image editing in general – it may just take a little longer to appear in your favorite Adobe application.