Google adds new AI-powered features for photo editing and image generation.

September 14, 2024
Harsh Gautam

Google unveiled its new Pixel 9 phone series at the Made by Google 2024 event on Tuesday. Aside from Gemini serving as the default assistant, the devices pack a host of AI-powered features.

The company is expanding its photo editing features and introducing new apps for storing and searching screenshots on-device, along with an AI-powered studio for image generation.

The Add Me feature lets the person taking a group photo appear in it. The feature uses a combination of augmented reality and several machine learning models: after the first photo is taken, the photographer is asked to swap places with someone else. The phone then guides the second person to retake the shot, and AI models align both frames to produce a single photo with everyone in it.

Last year, the company introduced the Magic Editor feature with the Pixel 8 and 8 Pro, which includes a Magic Eraser capability for removing unwanted objects or people. With the Pixel 9 series, Google is adding two new capabilities to Magic Editor.

The first is auto framing, which recomposes an image to bring objects or people into focus; Google says Magic Editor will offer a few options for users to choose from. The auto framing tool can also use generative AI to expand the image. With the second feature, users can type in the kind of background they want in their photos, and AI will generate it for them.

New screenshot and studio apps

Google is introducing new Screenshots and Pixel Studio apps for the Pixel 9 series. The Screenshots app saves screenshots taken on the phone and lets users search for information in them, such as the Wi-Fi details for a holiday rental.

Notably, Google Photos already has a search capability that lets users look up information such as their license plate or passport number. The new Screenshots app, however, works only on-device.

The company is also introducing a new Pixel Studio app for generating AI-powered images on the device. Google says the app uses both an on-device diffusion model and Google's cloud-based models. Users can create an image by typing in a prompt and then change its style within the app. Google says the app can't generate human faces yet, likely a response to Gemini's historical-image accuracy lapse earlier this year, but it doesn't say whether there are other guardrails against generating potentially harmful images.