Apple continues to introduce advanced technologies that meet users’ needs, but there is always room for more innovation. From smarter AI to a more polished user experience, a handful of features could make Apple devices even more distinctive and attractive.
I’m really excited about the release of Apple Intelligence, but I think there’s a lot of room for improvement. Apple could add a lot of features to make it even better. Here are my thoughts on what could take Apple’s AI suite to the next level. Check out my experience with macOS Sequoia and Apple Intelligence: My Initial Impressions.
1. More photo editing tools
Apple Intelligence offers one major photo editing feature called Clean Up, which is similar to Google’s Magic Eraser, allowing you to remove unwanted objects from photos. While it’s a welcome addition, it doesn’t feel groundbreaking, especially since Google and Samsung have offered similar tools for quite some time now.
Aside from Clean Up, Apple Intelligence offers very little in the way of photo editing tools. By contrast, the Google Pixel 9 has a number of impressive AI features, such as Add Me, which ensures everyone is included in group photos, or Reimagine, which lets you replace parts of an image by simply describing it with a text prompt. It would be great if Apple could take a cue from Google and offer similar features.
As someone who isn’t very skilled at photo editing, I would love a feature that would let me create filter effects based on a text prompt. I could describe the colors I want to stand out the most or the type of vibe I’m going for, and the AI model would create a filter that matches that description.
2. Create more realistic images
Apple also introduced a new app called Image Playground as part of Apple Intelligence, which allows users to create images from a text prompt in three different art styles: animation, illustration, and sketch. It integrates seamlessly with apps like Messages and even third-party platforms. While the implementation is good, I’m not a fan of the results.
The art styles look a bit too cartoonish for my taste, and I can’t see myself using Image Playground to create images and send them to friends or family. While the approach works well for Genmoji, which lets you create entirely new custom emoji via a text prompt, there should be more realistic art styles available.
One possible reason for this is that the image generation model runs on-device to preserve privacy. However, I wouldn’t mind a more realistic image generation model that runs on Apple’s own cloud to handle the higher computational demands, as long as it deletes all your data after processing your requests. Check out the various Apple devices that support Apple Intelligence.
3. Call screening
One of my favorite features of the Google Pixel is call screening, where Google Assistant answers calls for you and provides a live transcript, helping you decide whether you want to take the call. It can even answer calls from numbers you don’t know, and if it detects a robocall or spam call, Google Assistant will automatically hang up without bothering you at all.
It would be great if Siri could do something similar and generate automated responses based on context. For example, if your iPhone knows you’re out and about, Siri could automatically ask the delivery person to leave the package at your door.
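To make the idea concrete, the context-aware response described above could be sketched like this. Everything here is hypothetical: iOS exposes no such API today, and the `CallContext` fields and canned replies are invented purely for illustration.

```python
# Hypothetical sketch of context-aware call screening.
# None of these types or responses exist in iOS -- they only
# illustrate how context could drive an automated reply.
from dataclasses import dataclass


@dataclass
class CallContext:
    caller_type: str   # e.g. "delivery", "unknown", "spam"
    user_is_home: bool


def screening_response(ctx: CallContext) -> str:
    """Pick an automated reply based on what the phone already knows."""
    if ctx.caller_type == "spam":
        # Mirror the Pixel behavior: hang up without bothering the user.
        return "HANG_UP"
    if ctx.caller_type == "delivery" and not ctx.user_is_home:
        # The example from the article: the user is out and about.
        return "Please leave the package at the door."
    return "The person you called is unavailable. Please say who is calling and why."


print(screening_response(CallContext("delivery", user_is_home=False)))
```

The point of the sketch is only that the decision logic is simple once the device surfaces the context; the hard part, which Apple would have to supply, is classifying the caller and transcribing the conversation live.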
Unfortunately, Apple Intelligence is currently limited to generating a transcript and providing a summary of a phone call, but this is an area Apple should consider expanding into. Check out how to prevent robocalls from calling your number again.
4. Better live translation features
While you can use the built-in Translate app for basic tasks, like typing text and having it read aloud in another language, I can’t help but feel that Apple Intelligence could do much more here.
What I’d really like to see are real-time translation tools that work at the system level. A great example is Samsung’s Live Translate app, which can transcribe and translate conversations in real time during phone calls. Google offers similar features that work seamlessly across multiple apps, with all the processing happening on the device.
Given that both Samsung and Google have already done this, and that their models work well on-device, I see no reason why Apple couldn’t offer comparable translation features through Apple Intelligence. Check out the best Galaxy AI features Samsung devices offer (and how to use them).
5. Option to choose a third-party LLM
While Siri has gotten a major upgrade with features like on-screen awareness, it still may not be able to handle every request. To fill these gaps, it falls back on ChatGPT to generate responses or answer questions about images or documents.
While ChatGPT is great, I wish I could choose which third-party LLM I want to use, similar to how you can change your default search engine. We’ve already seen ChatGPT alternatives that excel at specific tasks. It would be even better if users could set preferences for different tasks; for example, automatically using Claude for image-related questions but switching to Gemini or ChatGPT for text generation.
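The per-task preference idea is essentially a lookup table with a fallback. A minimal sketch of that routing, assuming an imaginary settings dictionary (Apple exposes no such option today, and the task names here are invented):

```python
# Hypothetical per-task LLM routing, like choosing a default search engine
# but per kind of request. The mapping below is a user preference that
# does not exist in any Apple product -- it only illustrates the idea.
TASK_PREFERENCES = {
    "image": "Claude",    # image-related questions
    "text": "Gemini",     # text generation
}
DEFAULT_MODEL = "ChatGPT"  # today's only fallback, per the article


def pick_model(task: str) -> str:
    """Return the preferred LLM for a task, falling back to the default."""
    return TASK_PREFERENCES.get(task, DEFAULT_MODEL)


print(pick_model("image"))      # the user's choice for image questions
print(pick_model("summarize"))  # unlisted tasks fall back to the default
```

The appeal of this design is that the fallback preserves today’s behavior: users who never touch the setting keep getting ChatGPT for everything.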
So, those are the features I’d like to see in Apple Intelligence. However, there’s still a lot to look forward to as we see how Apple’s AI stacks up against Google and Samsung’s offerings. While it’s not yet publicly available, you can try out Apple Intelligence in the iOS 18.1 and macOS 15.1 betas. Just remember that your experience may not be entirely stable, as these are still early betas. You can read more about why I don’t need generative AI in every app I use now.