June 20, 2024
In an effort to address growing concerns about the manipulation of images through artificial intelligence (AI), Apple has introduced a new feature suite called Apple Intelligence. It not only lets users create new emoji, edit photos, and generate images from text, but also embeds metadata in each image indicating that it was generated with AI. Apple’s commitment to transparency extends to marking the metadata of altered images, including seemingly simple edits such as removing background objects.
During a recent podcast appearance with prominent blogger John Gruber, Apple’s senior vice president of software engineering, Craig Federighi, emphasized the company’s decision not to build technology that generates realistic images of people or places. By adding provenance information to images touched by its AI, Apple joins a growing list of companies, including TikTok, OpenAI, Microsoft, and Adobe, working to help users identify manipulated content.
However, despite these efforts, media and information experts warn that the problem of manipulated images is likely to worsen, especially in the lead-up to the contentious 2024 US presidential election. The term “slop” has gained popularity as a label for the flood of low-quality, often misleading content churned out by AI. User-friendly AI tools for generating text, videos, and audio have made it easy to create deceptive content with little technical knowledge.
While AI-created content has become increasingly believable, some of the tech industry’s biggest players have suffered notable failures in this domain. Google, for instance, faced a significant mishap when the AI Overviews summaries attached to its search results began serving incorrect and potentially harmful information. One particularly alarming suggestion was to add glue to pizza to keep the cheese from slipping off.
In contrast, Apple has taken a more cautious approach to AI. The company plans to offer its AI tools in a public “beta” test later this year, a sign that it intends to keep refining the technology before making it widely available. Apple has also partnered with the AI startup OpenAI to extend the capabilities of its iPhones, iPads, and Mac computers.
As AI grows more capable and more accessible, it is crucial for companies to prioritize transparency and accountability. By labeling AI-generated content and adding provenance metadata, Apple and other tech giants are taking steps to combat the spread of manipulated images. Staying ahead of AI’s ability to create deceptive content remains an ongoing challenge, but efforts like Apple’s are a critical part of ensuring that the images we see are a true reflection of reality.