As the year 2025 came to a close, Instagram head Adam Mosseri expressed his concerns about the impact of AI on authenticity. In a post, Mosseri noted that AI tools capable of creating realistic content are becoming increasingly accessible, making it harder to distinguish real content from fake.
According to Mosseri, the solution lies in finding a way to label real media rather than trying to catch every fake. He proposed a system in which camera manufacturers would cryptographically sign images at the moment of capture, creating a chain of custody that would help determine what is authentic; content without that provenance would warrant more scrutiny. Despite the efforts of AI providers, progress towards reliable deepfake-labelling technology has been slow.
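The sign-at-capture idea Mosseri describes can be sketched in a few lines. This is a minimal illustration, not any real camera firmware or the C2PA standard: the device ID, key, and record fields are invented for the example, and it uses HMAC from Python's standard library as a stand-in for the asymmetric signatures a real provenance scheme would use (so that verifiers never hold the signing secret).

```python
import hashlib
import hmac
import json

# Hypothetical per-device secret, provisioned at manufacture. A real
# scheme would embed an asymmetric key pair in secure hardware instead.
DEVICE_KEY = b"example-device-secret"

def sign_capture(image_bytes: bytes, device_id: str) -> dict:
    """Produce a signed provenance record for an image at capture time."""
    record = {
        "device_id": device_id,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Check the record against the image; any edit breaks the chain."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if claimed.get("image_sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

photo = b"\x89PNG...raw sensor data..."
rec = sign_capture(photo, "cam-001")
print(verify_capture(photo, rec))            # True: image is untouched
print(verify_capture(photo + b"edit", rec))  # False: custody chain broken
```

The key design point is that verification only requires the signed record travelling alongside the image; platforms like Instagram could check it on upload without trusting the uploader.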
AI-generated content is likely to remain a major concern for social media platforms like Instagram. As AI technology evolves, companies across the industry, from Instagram to other major players such as Nvidia, Ring, and OpenAI, will need effective ways to preserve authenticity and trust in online content. What happens next will depend on whether they can develop and deploy reliable technologies for identifying and labelling AI-generated content.