As we saw in last week’s article, the ability of artificial intelligence (AI)-based technologies to create fake images and videos has advanced by leaps and bounds.
This artificially generated content, known as “deepfakes,” can appear extremely realistic, making it increasingly difficult for the human eye to separate fact from fiction. However, there are certain clues and methods that can help professionals identify them.
Metadata analysis to detect deepfakes
Metadata analysis is one of the first and most fundamental techniques for verifying the authenticity of an image or video. Metadata is a kind of “fingerprint” that accompanies digital files and records information such as the date and time of creation, the camera used, geolocation, technical settings, and any subsequent modifications.
For visual content verification professionals, understanding and analyzing this data can be the key to detecting manipulations such as those discussed below (a short code sketch after the list shows how to read this data in practice):
- Inconsistencies in the date and time of creation: One of the clearest indications of manipulation is a discrepancy between the purported date of capture and the date recorded in the metadata. For example, if an image is claimed to have been taken during a specific event but the metadata shows it was created recently, this could indicate that the image has been faked or edited to change its original context.
- Device information and technical settings: If the metadata indicates that an image was taken with a device that did not exist on the claimed date of the shot, or if the technical settings do not match the conditions under which the photo was supposedly taken, this may raise suspicions. In addition, some cameras and cell phones embed unique identifiers that can help verify an image’s origin and reveal tampering.
- Geolocation: The GPS coordinates stored in the metadata can be checked against the location depicted in the image or video. If an image purports to show a specific place but the metadata points elsewhere, this may indicate that the image has been manipulated or that a stock image has been used out of context.
- Track edits and manipulations: The metadata may include an edit history indicating whether the file has been modified since it was originally created, including the software used to edit the image or video. If a supposedly “original” image shows signs of having been edited, this is a strong indication of tampering.
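As a minimal illustration, the Python sketch below dumps these EXIF fields using the Pillow library. It is a starting point, not a complete verification workflow: it assumes Pillow is installed (pip install Pillow), and the file name photo.jpg is a placeholder.

```python
# Minimal EXIF inspection sketch with Pillow (pip install Pillow).
# "photo.jpg" is a placeholder; a real check would compare these values
# against the image's claimed date, device, and location.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def dump_exif(path):
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to names
        print(f"{tag}: {value}")
    # GPS data lives in a nested IFD (0x8825 is the GPSInfo pointer).
    for tag_id, value in exif.get_ifd(0x8825).items():
        print(f"GPS {GPSTAGS.get(tag_id, tag_id)}: {value}")

dump_exif("photo.jpg")
```

Keep in mind that the absence of metadata is also informative: many editors and social platforms strip EXIF data on export, so a clean file is not automatically a trustworthy one.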
Examination of visual inconsistencies
Although deepfakes may appear realistic at first glance, they often contain subtle errors. Some key areas to inspect include the following (a simple sharpness heuristic is sketched after the list):
- Shadows and reflections: Misaligned shadows or non-existent reflections can be signs that the image has been altered.
- Fuzzy edges: Edges between objects and their background may be less defined in AI-generated images.
- Facial anomalies: Deepfake videos often betray themselves through inconsistencies in lip sync, unnatural eye movements, or facial expressions that do not match the emotion being conveyed.
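As a rough companion to the “fuzzy edges” check, the sketch below scores edge sharpness via the variance of the Laplacian, a common blur heuristic. It assumes OpenCV is installed (pip install opencv-python); the threshold and file name are illustrative, and a low score is a prompt to look closer, not evidence of a deepfake.

```python
# Crude edge-sharpness heuristic: low variance of the Laplacian means
# soft, poorly defined edges. A red flag at most, never proof on its own.
import cv2

def edge_sharpness(path, threshold=100.0):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()  # variance of edge response
    print(f"sharpness score: {score:.1f}")
    return score >= threshold  # False -> unusually soft edges; inspect further

edge_sharpness("suspect.jpg")  # placeholder file name
```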
Deepfake detection tools
There are specialized tools developed to identify deepfakes, which analyze patterns in images and videos that are characteristic of AI manipulation. Some of the most widely used are listed below, followed by a sketch of the error-level-analysis technique that several of them build on:
- Deepware Scanner: Allows you to scan videos and detect the presence of deepfakes using advanced machine learning algorithms.
- Forensically: This tool allows detailed forensic analysis of images, detecting irregularities in pixels, structure and compression patterns.
- ExifTool: An open source tool for extracting, modifying, and analyzing metadata from image and video files. It is widely used by researchers and journalists to verify the authenticity of multimedia files.
- Jeffrey’s Image Metadata Viewer: An online tool that allows easy viewing of image metadata. It is useful for quick review without the need to install specialized software.
- FotoForensics: An online service that, in addition to analyzing metadata, provides advanced image forensics tools, such as detecting file structure errors or identifying manipulations.
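Several of these services rely on error level analysis (ELA): re-saving a JPEG at a known quality and diffing it against the original highlights regions that compress differently, which may indicate local editing. The Python sketch below, using Pillow, illustrates the idea only; it is not a reproduction of any particular tool’s implementation, and the quality setting and file names are placeholders.

```python
# Simplified error level analysis (ELA) sketch -- an illustration of the
# technique, not any specific tool's implementation.
import io
from PIL import Image, ImageChops

def ela(path, quality=95):
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # recompress at a known quality
    diff = ImageChops.difference(original, Image.open(buf))
    # Scale the residual so compression differences become visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * (255 // max_diff)))

ela("suspect.jpg").save("ela_result.png")  # placeholder file names
```

In the resulting image, fairly uniform noise is expected; a region that stands out sharply from its surroundings is what warrants a closer look.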
Deepfakes in marketing
In the world of marketing, GenAI technology has opened up new and controversial possibilities, allowing brands to digitally resurrect iconic figures to star in commercials and advertising campaigns.
This use of artificial intelligence has sparked both admiration and ethical debate, especially when it comes to representing deceased celebrities, as in the case of Spanish artist Lola Flores.
Benefits of using deepfakes in marketing
- Emotional resonance: Using deceased celebrities in advertising can have a powerful emotional impact on audiences. These figures often have a significant cultural legacy, allowing brands to connect with audiences on a deeper level by evoking nostalgia and admiration.
- Innovation and differentiation: Campaigns using deepfakes are seen as innovative, which can differentiate a brand in a saturated market. This approach creates an element of surprise that can capture the public’s attention and generate conversations in both traditional media and social networks.
- Revive legacies: By bringing back iconic figures, brands also help preserve and revive their legacies. This not only benefits the brand, but can also have a positive impact on society’s collective memory.
Impersonation and the dangers of deepfakes
In addition to revolutionizing marketing and advertising, GenAI technology poses serious phishing and digital security threats. As these technologies become more accessible and sophisticated, the risks associated with the creation and distribution of fake content are growing, affecting not only public figures, but ordinary people as well.
Deepfakes allow cybercriminals to create videos, images, or audio that show a person saying or doing things that never happened. This fake content can be used to commit fraud or scams, to defame or discredit a person, or as a method of extortion.
On the other hand, there is a risk that governments will use this technology as a pretext to discredit authentic images and videos showing abuses or critical situations in conflict zones. By labeling real evidence of atrocities or human rights violations as deepfakes, governments could attempt to confuse public opinion and discredit journalists, activists, and witnesses who seek to expose the truth.
This disinformation strategy would not only undermine trust in legitimate sources of information, but could also hinder accountability and prolong conflicts, as allegations and evidence could be easily dismissed as digital fabrications, preventing the international community from taking appropriate action based on verifiable facts.
“AI-generated” image labeling
Meta’s recent update to Instagram, which introduced the “Created with AI” tag on posts, has sparked controversy due to its inaccurate application. Although the label was originally intended to help users identify AI-generated content, the tool has been incorrectly tagging real images that were simply retouched, for example by adjusting the lighting. This has led artists and photographers to express frustration at seeing their work incorrectly labeled as AI-generated.
As a result, these types of errors not only affect the perception of creative work, but also raise questions about the reliability of automated detection tools.
Meta has acknowledged the problem and has begun changing the label to “AI information” while it works to improve the system, but the situation highlights the challenges and limitations of current technologies in distinguishing authentic content from content generated or altered by artificial intelligence.
Conclusion
The ability to identify AI-generated fake videos and photos is an increasingly important skill. Combining metadata analysis, specialized detection tools, and a trained eye for visual inconsistencies can significantly reduce the risk of professionals being fooled by manipulated content.
Deepfakes are a technology with considerable potential for both good and evil. While they can be a powerful tool in areas such as marketing and entertainment, they also open the door to a number of dangers that threaten privacy, security, and trust in information. Impersonation, fraud, misinformation, and security threats are just some of the risks associated with deepfakes.
Addressing these challenges requires a collaborative effort between governments, companies, technology platforms, and users to mitigate the dangers that deepfakes pose to society.
Resources:
[1] Meta – Labeling AI-Generated Images on Facebook, Instagram and Threads
At Block&Capital, we strive to create an environment where growth and success are accessible to all. If you’re ready to take your career to the next level, we encourage you to join us.