Detecting Deepfakes Through Metadata
Artificial intelligence is gradually becoming our partner in daily tasks. It is a great tool when people know how to use it, but we must remember that we are not alone in the digital space: there are criminals too. Artificial intelligence powers technologies such as deepfakes, voice cloning, and more. What is a deepfake? According to Wikipedia, deepfakes are images, videos, or audio that have been edited or generated using artificial intelligence, AI-based tools, or audio–video editing software.
However, threat actors are abusing this new technology and related branches of artificial intelligence for many purposes, such as:
- Malware development
- Social engineering
We have already seen the first AI-powered ransomware, revealed by ESET researchers, and there have been many breaches involving fraudulent IT workers who even passed job interviews using deepfake technology. In this article I will show how we can analyze and detect AI-generated images by examining metadata alone, rather than relying on online platforms (although those can sometimes be useful).
I generated an image using a ChatGPT prompt and downloaded ExifTool. After that, I typed the command exiftool.exe sample.jpg (or sample.png) and pressed Enter. The metadata then appeared on the screen:
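The same check can be scripted. Below is a minimal sketch that parses ExifTool's "Tag : Value" text output and flags values that match common AI-provenance markers (such as the IPTC Digital Source Type value `trainedAlgorithmicMedia` or tool names). The sample output and the marker list are illustrative assumptions, not a real dump; in practice you would feed in the output of `exiftool.exe sample.jpg`.

```python
# Sketch: flag AI-generation indicators in ExifTool-style output.
# SAMPLE_OUTPUT is illustrative; real output comes from running ExifTool.

SAMPLE_OUTPUT = """\
File Name                       : sample.png
File Type                       : PNG
Digital Source Type             : trainedAlgorithmicMedia
Credit                          : Made with ChatGPT
"""

# Substrings that commonly appear in metadata of AI-generated images
# (IPTC Digital Source Type, provenance/tool names); extend as needed.
AI_MARKERS = ("trainedalgorithmicmedia", "openai", "chatgpt",
              "dall-e", "c2pa", "midjourney", "stable diffusion")

def parse_exiftool(text):
    """Parse ExifTool 'Tag : Value' lines into a dict."""
    meta = {}
    for line in text.splitlines():
        if ":" in line:
            tag, _, value = line.partition(":")
            meta[tag.strip()] = value.strip()
    return meta

def ai_indicators(meta):
    """Return the tags whose values match a known AI marker."""
    return {tag: value for tag, value in meta.items()
            if any(m in value.lower() for m in AI_MARKERS)}

meta = parse_exiftool(SAMPLE_OUTPUT)
print(ai_indicators(meta))
# -> {'Digital Source Type': 'trainedAlgorithmicMedia',
#     'Credit': 'Made with ChatGPT'}
```

A non-empty result is only a hint, not proof: absence of these markers does not mean an image is genuine, as the next section explains.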
It may seem like a simple method, but it can actually be very useful for indicating whether an image was generated by artificial intelligence. Always remember, though, that threat actors can alter the metadata, or in simple terms tamper with it, by stripping it or adding false information to mislead us as security researchers.
PoC: Removed AI-Generated Artifacts:
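The tampering problem can be illustrated with a short sketch: once the provenance tags are stripped (for example with ExifTool's `exiftool -all= image.jpg`, which removes all metadata) or replaced with plausible camera fields, a marker-based check silently passes. The dictionaries and marker list below are illustrative assumptions, not real image data.

```python
# Sketch: marker-based detection fails once metadata is tampered with.
AI_MARKERS = ("trainedalgorithmicmedia", "chatgpt", "c2pa")

def looks_ai_generated(meta):
    """True if any metadata value contains a known AI marker."""
    return any(any(m in str(v).lower() for m in AI_MARKERS)
               for v in meta.values())

original = {  # illustrative tags as ExifTool might report them
    "Digital Source Type": "trainedAlgorithmicMedia",
    "Credit": "Made with ChatGPT",
}
tampered = {  # attacker replaced provenance with plausible camera tags
    "Make": "Canon",
    "Model": "Canon EOS 90D",
}

print(looks_ai_generated(original))  # -> True
print(looks_ai_generated(tampered))  # -> False: metadata alone is not proof
```

This is why metadata should be treated as one signal among several, to be combined with visual and forensic analysis.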
Enjoy :)