Analyzing a Malicious Prompt Generator
Artificial intelligence makes life easier for students, researchers, programmers, and many others. However, as I mentioned in my previous articles, we are not alone on this planet: there are individuals with criminal mindsets who abuse legitimate technology for malicious purposes. The ESET security research team recently discovered the first known AI-powered ransomware this year, which is quite interesting.
Before we dive deeper into this article, let's walk through how PromptLock works and how it was uncovered:
- The threat actor wrote C code and compiled it into an executable using GCC.
- The code contained the full chain for generating a Lua script by sending requests to a local LLM (large language model) server.
- The ESET team obtained a sample and analyzed it in a controlled environment (a sandbox), where they observed it attempting to connect to the local LLM server and sending queries to generate a malicious script.
- The ESET team then inspected and analyzed the embedded strings and identified the GPT model being used.
I have recreated a similar scenario, but with harmless functionality, to demonstrate how we can identify such activity. This may help your company or organization prepare for potential threats.
You can see the actual simulated C code below:
After compiling the code into an executable using gcc.exe, you can see the result below:
Let's try to intercept the network connection using Wireshark to inspect the HTTP or DNS requests that the executable is attempting to make:
As you can see from the captured traffic, the executable sends a prompt to the local LLM to generate a basic, harmless script. Ultimately, this lets us determine whether a sample relies on artificial intelligence technology to generate its payload. However, threat actors can always try to mimic this by embedding fake strings or requests to mislead security researchers, so always remember to double-check and validate the code structure.
Enjoy :)