Researchers at Trail of Bits report that they successfully exploited the image-scaling step that AI systems such as Gemini use to process images included in prompts. This let the team deliver a set of hidden instructions to the AI, which then pulled data from a Google Calendar account and sent it to the researchers, all without notifying the user.
Image-scaling exploits of this kind were once more common, and the researchers note that they “were utilized for model backdoors, evasion, and poisoning mainly against older computer vision systems that imposed a fixed image size.” Although such attacks have become rarer, a comparable technique can evidently smuggle hidden instructions to a large language model like Google’s Gemini, which raises AI-safety concerns as Gemini and similar AIs move into our homes and potentially evolve beyond our understanding.
An exploit of this kind works because systems built around LLMs like GPT-5 and Gemini automatically downscale high-resolution images so they can be processed faster and more efficiently. That downscaling step is exactly what the researchers exploited to slip hidden instructions to the chatbot. The specific method varies by system, since each one uses a different image-resampling algorithm, but all of those algorithms produce what the researchers call “aliasing artifacts” that can be used to hide patterns within an image. The patterns only emerge once the image is downscaled, made visible by the artifacting.
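To make the resampling differences concrete, here is a minimal sketch, assuming the Pillow imaging library (the researchers’ write-up is not tied to this tooling); the input filename and the 256 x 256 target size are hypothetical stand-ins for whatever a given pipeline actually uses:

```python
from PIL import Image

# Downscale the same image with several common resampling filters.
# Each filter weights source pixels differently, so each one produces
# different aliasing artifacts at the reduced size.
img = Image.open("input.png")   # hypothetical input file
target = (256, 256)             # hypothetical pipeline target size

filters = {
    "nearest": Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic": Image.Resampling.BICUBIC,
}
for name, resample in filters.items():
    img.resize(target, resample=resample).save(f"downscaled_{name}.png")
```

Because each filter blends source pixels differently, an attacker has to craft the image for the specific filter and target size of the system being attacked, which is why the technique differs from one AI pipeline to another.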
In the example the researchers provide, the image submitted to Gemini contains areas of black background that shift to red during the resampling phase. That shift reveals hidden text containing the instructions once the image is scaled down.
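The sketch below illustrates the principle in its simplest form, using nearest-neighbour resampling (the pipelines Trail of Bits targeted use other filters, so this is not their exact method). It assumes a hypothetical payload.png, for instance red text on a black background, and hides it in a mostly black cover image so that only the pixels the downscaler samples carry the payload colour:

```python
from PIL import Image

N = 8  # assumed downscale factor of the target pipeline (hypothetical)

# Hypothetical payload: the text that should appear after downscaling,
# e.g. red instructions on a black background.
payload = Image.open("payload.png").convert("RGB")
w, h = payload.size

# Build a high-resolution, mostly black cover image. In each N x N block,
# only a small patch near the block centre carries the payload colour, so
# at full resolution the image looks nearly uniform black.
cover = Image.new("RGB", (w * N, h * N), (0, 0, 0))
px_cover, px_payload = cover.load(), payload.load()
for y in range(h):
    for x in range(w):
        for dy in (N // 2 - 1, N // 2):
            for dx in (N // 2 - 1, N // 2):
                px_cover[x * N + dx, y * N + dy] = px_payload[x, y]
cover.save("cover.png")

# Nearest-neighbour downscaling keeps roughly one pixel per block, near
# the block centre, so the hidden payload re-emerges at the small size.
revealed = cover.resize((w, h), resample=Image.Resampling.NEAREST)
revealed.save("revealed.png")
```

At full resolution the cover shows only a faint grid of coloured dots; after the downscale, those are precisely the pixels that survive, so the hidden text becomes legible to the model’s vision pipeline.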