Cybersecurity researchers have recently disclosed a significant security vulnerability in the Vanna.AI library, which has the potential to be exploited for remote code execution (RCE) through prompt injection techniques. This vulnerability, tracked as CVE-2024-5565 and carrying a CVSS score of 8.1, involves a prompt injection flaw in the “ask” function of the Vanna.AI library. Supply chain security firm JFrog highlighted the severity of this issue, explaining how attackers could manipulate the library into executing arbitrary commands.
Vanna.AI is a Python-based machine learning library designed to facilitate users in querying their SQL databases by simply asking questions. These queries are translated into equivalent SQL statements using a large language model (LLM). The rapid deployment of generative AI models in recent years has brought to light numerous risks, particularly as malicious actors have found ways to exploit these tools by providing adversarial inputs that circumvent their built-in safety mechanisms.
One prominent type of attack that has emerged in this context is prompt injection. This technique involves manipulating AI models to disregard their safety protocols, allowing them to generate harmful or unauthorized content. Prompt injection attacks can occur indirectly, where a system processes third-party controlled data, such as emails or documents, to launch a malicious payload. These attacks can also manifest as many-shot or multi-turn jailbreaks, where an attacker starts with benign dialogue and gradually steers the conversation towards a prohibited objective.
A more advanced form of this attack is the Skeleton Key jailbreak. According to Mark Russinovich, Chief Technology Officer of Microsoft Azure, this method uses a multi-step strategy to force a model to ignore its guardrails. Once these guardrails are bypassed, the model becomes incapable of distinguishing malicious requests from legitimate ones. Russinovich explains that the Skeleton Key jailbreak differs from other methods like Crescendo because, once successful, it allows the model to create responses to previously forbidden questions, irrespective of ethical or safety concerns.
The recent findings from JFrog, independently confirmed by researcher Tong Liu, illustrate the severe implications of prompt injections, especially when tied to command execution. CVE-2024-5565 exploits Vanna’s text-to-SQL generation feature, which creates SQL queries executed and graphically presented to users via the Plotly graphing library. This process uses an “ask” function—such as vn.ask(“What are the top 10 customers by sales?”)—which serves as one of the primary API endpoints for generating SQL queries to be run on the database.
This functionality, combined with the dynamic generation of Plotly code, opens a security gap that allows attackers to submit specially crafted prompts embedding commands to be executed on the underlying system. JFrog notes, “The Vanna library uses a prompt function to present the user with visualized results, it is possible to alter the prompt using prompt injection and run arbitrary Python code instead of the intended visualization code.” Specifically, the vulnerability arises when external input to the library’s ‘ask’ method with ‘visualize’ set to True (the default behavior) leads to remote code execution.
In response to this discovery, Vanna has issued a hardening guide to warn users about the potential misuse of the Plotly integration, advising that it should be exposed only in a sandboxed environment. This measure aims to prevent unauthorized command execution and mitigate the risk of exploitation.
Shachar Menashe, Senior Director of Security Research at JFrog, emphasized the broader implications of this vulnerability: “This discovery demonstrates that the risks of widespread use of GenAI/LLMs without proper governance and security can have drastic implications for organizations. The dangers of prompt injection are still not widely well known, but they are easy to execute. Companies should not rely on pre-prompting as an infallible defense mechanism and should employ more robust mechanisms when interfacing LLMs with critical resources such as databases or dynamic code generation.”
The revelation of CVE-2024-5565 underscores the need for heightened awareness and robust security measures in the deployment of AI technologies. As generative AI continues to evolve and integrate into various sectors, the importance of safeguarding these systems against prompt injection and other adversarial attacks cannot be overstated. Organizations must prioritize comprehensive security strategies to protect their critical assets and ensure the safe and responsible use of AI technologies.
Follow us on (Twitter) for real time updates and exclusive content.
Interesting Article : Critical Vulnerability in Fortra FileCatalyst Workflow: CVE-2024-5276
Pingback: GitLab Releases Crucial Security Patch Addressing Major CI/CD Pipeline Vulnerability