Aside from being wary about which AI services you use, there are other steps organizations can take to protect against having data exposed.
Researchers at Microsoft have recently discovered a new type of “jailbreak” attack that they are referring to as the “Skeleton Key.” This attack can bypass the safeguards that prevent generative artificial intelligence (AI) systems from generating sensitive and hazardous data.
According to a blog post on the Microsoft Security website, the Skeleton Key attack is executed by presenting a generative AI model with text that requests that it enhance its encoded security features.
The researchers provided an example in which an AI model was requested to produce a recipe for a “Molotov Cocktail,” a straightforward firebomb that gained popularity during World War II. The model declined, citing safety regulations.
In this instance, the Skeleton Key merely informed the model that the user was an expert in a laboratory environment. The model subsequently acknowledged that it was modifying its behavior and generated what appeared to be a feasible Molotov Cocktail recipe.
Although the threat may be attenuated by the fact that most search engines can locate comparable concepts, there is one instance in which this type of attack could be catastrophic: data that contains financial and personally identifiable information.
The Skeleton Key attack is compatible with most generative AI models, such as GPT-3.5, GPT-4o, Claude 3, Gemini Pro, and Meta Llama-3 70B, as per Microsoft.
Large language models, including Microsoft’s CoPilot, Google’s Gemini, and OpenAI’s ChatGPT, are trained on data sets frequently called “internet-sized.”
Although that may be an exaggeration, the fact remains that numerous models contain trillions of data points, which collectively comprise entire social media networks and information depository sites like Wikipedia.
The extent to which engineers who trained a given large language model were selective in their data selection is the sole constraint on the existence of personally identifiable information, such as names associated with phone numbers, addresses, and account numbers, within its dataset.
Moreover, any business, agency, or institution developing its own AI models or modifying enterprise models for commercial or organizational purposes also relies on the training dataset of its base model.
For instance, if a bank connected a chatbot to its customers’ private data and relied on its existing security measures to prevent the model from outputting PID and private financial data, it is feasible that a Skeleton Key attack could deceive specific AI systems into sharing sensitive data.
According to Microsoft, there are numerous measures that organizations can implement to prevent this from occurring. These consist of secure monitoring systems and hardcoded input/output filtration to prevent advanced prompt engineering beyond the system’s safety threshold.
Now, you can survive those agonizing web conference calls by pretending to be in the officen while enjoying a margarita…
Optimistic Rollups and ZK Rollups have emerged as market leaders, contending for dominance through various approaches to scalability, speed, and…
Shytoshi Kusama hints at the launch of the TREAT token, aimed at strengthening Shiba Inu’s ecosystem with trust, governance, and…
SHIB's burn rate soared over 4100% today as the crypto market rallied, with leading analysts suggesting a potential Shiba Inu…
Analysts predict Ether price could reach a $20,000 cycle top, with momentum building in early 2025. In the upcoming weeks,…
Elon Musk scored a significant win against the US SEC as the court rejected the Commission's request to sanction him.…