By now, ChatGPT, Claude, and other large language models have accumulated so much human knowledge that they're far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. However, it's not obvious exactly how these models represent such abstract concepts in the first place, based on the knowledge they contain.
Now a team from MIT and the University of California San Diego has developed a way to test whether a large language model (LLM) contains hidden biases, personalities, moods, or other abstract concepts. Their method can zero in on connections within a model that encode for a concept of interest. What's more, the method can then manipulate, or "steer," these connections to strengthen or weaken the concept in any answer a model is prompted to give.
The team proved their method could quickly root out and steer more than 500 general concepts in some of the largest LLMs used today. For instance, the researchers could home in on a model's representations for personalities such as "social influencer" and "conspiracy theorist," and stances such as "fear of marriage" and "fan of Boston." They could then tune these representations to enhance or minimize the concepts in any answers that a model generates.
In the case of the "conspiracy theorist" concept, the team successfully identified a representation of this concept within one of the largest vision language models available today. When they enhanced the representation, and then prompted the model to explain the origins of the famous "Blue Marble" image of Earth taken from Apollo 17, the model generated an answer with the tone and perspective of a conspiracy theorist.
The team acknowledges there are risks to extracting certain concepts, which they also illustrate (and caution against). Overall, however, they see the new approach as a way to illuminate hidden concepts and potential vulnerabilities in LLMs that could then be turned up or down to improve a model's safety or enhance its performance.
"What this really says about LLMs is that they have these concepts in them, but they're not all actively exposed," says Adityanarayanan "Adit" Radhakrishnan, assistant professor of mathematics at MIT. "With our method, there's ways to extract these different concepts and activate them in ways that prompting cannot give you answers to."
The team published their findings today in a study appearing in the journal Science. The study's co-authors include Radhakrishnan, Daniel Beaglehole and Mikhail Belkin of UC San Diego, and Enric Boix-Adserà of the University of Pennsylvania.
A fish in a black box
As use of OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, and other artificial intelligence assistants has exploded, scientists are racing to understand how models represent certain abstract concepts such as "hallucination" and "deception." In the context of an LLM, a hallucination is a response that is false or contains misleading information, which the model has "hallucinated," or constructed erroneously as fact.
To find out whether a concept such as "hallucination" is encoded in an LLM, scientists have often taken an approach of "unsupervised learning," a type of machine learning in which algorithms broadly trawl through unlabeled representations to find patterns that might relate to a concept such as "hallucination." But to Radhakrishnan, such an approach can be too broad and computationally expensive.
"It's like going fishing with a big net, trying to catch one species of fish. You're gonna get a lot of fish that you have to look through to find the right one," he says. "Instead, we're going in with bait for the right species of fish."
He and his colleagues had previously developed the beginnings of a more targeted approach with a type of predictive modeling algorithm known as a recursive feature machine (RFM). An RFM is designed to directly identify features or patterns within data by leveraging a mathematical mechanism that neural networks, a broad category of AI models that includes LLMs, implicitly use to learn features.
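As a rough illustration of the idea behind an RFM (a simplified sketch, not the paper's code), one formulation from the group's earlier work alternates between fitting a kernel predictor and computing the average gradient outer product (AGOP) of that predictor, using the AGOP to reweight the input features on the next pass. The kernel choice, hyperparameters, and toy data below are assumptions made for illustration.

```python
import torch

def laplace_kernel(X, Z, M, bandwidth=10.0):
    # Laplacian kernel with a Mahalanobis-style distance weighted by the feature matrix M
    diff = X[:, None, :] - Z[None, :, :]                         # (n, m, d)
    sq = torch.einsum("nmd,de,nme->nm", diff, M, diff)
    return torch.exp(-torch.sqrt(torch.clamp(sq, min=1e-12)) / bandwidth)

def fit_rfm(X, y, n_iters=3, reg=1e-3):
    n, d = X.shape
    M = torch.eye(d)                                              # start with no feature reweighting
    for _ in range(n_iters):
        K = laplace_kernel(X, X, M)
        alpha = torch.linalg.solve(K + reg * torch.eye(n), y)     # kernel ridge regression
        Xg = X.clone().requires_grad_(True)
        preds = laplace_kernel(Xg, X, M) @ alpha
        grads = torch.autograd.grad(preds.sum(), Xg)[0]           # per-sample gradients of the predictor
        M = grads.T @ grads / n                                    # average gradient outer product (AGOP)
    return M, alpha

# Toy check: the labels depend only on the first input coordinate,
# so the learned M should concentrate its weight there.
torch.manual_seed(0)
X = torch.randn(200, 10)
y = torch.sign(X[:, 0])
M, _ = fit_rfm(X, y)
print(torch.diag(M))
```

The top directions of the learned matrix serve as the features the predictor relies on; in the LLM setting, analogous directions in a model's internal representations are what get probed and steered.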
Since the algorithm was an effective, efficient approach for capturing features in general, the team wondered whether they could use it to root out representations of concepts in LLMs, which are by far the most widely used type of neural network and perhaps the least well-understood.
"We wanted to apply our feature learning algorithms to LLMs to, in a targeted way, discover representations of concepts in these large and complex models," Radhakrishnan says.
Converging on a concept
The team's new approach identifies any concept of interest within an LLM and "steers," or guides, a model's response based on this concept. The researchers looked for 512 concepts within five classes: fears (such as of marriage, insects, and even buttons); experts (social influencer, medievalist); moods (boastful, detachedly amused); a preference for locations (Boston, Kuala Lumpur); and personas (Ada Lovelace, Neil deGrasse Tyson).
The researchers then searched for representations of each concept in several of today's large language and vision models. They did so by training RFMs to recognize numerical patterns in an LLM that could represent a particular concept of interest.
A standard large language model is, broadly, a neural network that takes a natural language prompt, such as "Why is the sky blue?" and divides it into individual words, each of which is encoded mathematically as a list, or vector, of numbers. The model passes these vectors through a series of computational layers, creating matrices of numbers that, at each layer, are used to identify the words most likely to appear in a response to the original prompt. Eventually, the layers converge on a set of numbers that is decoded back into text, in the form of a natural language response.
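A minimal sketch of that pipeline, using the small open GPT-2 model from the Hugging Face transformers library as a stand-in (the paper works with much larger models):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)

# The prompt is split into tokens, each encoded as a vector of numbers.
inputs = tokenizer("Why is the sky blue?", return_tensors="pt")
outputs = model(**inputs)

# One hidden-state tensor per layer: (batch, tokens, hidden_dimension).
for i, h in enumerate(outputs.hidden_states):
    print(f"layer {i}: {tuple(h.shape)}")

# The final layer's numbers are decoded back into text, one likely next word at a time.
print(tokenizer.decode(outputs.logits[0, -1].argmax().item()))
```

It is these per-layer hidden states, the internal numerical representations of a prompt, that the RFMs are trained on.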
The team's approach trains RFMs to recognize numerical patterns in an LLM that could be associated with a specific concept. As an example, to see whether an LLM contains any representation of a "conspiracy theorist," the researchers would first train the algorithm to identify patterns among LLM representations of 100 prompts that are clearly related to conspiracies, and 100 other prompts that are not. In this way, the algorithm learns the patterns associated with the conspiracy theorist concept. The researchers can then mathematically modulate the activity of that concept by perturbing LLM representations with these identified patterns.
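A drastically simplified stand-in for that procedure (the paper's method uses RFMs rather than the mean-difference direction shown here) estimates a concept direction from labeled prompts at one layer, then nudges the model's hidden states along that direction during generation. The layer index, scale, and example prompts below are illustrative assumptions; real use would involve on the order of 100 prompts per side.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
LAYER, SCALE = 6, 8.0   # illustrative choices, not the paper's settings

def hidden_state(prompt):
    # Representation of the prompt's last token at the output of block LAYER
    ids = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        return model(**ids).hidden_states[LAYER + 1][0, -1]

# In practice these lists would each hold ~100 prompts; a few stand-ins here.
concept_prompts = ["They are hiding the truth about the moon landing.",
                   "Secret groups control everything behind the scenes."]
neutral_prompts = ["The recipe calls for two eggs and a cup of flour.",
                   "The train to the airport leaves every twenty minutes."]

direction = (torch.stack([hidden_state(p) for p in concept_prompts]).mean(0)
             - torch.stack([hidden_state(p) for p in neutral_prompts]).mean(0))
direction /= direction.norm()

def steer(module, inputs, output):
    # Add the concept direction to every hidden state leaving block LAYER.
    hidden = output[0] + SCALE * direction
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tokenizer("Explain the Blue Marble photo of Earth.", return_tensors="pt")
print(tokenizer.decode(model.generate(**ids, max_new_tokens=60)[0]))
handle.remove()
```

A positive scale amplifies the concept in the generated answer, while a negative value suppresses it, which is the "turning up or down" described in this article.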
The method can be applied to search for and manipulate any general concept in an LLM. Among many examples, the researchers identified representations of a "conspiracy theorist" and manipulated an LLM to give answers in that tone and perspective. They also identified and enhanced the concept of "anti-refusal," and showed that a model which would normally be programmed to refuse certain prompts instead answered them, for instance giving instructions on how to rob a bank.
Radhakrishnan says the approach can be used to quickly search for and minimize vulnerabilities in LLMs. It can also be used to enhance certain traits, personalities, moods, or preferences, such as emphasizing the concept of "brevity" or "reasoning" in any response an LLM generates. The team has made the method's underlying code publicly available.
"LLMs clearly have a lot of these abstract concepts stored within them, in some representation," Radhakrishnan says. "There are ways where, if we understand these representations well enough, we can build highly specialized LLMs that are still safe to use but really effective at certain tasks."
This work was supported, in part, by the National Science Foundation, the Simons Foundation, the TILOS institute, and the U.S. Office of Naval Research.
