Prompt engineering
Prompt engineering is the process of structuring or crafting an instruction in order to produce the best possible output from a generative artificial intelligence (AI) model.[1]
A prompt is natural language text describing the task that an AI should perform.[2] A prompt for a text-to-text language model can be a query, a command, or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style, choosing words and grammar,[3] providing relevant context, or describing a character for the AI to mimic.[1]
When communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output, such as "a high-quality photo of an astronaut riding a horse"[4] or "Lo-fi slow BPM electro chill with organic samples".[5][6]
History
In 2018, researchers first proposed that all previously separate tasks in natural language processing (NLP) could be cast as question answering over a context. In addition, they trained the first single, joint, multi-task model that would answer any task-related question, such as "What is the sentiment?", "Translate this sentence to German", or "Who is the president?"[7]
The AI boom saw an increase in the number of "prompting techniques" used to get the model to produce the desired outcome and avoid nonsensical output, a process characterized by trial and error.[8] After the release of ChatGPT in 2022, prompt engineering was soon seen as an important business skill, albeit one with an uncertain economic future.[1]
A repository for prompts reported that over 2,000 public prompts for around 170 datasets were available in February 2022.[9] In 2022, the chain-of-thought prompting technique was proposed by Google researchers.[10][11] In 2023, several text-to-text and text-to-image prompt databases were made publicly available.[12][13] The Personalized Image-Prompt (PIP) dataset, a generated image-text dataset categorized by 3,115 users, was also made publicly available in 2024.[14]
Text-to-text
Multiple distinct prompt engineering techniques have been published.
Chain-of-thought
According to Google Research, chain-of-thought (CoT) prompting is a technique that allows large language models (LLMs) to solve a problem as a series of intermediate steps before giving a final answer, improving performance on tasks that require multi-step reasoning.[10]
For example, given the question, "Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?", Google claims that a CoT prompt might induce the LLM to answer "A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9."
An example of a CoT prompt:[20]
Q: {question} A: Let's think step by step.
As originally proposed by Google,[10] each CoT prompt included a few Q&A examples, making it a few-shot prompting technique. However, according to researchers at Google and the University of Tokyo, simply appending the words "Let's think step by step"[20] was also effective, which makes CoT a zero-shot prompting technique. OpenAI claims that this prompt allows for better scaling, as a user no longer needs to formulate many specific CoT Q&A examples.[21]
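The two styles differ only in how the prompt string is assembled; a minimal sketch in plain string formatting (any model-calling code is out of scope here):

```python
def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append the trigger phrase, no worked examples."""
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot(question: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot CoT: prepend worked (question, step-by-step answer) pairs."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"
```

For the cafeteria question above, `zero_shot_cot` produces exactly the template shown, while `few_shot_cot` would prepend one or more fully worked Q&A pairs before it.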
In-context learning
In-context learning refers to a model's ability to temporarily learn from prompts. For example, a prompt may include a few examples for a model to learn from, such as asking the model to complete "maison → house, chat → cat, chien →" (the expected response being dog),[22] an approach called few-shot learning.[23]
In-context learning is an emergent ability of large language models: its efficacy increases at a different rate in larger models than in smaller ones, such that breaks in downstream scaling laws occur.[25][26] Because it involves no parameter updates, it is temporary and limited to the context of a single prompt.[27]
Self-consistency decoding
Self-consistency decoding performs several chain-of-thought rollouts, then selects the most commonly reached conclusion out of all the rollouts.[28][29]
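The idea reduces to sampling several rollouts and taking a majority vote over their final answers. A minimal sketch, where `sample_cot` is a hypothetical stand-in for one stochastic CoT rollout of some model:

```python
from collections import Counter

def self_consistency(question, sample_cot, n_rollouts=5):
    """Sample several CoT rollouts and return the most common final answer."""
    answers = [sample_cot(question) for _ in range(n_rollouts)]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer
```

With a sampler that answers "9" on two of three rollouts and "3" on the other, the vote settles on "9" even though individual rollouts disagree.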
Tree-of-thought
Tree-of-thought prompting generalizes chain-of-thought by generating multiple lines of reasoning in parallel, with the ability to backtrack or explore other paths. It can use tree search algorithms such as breadth-first, depth-first, or beam search to decide which partial lines of reasoning to expand next.[30]
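A minimal sketch of the search skeleton, assuming hypothetical `propose` (generates candidate next thoughts; in practice an LLM call) and `score` (rates a partial reasoning path) functions. It keeps a fixed-width beam, so weak branches are dropped, which plays the role of backtracking:

```python
def tree_of_thought(root, propose, score, depth=3, beam=2):
    """Breadth-first search over partial reasoning paths with beam pruning."""
    frontier = [root]
    for _ in range(depth):
        # Expand every surviving path by each proposed next thought.
        candidates = [path + [t] for path in frontier for t in propose(path)]
        if not candidates:
            break
        # Prune: keep only the `beam` highest-scoring partial paths.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

# Toy task: pick three digits, one per step, to build the largest number.
largest = tree_of_thought(
    [], propose=lambda path: [1, 2, 3] if len(path) < 3 else [],
    score=lambda path: int("".join(map(str, path)) or "0"))
```

In a real system, `propose` and `score` would both be prompts to the model; the toy task only exercises the search logic.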
Prompting to estimate model sensitivity
Research consistently demonstrates that LLMs are highly sensitive to subtle variations in prompt formatting, structure, and linguistic properties. Some studies have shown performance differences of up to 76 accuracy points across formatting changes in few-shot settings.[31] Linguistic features of a prompt, such as its morphology, syntax, and lexico-semantic properties, significantly influence its effectiveness and can meaningfully enhance task performance across a variety of tasks.[3][32] Clausal syntax, for example, improves consistency and reduces uncertainty in knowledge retrieval.[33] This sensitivity persists even with larger model sizes, additional few-shot examples, or instruction tuning.
To address sensitivity of models and make them more robust, several methods have been proposed. FormatSpread facilitates systematic analysis by evaluating a range of plausible prompt formats, offering a more comprehensive performance interval.[31] Similarly, PromptEval estimates performance distributions across diverse prompts, enabling robust metrics such as performance quantiles and accurate evaluations under constrained budgets.[34]
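In the spirit of FormatSpread (not its actual implementation), the sketch below evaluates one stubbed classifier under several prompt formats and reports the spread between worst and best accuracy, illustrating how a performance interval is obtained. The formats, data, and `brittle_model` are invented for illustration:

```python
def performance_interval(formats, dataset, model):
    """Accuracy under each candidate prompt format -> (min, max) interval."""
    accuracies = []
    for fmt in formats:
        correct = sum(model(fmt.format(x=x)) == y for x, y in dataset)
        accuracies.append(correct / len(dataset))
    return min(accuracies), max(accuracies)

# Stub model that only "understands" one of the two formats.
def brittle_model(prompt):
    return "positive" if prompt.startswith("Review:") else "negative"

data = [("great film", "positive"), ("loved every minute", "positive")]
formats = ["Review: {x}\nSentiment:", "Input: {x} ->"]
interval = performance_interval(formats, data, brittle_model)
```

Reporting the full interval, rather than the accuracy under a single hand-picked format, makes the evaluation robust to exactly the formatting sensitivity described above.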
Automatic prompt generation
Retrieval-augmented generation
Retrieval-augmented generation (RAG) is a technique that enables generative artificial intelligence models to retrieve and incorporate new information from external sources before generating a response.[35]
RAG improves large language models (LLMs) by incorporating information retrieval before response generation, so that answers can be grounded in up-to-date or domain-specific documents rather than in training data alone.[36]
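A minimal sketch of the retrieve-then-prompt pattern, using a toy bag-of-words similarity in place of the neural retriever a real RAG system would use; the document contents are invented:

```python
import math

def embed(text):
    """Toy bag-of-words vector; real systems use a neural text encoder."""
    words = text.lower().replace("?", "").replace(".", "").split()
    return {w: words.count(w) for w in set(words)}

def cosine(a, b):
    dot = sum(v * b.get(w, 0) for w, v in a.items())
    norm = lambda vec: math.sqrt(sum(v * v for v in vec.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def rag_prompt(query, documents, k=1):
    """Retrieve the k most relevant documents and prepend them as context."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = ["The Eiffel Tower is in Paris.",
        "Photosynthesis occurs in chloroplasts."]
prompt = rag_prompt("Where is the Eiffel Tower?", docs)
```

The assembled prompt is then sent to the LLM, which answers from the retrieved context rather than from its parameters alone.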
Graph retrieval-augmented generation

GraphRAG (coined by Microsoft Research) is a technique that extends RAG with the use of a knowledge graph (usually, LLM-generated) to allow the model to connect disparate pieces of information, synthesize insights, and holistically understand summarized semantic concepts over large data collections. It was shown to be effective on datasets like the Violent Incident Information from News Articles (VIINA).[37][38]
Earlier work showed the effectiveness of using a knowledge graph for question answering using text-to-query generation.[39] These techniques can be combined to search across both unstructured and structured data, providing expanded context and improved ranking.
Using language models to generate prompts
Large language models (LLMs) can themselves be used to compose prompts for other large language models.[40] The automatic prompt engineer algorithm uses one LLM to beam search over prompts for another LLM:[41][42]
- There are two LLMs: the target LLM and the prompting LLM.
- The prompting LLM is presented with example input-output pairs and asked to generate instructions that could have caused a model following those instructions to produce the outputs, given the inputs.
- Each generated instruction is used to prompt the target LLM, followed by each of the inputs. The log-probabilities of the outputs are computed and summed; this is the instruction's score.
- The highest-scoring instructions are given to the prompting LLM for further variations.
- The process repeats until a stopping criterion is reached, then outputs the highest-scoring instructions.
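The loop above can be sketched as follows. Here `propose_fn` stands in for the prompting LLM (generating or mutating candidate instructions) and `logprob_fn` for the target LLM's log-probability of an output; both are invented stubs where the real algorithm would make model calls:

```python
def automatic_prompt_engineer(examples, propose_fn, logprob_fn,
                              beam=2, rounds=3):
    """Beam search over candidate instructions, scored on example pairs."""
    def score(instruction):
        # Sum of target-LLM log-probabilities of each output given the input.
        return sum(logprob_fn(instruction, x, y) for x, y in examples)

    candidates = propose_fn(examples, seed=None)   # initial proposals
    for _ in range(rounds):
        best = sorted(candidates, key=score, reverse=True)[:beam]
        # Ask the prompting LLM for variations of the current best.
        candidates = best + [v for s in best for v in propose_fn(examples, seed=s)]
    return max(candidates, key=score)

# Invented stubs: one plausible and one bad instruction, plus a scorer
# that rewards instructions mentioning translation.
def propose_fn(examples, seed):
    if seed is None:
        return ["Translate the word to French", "Repeat the input"]
    return [seed + " exactly"]

def logprob_fn(instruction, x, y):
    return -1.0 if "Translate" in instruction else -10.0

best = automatic_prompt_engineer([("cat", "chat"), ("dog", "chien")],
                                 propose_fn, logprob_fn)
```

With real models, the scores would come from actual log-probabilities, so the search favors whichever instruction makes the target LLM most likely to reproduce the example outputs.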
CoT examples can be generated by LLMs themselves. In "auto-CoT", a library of questions is converted to vectors by a model such as BERT. The question vectors are clustered, and questions close to the centroid of each cluster are selected to obtain a subset of diverse questions. An LLM performs zero-shot CoT on each selected question, and the question together with its CoT answer is added to a dataset of demonstrations. These diverse demonstrations can then be added to prompts for few-shot learning.[43]
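A sketch of the selection step, with a tiny hand-rolled k-means over stand-in 2-D vectors (a real pipeline would embed the questions with a model such as BERT, then run zero-shot CoT on the selected questions):

```python
import random

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(vectors):
    return [sum(xs) / len(vectors) for xs in zip(*vectors)]

def kmeans(vectors, k, iters=10, seed=0):
    """Tiny k-means; returns the final centroids."""
    centroids = random.Random(seed).sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[min(range(k), key=lambda i: dist(v, centroids[i]))].append(v)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids

def select_diverse(questions, vectors, k):
    """Pick the question nearest each centroid: one demonstration per cluster."""
    return [min(zip(questions, vectors), key=lambda qv: dist(qv[1], c))[0]
            for c in kmeans(vectors, k)]

# Stand-in embeddings: two clearly separated groups of questions.
selected = select_diverse(["q1", "q2", "q3", "q4"],
                          [[0, 0], [0, 1], [10, 10], [10, 11]], k=2)
```

Selecting one question per cluster is what makes the resulting few-shot demonstrations diverse rather than redundant.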
Text-to-image
In 2022, text-to-image models like DALL-E 2, Stable Diffusion, and Midjourney were released to the public. These models take text prompts as input and use them to generate images.[44]
Demonstration of the effect of negative prompts on images generated with Stable Diffusion:
- Top: no negative prompt
- Centre: "green trees"
- Bottom: "round stones, round rocks"
Prompt formats
Early text-to-image models typically don't understand negation, grammar, and sentence structure in the same way as large language models, and may thus require a different set of prompting techniques.
A text-to-image prompt commonly includes a description of the subject of the art, the desired medium (such as digital painting or photography), style (such as hyperrealistic or pop-art), lighting (such as rim lighting or crepuscular rays), color, and texture.[48] Word order also affects the output of a text-to-image prompt. Words closer to the start of a prompt may be emphasized more heavily.[49]
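As an illustration of the composition just described (all component values invented), a prompt can be assembled subject-first so that the words likely to be weighted most heavily come earliest:

```python
def build_image_prompt(subject, medium=None, style=None,
                       lighting=None, color=None, texture=None):
    """Join the components in order of emphasis, subject first."""
    parts = [subject, medium, style, lighting, color, texture]
    return ", ".join(p for p in parts if p)

prompt = build_image_prompt("California poppies",
                            medium="colored pencil drawing",
                            color="bright orange")
```

Reordering the same components changes which ones the model emphasizes, which is why the subject is conventionally placed first.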
The Midjourney documentation encourages short, descriptive prompts: instead of "Show me a picture of lots of blooming California poppies, make them bright, vibrant orange, and draw them in an illustrated style with colored pencils", an effective prompt might be "Bright orange California poppies drawn with colored pencils".[45]
Artist styles
Some text-to-image models are capable of imitating the style of particular artists by name. For example, the phrase in the style of Greg Rutkowski has been used in Stable Diffusion and Midjourney prompts to generate images in the distinctive style of Polish digital artist Greg Rutkowski.[50] Famous artists such as Vincent van Gogh and Salvador Dalí have also been used for styling and testing.[51]
Non-text prompts
Some approaches augment or replace natural language text prompts with non-text input.
Textual inversion and embeddings
For text-to-image models, textual inversion performs an optimization process to create a new word embedding based on a set of example images. This embedding vector acts as a "pseudo-word" which can be included in a prompt to express the content or style of the examples.[52]
Image prompting
In 2023, Meta's AI research released Segment Anything, a computer vision model that can perform image segmentation by prompting. As an alternative to text prompts, Segment Anything can accept bounding boxes, segmentation masks, and foreground/background points.[53]
Using gradient descent to search for prompts
In "prefix-tuning",[54] "prompt tuning", or "soft prompting",[55] floating-point-valued vectors are searched directly by gradient descent to maximize the log-likelihood on outputs.
Formally, let $\mathbf{E} = \{\mathbf{e_1}, \dots, \mathbf{e_k}\}$ be a set of soft prompt tokens (tunable embeddings), while $\mathbf{X} = \{\mathbf{x_1}, \dots, \mathbf{x_m}\}$ and $\mathbf{Y} = \{\mathbf{y_1}, \dots, \mathbf{y_n}\}$ be the token embeddings of the input and output respectively. During training, the tunable embeddings, input, and output tokens are concatenated into a single sequence $\text{concat}(\mathbf{E}; \mathbf{X}; \mathbf{Y})$ and fed to the LLM. The losses are computed over the $\mathbf{Y}$ tokens; the gradients are backpropagated to prompt-specific parameters: in prefix-tuning, they are parameters associated with the prompt tokens at each layer; in prompt tuning, they are merely the soft tokens added to the vocabulary.[56]
More formally, this is prompt tuning. Let an LLM be written as $LLM(X) = F(E(X))$, where $X$ is a sequence of linguistic tokens, $E$ is the token-to-vector function, and $F$ is the rest of the model. In prompt tuning, one provides a set of input-output pairs $\{(X^i, Y^i)\}_i$, and then uses gradient descent to search for $\arg\max_{\tilde{Z}} \sum_i \log Pr[Y^i \mid \tilde{Z} \ast E(X^i)]$. In words, $\log Pr[Y^i \mid \tilde{Z} \ast E(X^i)]$ is the log-likelihood of outputting $Y^i$ if the model first encodes the input $X^i$ into the vector $E(X^i)$, then prepends the "prefix vector" $\tilde{Z}$ to it, then applies $F$. Prefix tuning is similar, but the "prefix vector" $\tilde{Z}$ is pre-appended to the hidden states in every layer of the model.[citation needed]
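A toy numeric illustration of prompt tuning (not prefix-tuning): the "model" below is just a frozen random matrix acting on a pooled embedding, standing in for $F$, and gradient descent updates only the soft prompt vector to raise the log-likelihood of one target token. All shapes and values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, vocab = 8, 5
W = rng.normal(size=(vocab, dim))   # frozen "rest of the model" (stand-in for F)
x_emb = rng.normal(size=dim)        # frozen input embedding (stand-in for E(X))
target = 3                          # desired output token Y

soft_prompt = np.zeros(dim)         # the only trainable parameters
lr = 0.1
losses = []
for _ in range(200):
    pooled = (soft_prompt + x_emb) / 2          # crude "concatenate then pool"
    logits = W @ pooled
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                        # softmax over the vocabulary
    losses.append(-np.log(probs[target]))       # NLL of the target token
    grad_pooled = W.T @ (probs - np.eye(vocab)[target])
    soft_prompt -= lr * grad_pooled / 2         # gradient flows only to the prompt
```

The loss decreases steadily even though `W` and `x_emb` never change: the frozen model is steered toward the target purely by the learned prompt vector.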
An earlier result uses the same idea of gradient-descent search, but is designed for masked language models like BERT, and searches only over token sequences rather than numerical vectors. Formally, it searches for $\arg\max_{\tilde{X}} \sum_i \log Pr[Y^i \mid \tilde{X} \ast X^i]$, where $\tilde{X}$ ranges over token sequences of a specified length.[57]
Limitations
While the process of writing and refining a prompt for an LLM or generative AI shares some parallels with an iterative engineering design process, such as discovering 'best principles' to reuse through reproducible experimentation, the learned principles and skills depend heavily on the specific model being used rather than generalizing across the entire field of prompt-based generative models. Such patterns are also volatile: seemingly insignificant prompt changes can produce significantly different results.[58][59] According to The Wall Street Journal in 2025, the job of prompt engineer was one of the hottest in 2023, but has become obsolete due to models that better intuit user intent and to company trainings.[60]
Prompt injection
Prompt injection is a family of related computer security exploits carried out by getting a machine learning model, which was trained to follow human-given instructions, to instead follow malicious instructions embedded in its input.[61][62]
References
- ^ a b c Genkina, Dina (March 6, 2024). "AI Prompt Engineering is Dead: Long live AI prompt engineering". IEEE Spectrum. Retrieved January 18, 2025.
- ^ Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya (2019). "Language Models are Unsupervised Multitask Learners" (PDF). OpenAI.
We demonstrate language models can perform down-stream tasks in a zero-shot setting – without any parameter or architecture modification
- ^ .
- ^ Heaven, Will Douglas (April 6, 2022). "This horse-riding astronaut is a milestone on AI's long road towards understanding". MIT Technology Review. Retrieved August 14, 2023.
- ^ Wiggers, Kyle (June 12, 2023). "Meta open sources an AI-powered music generator". TechCrunch. Retrieved August 15, 2023.
Next, I gave a more complicated prompt to attempt to throw MusicGen for a loop: "Lo-fi slow BPM electro chill with organic samples."
- ^ a b Mittal, Aayush (July 27, 2023). "Mastering AI Art: A Concise Guide to Midjourney and Prompt Engineering". Unite.AI. Retrieved May 9, 2025.
- arXiv:1806.08730.
- ISSN 2666-920X.
- ^ PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts. Association for Computational Linguistics. 2022.
- ^ arXiv:2201.11903.
- ^ Brubaker, Ben (March 21, 2024). "How Chain-of-Thought Reasoning Helps Neural Networks Compute". Quanta Magazine. Retrieved May 9, 2025.
- ^ Chen, Brian X. (June 23, 2023). "How to Turn Your Chatbot Into a Life Coach". The New York Times. ISSN 0362-4331. Retrieved August 16, 2023.
- ISBN 979-8-3503-5300-6.
- ^ Narang, Sharan; Chowdhery, Aakanksha (April 4, 2022). "Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance". ai.googleblog.com.
- ^ Dang, Ekta (February 8, 2023). "Harnessing the power of GPT-3 in scientific research". VentureBeat. Retrieved March 10, 2023.
- ^ Montti, Roger (May 13, 2022). "Google's Chain of Thought Prompting Can Boost Today's Best Algorithms". Search Engine Journal. Retrieved March 10, 2023.
- ^ "Scaling Instruction-Finetuned Language Models" (PDF). Journal of Machine Learning Research. 2024.
- ^ Wei, Jason; Tay, Yi (November 29, 2022). "Better Language Models Without Massive Compute". ai.googleblog.com. Retrieved March 10, 2023.
- ^ arXiv:2205.11916.
- ^ Dickson, Ben (August 30, 2022). "LLMs have not learned our language — we're trying to learn theirs". VentureBeat. Retrieved March 10, 2023.
- arXiv:2208.01066.
- arXiv:2005.14165.
- ^ arXiv:2206.07682.
In prompting, a pre-trained language model is given a prompt (e.g. a natural language instruction) of a task and completes the response without any further training or gradient updates to its parameters... The ability to perform a task via few-shot prompting is emergent when a model has random performance until a certain scale, after which performance increases to well-above random
- arXiv:2210.14891.
- ^ Musser, George. "How AI Knows Things No One Told It". Scientific American. Retrieved May 17, 2023.
By the time you type a query into ChatGPT, the network should be fixed; unlike humans, it should not continue to learn. So it came as a surprise that LLMs do, in fact, learn from their users' prompts—an ability known as in-context learning.
- arXiv:2208.01066.
Training a model to perform in-context learning can be viewed as an instance of the more general learning-to-learn or meta-learning paradigm
- arXiv:2203.11171.
- ^ a b Mittal, Aayush (May 27, 2024). "Latest Modern Advances in Prompt Engineering: A Comprehensive Guide". Unite.AI. Retrieved May 8, 2025.
- arXiv:2305.10601.
- ^ arXiv:2310.11324.
- .
- .
- arXiv:2405.17202.
- ^ "Why Google's AI Overviews gets things wrong". MIT Technology Review. May 31, 2024. Retrieved March 7, 2025.
- ^ "Can a technology called RAG keep AI models from making stuff up?". Ars Technica. June 6, 2024. Retrieved March 7, 2025.
- ^ Larson, Jonathan; Truitt, Steven (February 13, 2024), GraphRAG: Unlocking LLM discovery on narrative private data, Microsoft
- ^ "An Introduction to Graph RAG". KDnuggets. Retrieved May 9, 2025.
- arXiv:2311.07509.
- arXiv:2210.01848.
- arXiv:2211.01910.
- .
- arXiv:2210.03493.
- ^ Goldman, Sharon (January 5, 2023). "Two years after DALL-E debut, its inventor is "surprised" by impact". VentureBeat. Retrieved May 9, 2025.
- ^ a b "Prompts". docs.midjourney.com. Retrieved August 14, 2023.
- ^ "Why Does This Horrifying Woman Keep Appearing in AI-Generated Images?". VICE. September 7, 2022. Retrieved May 9, 2025.
- PMID 2403.
- ^ "Stable Diffusion prompt: a definitive guide". May 14, 2023. Retrieved August 14, 2023.
- ^ Diab, Mohamad; Herrera, Julian; Chernow, Bob (October 28, 2022). "Stable Diffusion Prompt Book" (PDF). Retrieved August 7, 2023.
Prompt engineering is the process of structuring words that can be interpreted and understood by a text-to-image model. Think of it as the language you need to speak in order to tell an AI model what to draw.
- ^ Heikkilä, Melissa (September 16, 2022). "This Artist Is Dominating AI-Generated Art and He's Not Happy About It". MIT Technology Review. Retrieved August 14, 2023.
- ^ Solomon, Tessa (August 28, 2024). "The AI-Powered Ask Dalí and Hello Vincent Installations Raise Uncomfortable Questions about Ventriloquizing the Dead". ARTnews.com. Retrieved January 10, 2025.
- arXiv:2208.01618.
Using only 3-5 images of a user-provided concept, like an object or a style, we learn to represent it through new "words" in the embedding space of a frozen text-to-image model.
- ^ Segment Anything (PDF). ICCV. 2023.
- S2CID 230433941.
In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning... Prefix-tuning draws inspiration from prompting
- S2CID 233296808.
In this work, we explore "prompt tuning," a simple yet effective mechanism for learning "soft prompts"...Unlike the discrete text prompts used by GPT-3, soft prompts are learned through back-propagation
- arXiv:2302.11521.
- S2CID 226222232.
- ^ Meincke, Lennart; Mollick, Ethan R.; Mollick, Lilach; Shapiro, Dan (March 4, 2025). "Prompting Science Report 1: Prompt Engineering is Complicated and Contingent". SSRN. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5165270
- ^ "'AI is already eating its own': Prompt engineering is quickly going extinct". Fast Company. May 6, 2025.
- ISSN 0099-9660. Retrieved May 7, 2025.
- ^ Vigliarolo, Brandon (September 19, 2022). "GPT-3 'prompt injection' attack causes bot bad manners". The Register. Retrieved February 9, 2023.
- ^ "What is a prompt injection attack?". IBM. March 26, 2024. Retrieved March 7, 2025.