Prompt Engineering
What it is
Prompt engineering is the craft of designing input text (prompts) to elicit specific, high-quality outputs from language models. Effective prompts combine a task description, context, examples, and formatting guidance. With large models such as GPT-3 and GPT-4, prompt engineering often outperforms fine-tuning when quick adaptation to a new task is needed.
[illustrate: Three progressively better prompts: basic, with instructions, with examples; corresponding model outputs improving in quality]
How it works
Effective prompts typically include:
- Task description: clearly state what you want
  - “Summarize the following text in one sentence:”
  - “Translate from English to French:”
- Context/background: provide necessary information
  - Domain-specific context
  - Constraints on the output format
  - Role definition (“You are a helpful assistant…”)
- Examples (few-shot): demonstrate input-output patterns
  - 1–5 examples usually help significantly
  - More examples for complex tasks
- Output format: guide the structure of the response
  - “Output JSON: {"name": …, "age": …}”
  - “Answer only with ‘yes’ or ‘no’”
- Chain of thought: for reasoning tasks, ask the model to explain its steps
  - “Let’s think step-by-step…”
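The components above can be assembled programmatically. A minimal sketch in Python, where `build_prompt` and its parameter names are illustrative rather than any standard API:

```python
def build_prompt(task, context="", examples=None, output_format=""):
    """Assemble a prompt from the standard components:
    context, task description, few-shot examples, output format."""
    parts = []
    if context:
        parts.append(context)           # role / background
    parts.append(task)                  # clear task description
    for inp, out in (examples or []):   # few-shot demonstrations
        parts.append(f"{inp} → {out}")
    if output_format:
        parts.append(output_format)     # structure guidance
    return "\n".join(parts)

prompt = build_prompt(
    task="Translate the following words from English to French.",
    context="You are a helpful translation assistant.",
    examples=[("cat", "chat"), ("house", "maison")],
    output_format="Answer with the French word only.",
)
print(prompt)
```

Keeping the components as separate arguments makes it easy to test each one's contribution: drop the examples, or swap the output-format line, and compare the model's responses.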
Example
# Weak prompt:
"Translate: dog"
# Better prompt:
"Translate the following words from English to French.
Examples:
cat → chat
house → maison
Translate: dog"
# Best prompt (chain-of-thought):
"Translate from English to French. Think about:
1. The word's part of speech
2. Common French equivalent
3. Gender (masculine/feminine) in French
Examples:
cat (noun, animal) → chat (masculine)
house (noun, building) → maison (feminine)
Now translate: dog"
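The few-shot pattern above can be turned into a template so the same examples serve many inputs. A small sketch, with `FEW_SHOT` and `translation_prompt` as illustrative names:

```python
# Few-shot template; {word} is the only substitution slot.
FEW_SHOT = """Translate the following words from English to French.
Examples:
cat → chat
house → maison
Translate: {word}"""

def translation_prompt(word: str) -> str:
    """Fill the template for one input word."""
    return FEW_SHOT.format(word=word)

print(translation_prompt("dog"))
```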
Variants and history
Prompt engineering emerged with GPT-3 (2020), when researchers realized that scale enables few-shot learning. Chain-of-thought prompting (Wei et al., 2022) improved reasoning performance. Self-consistency samples multiple reasoning paths and aggregates their final answers. Prompt injection and other adversarial attacks spurred robustness research. Automatic prompt optimization uses models to improve prompts. It remains an active area of research with no silver bullets: task-specific prompts often outperform general approaches.
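Self-consistency, mentioned above, reduces to a majority vote over sampled answers. A minimal sketch using a hard-coded list of sampled final answers in place of repeated model calls with nonzero temperature (which is where they would come from in practice):

```python
from collections import Counter

# Final answers parsed from several sampled reasoning paths;
# in practice each entry comes from one model call at temperature > 0.
sampled_answers = ["17", "17", "21", "17", "17"]

def self_consistency(answers):
    """Return the majority answer across sampled reasoning paths."""
    return Counter(answers).most_common(1)[0][0]

print(self_consistency(sampled_answers))  # prints "17"
```

The vote discards the reasoning chains themselves and keeps only the answers, so occasional faulty paths are outvoted as long as correct paths are more common.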
When to use it
Use prompt engineering when:
- Working with large pre-trained models (GPT-3+)
- Labeled data for fine-tuning is unavailable
- Rapid task adaptation is needed
- Reasoning or multi-step tasks benefit from examples
- Models are instruction-tuned
Prompt engineering is a skill and usually requires iteration. For simple tasks, basic prompts work; complex tasks benefit from examples and chain-of-thought. Outputs can be sensitive to small changes in wording, so test variations.