The difference between a mediocre AI response and a brilliant one often has nothing to do with the AI itself — it has everything to do with how you asked.
Prompt engineering is the practice of crafting inputs to AI language models that reliably produce high-quality, useful outputs. It's part communication skill, part systems thinking, and part structured experimentation.
The good news: you don't need to be an engineer to become good at it.
LLMs are remarkably capable but also remarkably literal. They don't know what you meant to ask — they only know what you actually asked.
A prompt is essentially a context-setting exercise. You're not just asking a question; you're shaping the task definition, the intended audience, the output format, and the constraints the model works within.
Master this, and the same underlying model will feel several generations more powerful.
The simplest form of prompting is zero-shot: you describe your task and ask the model to do it, without any examples.
Before:
Summarise this article.
[article text]
After:
Summarise the following article in 3 bullet points. Each bullet should be
one sentence. Focus on the main argument, the key evidence, and the
conclusion. Avoid jargon.
[article text]
The "after" version specifies format (3 bullets), length (one sentence each), priorities (argument, evidence, conclusion), and style (no jargon). The model has a much clearer target.
Common zero-shot mistakes include leaving the format unspecified, setting no length limit, and burying the actual request underneath the context.
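The "after" prompt above can be produced by a small helper, so the format, length, priorities, and style constraints are pinned down every time. This is a minimal sketch; the function name `build_summary_prompt` is illustrative, not from any particular library.

```python
def build_summary_prompt(article: str, bullets: int = 3) -> str:
    """Build a zero-shot summarisation prompt that specifies format,
    length, priorities, and style up front."""
    return (
        f"Summarise the following article in {bullets} bullet points. "
        "Each bullet should be one sentence. Focus on the main argument, "
        "the key evidence, and the conclusion. Avoid jargon.\n\n"
        f"{article}"
    )

prompt = build_summary_prompt("[article text]")
```

Centralising the instruction like this also makes it easy to iterate: tweak the wording once and every call benefits.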
Published research on chain-of-thought prompting has shown that simply adding "Let's think step by step" to a reasoning prompt can increase accuracy on maths and logic problems by 40–50%. This is chain-of-thought prompting at its simplest, and it works because it encourages the model to show its working rather than jump to an answer.
Sometimes telling the model isn't as effective as showing it. Few-shot prompting provides 2–5 examples of the input-output pattern you want before presenting your actual task.
Example — Sentiment classification:
Classify the sentiment of each customer review as Positive, Negative, or Neutral.
Review: "The delivery was fast but the product was broken."
Sentiment: Negative
Review: "Absolutely love this! Will buy again."
Sentiment: Positive
Review: "It arrived on time."
Sentiment: Neutral
Review: "The quality exceeded my expectations but the packaging was wasteful."
Sentiment: [complete this]
The model has now seen what format you want, how to handle mixed signals, and what granularity of analysis is expected. Without those examples, you might get a paragraph instead of a single word.
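The sentiment example above lends itself to a reusable builder that assembles the examples and the final query in a fixed pattern. A minimal sketch, assuming a hypothetical `build_sentiment_prompt` helper:

```python
# (review, label) pairs shown to the model before the real task.
EXAMPLES = [
    ("The delivery was fast but the product was broken.", "Negative"),
    ("Absolutely love this! Will buy again.", "Positive"),
    ("It arrived on time.", "Neutral"),
]

def build_sentiment_prompt(review: str) -> str:
    """Assemble a few-shot classification prompt: instruction,
    worked examples, then the unlabelled review."""
    header = ("Classify the sentiment of each customer review as "
              "Positive, Negative, or Neutral.\n\n")
    shots = "".join(
        f'Review: "{text}"\nSentiment: {label}\n\n'
        for text, label in EXAMPLES
    )
    # End on "Sentiment:" so the model completes with a single label.
    return header + shots + f'Review: "{review}"\nSentiment:'
```

Ending the prompt mid-pattern ("Sentiment:") nudges the model to reply with just the label rather than a paragraph.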
Few-shot prompting is most useful when the output format is hard to describe in words, when edge cases (like mixed-sentiment reviews) need demonstrating, and when you need consistent results across many inputs.
Chain-of-thought (CoT) prompting asks the model to reason through a problem step-by-step before giving its final answer. This dramatically improves performance on reasoning, maths, and multi-step tasks.
Without CoT:
A shop sells apples for £0.50 each. If I buy 7 apples and pay with a £5 note,
how much change do I get?
Answer: £1.50
(Sometimes correct, sometimes not — and you can't see the working)
With CoT:
A shop sells apples for £0.50 each. If I buy 7 apples and pay with a £5 note,
how much change do I get? Work through this step by step.
Step 1: Cost of 7 apples = 7 × £0.50 = £3.50
Step 2: Change = £5.00 - £3.50 = £1.50
Answer: £1.50
The visible reasoning lets you spot errors, and the act of writing the steps actually makes the model less likely to make them.
For very complex problems, you can push this further with "zero-shot CoT" — just add: "Think step by step, showing your reasoning clearly before giving your final answer."
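Because zero-shot CoT is just an appended instruction, it can be bolted onto any existing prompt. A small sketch (the `with_cot` name is illustrative):

```python
# The zero-shot chain-of-thought suffix quoted in the text above.
COT_SUFFIX = ("\n\nThink step by step, showing your reasoning clearly "
              "before giving your final answer.")

def with_cot(prompt: str) -> str:
    """Append a zero-shot chain-of-thought instruction to any prompt."""
    return prompt + COT_SUFFIX

question = ("A shop sells apples for £0.50 each. If I buy 7 apples "
            "and pay with a £5 note, how much change do I get?")
cot_prompt = with_cot(question)
```

This keeps the CoT wording in one place, so you can experiment with phrasings across your whole prompt library.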
One of the most powerful techniques is giving the model a specific role or persona. This activates relevant knowledge, sets the appropriate tone, and often improves accuracy in specialised domains.
Without role:
Explain photosynthesis.
With role:
You are a science teacher explaining photosynthesis to a 12-year-old who
loves video games. Use gaming analogies where helpful. Be enthusiastic
and encouraging.
The second prompt will produce a fundamentally different and (for that audience) far more useful explanation.
Role prompting works especially well for:
A complete system prompt pattern:
You are [role] with expertise in [domain].
Your audience is [description of user].
Your goal is to [specific objective].
Always [constraint 1].
Never [constraint 2].
Format your response as [format specification].
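The system prompt pattern above maps naturally onto a string template. A minimal sketch using Python's built-in `str.format` (the field names are my own, chosen to mirror the pattern):

```python
SYSTEM_TEMPLATE = """You are {role} with expertise in {domain}.
Your audience is {audience}.
Your goal is to {goal}.
Always {do}.
Never {dont}.
Format your response as {fmt}."""

# Filled in with the photosynthesis example from earlier in the section.
system_prompt = SYSTEM_TEMPLATE.format(
    role="a science teacher",
    domain="biology",
    audience="a 12-year-old who loves video games",
    goal="explain photosynthesis clearly",
    do="use gaming analogies where helpful",
    dont="use unexplained jargon",
    fmt="short, enthusiastic paragraphs",
)
```

Treating the system prompt as a template means each deployment only has to decide on the six slots, not rewrite the scaffolding.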
For complex, multi-stage tasks, a single prompt often isn't enough. Prompt chaining breaks the task into steps, using the output of one prompt as input to the next.
Example — Writing a blog post:
Instead of one monster prompt, chain the steps: brainstorm angles, choose the strongest and outline it, draft from the outline, then revise for tone and flow.
This produces significantly better results than asking for a finished blog post in one go — each step gets the model's full attention on a focused task.
Prompt chaining mirrors how human experts work. A good author doesn't just start writing — they brainstorm, outline, draft, and revise. A good engineer doesn't code from scratch — they design, prototype, test, refactor. Why would we expect AI to skip these steps and produce a masterwork on the first attempt?
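A chain like this is just sequential function calls, each feeding the previous output into the next prompt. A sketch with a placeholder `call_llm` function (in practice this would be an HTTP request to your provider's API):

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call. Returns canned text
    so this sketch runs without network access."""
    return f"<model output for: {prompt[:40]}...>"

def write_blog_post(topic: str) -> str:
    """Brainstorm -> outline -> draft -> revise, one prompt per stage."""
    ideas = call_llm(f"Brainstorm 5 angles for a blog post about {topic}.")
    outline = call_llm(f"Pick the strongest angle and outline it:\n{ideas}")
    draft = call_llm(f"Write a first draft from this outline:\n{outline}")
    final = call_llm(f"Revise this draft for tone and flow:\n{draft}")
    return final
```

A side benefit: you can inspect (or manually edit) the intermediate outputs between stages, which a single monolithic prompt never allows.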
❌ "Write a comprehensive guide to machine learning including history, techniques, applications, tools, career paths, and future trends."
✅ Break it into focused prompts, one topic at a time.
❌ "Don't be too technical. Don't write too much. Don't include jargon."
✅ "Write at a Year 9 level, in 200 words, using everyday language." — tell it what TO do.
❌ "List the pros and cons of electric vehicles."
✅ "Create a table comparing electric vs petrol vehicles. Use columns: Factor, Electric, Petrol. Include 6 rows covering cost, environment, range, charging, maintenance, and resale value."
❌ "Rewrite this to be better."
✅ "This paragraph is from a technical report aimed at non-technical executives. Rewrite it to remove jargon, reduce length by 30%, and add a one-sentence summary at the start."
The best prompt engineers iterate. If the first response isn't quite right, tell the model exactly what to change: "The tone is too formal — make it more conversational. And move the statistics to a sidebar rather than the main text."
For programmatic use (APIs, workflows, pipelines), you often need structured output. Simply ask for it explicitly:
Analyse the following job description and return a JSON object with this
exact structure:
{
  "required_skills": ["skill1", "skill2"],
  "nice_to_have_skills": ["skill3"],
  "seniority_level": "junior|mid|senior",
  "remote_friendly": true|false,
  "key_responsibilities": ["resp1", "resp2", "resp3"]
}
Job description:
[text here]
Modern LLMs are remarkably good at this. Many APIs now offer a "JSON mode" that enforces valid JSON output, making this even more reliable.
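On the consuming side, it pays to parse defensively: some models wrap their JSON in markdown fences, and fields occasionally go missing. A minimal validation sketch (the function name and field set mirror the job-description example above):

```python
import json

EXPECTED_FIELDS = {"required_skills", "nice_to_have_skills",
                   "seniority_level", "remote_friendly",
                   "key_responsibilities"}

def parse_job_analysis(raw: str) -> dict:
    """Parse the model's reply and check that every requested
    field is present. Strips markdown code fences if present."""
    text = raw.strip().removeprefix("```json").removesuffix("```").strip()
    data = json.loads(text)
    missing = EXPECTED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"Model omitted fields: {sorted(missing)}")
    return data
```

If the provider offers a JSON mode or structured-output option, use it as well; the validation then becomes a cheap safety net rather than the primary defence.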
Prompt engineering is a skill that compounds. Start with these habits: specify format and length in every prompt, show examples when the format is hard to describe, ask for step-by-step reasoning on anything tricky, and refine responses with targeted feedback rather than starting over.
The models are improving rapidly, but so is the ceiling of what a skilled prompt engineer can extract from them. The fundamentals here — clarity, specificity, context, iteration — will remain valuable regardless of which model you're using.