
The Beginner’s Guide to Prompt Engineering: Part 1 – Understanding Prompt Structure and Terminology
This is part 1 of a two-part blog series on Prompt Engineering. In this first instalment, I want to focus on the basic structure of prompts and provide a glossary of terms that people use when talking about prompts.
Introduction
We recommend that you write your prompts in Markdown: it is an easy-to-understand format that LLMs recognise, and it keeps your prompts consistent in both look and feel. If you are not familiar with Markdown, I have written a short guide to the main elements you are likely to need; click on this link to read more about Markdown.
Prompt Engineering is the art of crafting effective prompts that elicit the desired response from language models like OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s Gemini, to name just a few. It’s a crucial skill for anyone looking to get the most out of these powerful AI assistants.
Whether you’re a writer, marketer, developer, or just someone looking to harness the power of AI, understanding the fundamentals of prompt engineering can unlock a new world of possibilities. In this blog post, we’ll dive deep into the core components of a prompt and demystify the key terminology you need to know.
The Anatomy of a Prompt

At its most basic level, a prompt is simply a piece of text that is input into a language model to generate a response. However, there’s a lot more that goes into crafting an effective prompt than meets the eye. Let’s break down the key elements:
Context
The context sets the stage for your prompt. It provides the language model with the necessary background information to understand the task at hand and formulate an appropriate response. This could include details about the subject matter, the intended audience, the desired tone, or any other relevant contextual factors.
For example, a prompt for writing a blog post about prompt engineering might start with something like:
“The Assistant is a prompt engineering expert tasked with writing a blog post that introduces the basics of prompt engineering to a general audience. The blog post should be informative, engaging, and approximately 1,000 words long.”
Traits
Traits are specific instructions or attributes that you want the language model to embody or emulate in its response. These could include things like:
– Tone (e.g., formal, casual, empathetic)
– Writing style (e.g., concise, verbose, conversational)
– Personality (e.g., authoritative, quirky, sympathetic)
– Knowledge level (e.g., expert, beginner, intermediate)
For example, you might instruct the model to:
“Write the blog post in a friendly, conversational tone that is easy for a beginner to understand.”
Assistant
It is good practice to refer to the AI as the Assistant. This keeps the naming convention consistent throughout your prompt and makes the prompt itself clearer and more concise.
Human
It is good practice to refer to the user as the Human. As with the Assistant, this keeps the naming convention consistent throughout your prompt and makes the prompt itself clearer and more concise.
Task
The task is the specific action or objective that you want the language model to perform. This is the core of your prompt and should be clear, concise, and unambiguous.
For example, a task might be:
“Write a 1,000-word blog post that introduces the basics of prompt engineering to a general audience.”
Output
The output is the desired result of the prompt. This could be a written text, a set of bullet points, a series of questions, or any other format that fits the task at hand.
Continuing the example, the output for a blog post prompt might be:
The output will use Markdown and be formatted as follows:
1. Blog post title
2. Blog post introduction (200-300 words)
3. Blog post body (700-800 words)
4. Blog post conclusion (100-150 words)
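The four components above (context, traits, task, and output format) can be assembled programmatically into a single prompt. Here is a minimal sketch in Python; the section headings and all of the wording are illustrative, not a fixed standard:

```python
# Assemble a prompt from the four components discussed above.
# Each component is a plain string; Markdown headings keep the
# sections visually distinct, as recommended earlier in this post.

context = (
    "The Assistant is a prompt engineering expert tasked with writing "
    "a blog post that introduces prompt engineering to a general audience."
)
traits = "Write in a friendly, conversational tone that a beginner can follow."
task = (
    "Write a 1,000-word blog post that introduces the basics of "
    "prompt engineering to a general audience."
)
output_format = (
    "The output will use Markdown and be formatted as follows:\n"
    "1. Blog post title\n"
    "2. Blog post introduction (200-300 words)\n"
    "3. Blog post body (700-800 words)\n"
    "4. Blog post conclusion (100-150 words)"
)

# Join the labelled sections with blank lines between them.
prompt = "\n\n".join([
    f"## Context\n{context}",
    f"## Traits\n{traits}",
    f"## Task\n{task}",
    f"## Output\n{output_format}",
])

print(prompt)
```

Keeping the components in separate variables like this makes it easy to swap one out (a different tone, a different word count) without rewriting the whole prompt.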
Prompt Engineering Terminology Glossary

Now that we’ve covered the core components of a prompt, let’s dive into some of the common terms and concepts used in the world of prompt engineering:
– Prompt Tuning: The process of iteratively refining and optimising a prompt to improve the quality and relevance of the language model’s response.
– Prompt Injection: A class of attack in which instructions embedded in user input or other untrusted content override or subvert the original prompt, causing the model to behave in unintended ways.
– Prompt Chaining: Combining multiple prompts together to achieve a more complex or multi-step task.
– Prompt Template: A pre-designed prompt structure that can be customised and reused for different scenarios.
– Prompt Encoding: The way in which a prompt is represented and encoded for input into the language model.
– Prompt Hacking: The practice of finding creative or unconventional ways to manipulate prompts to achieve unexpected or desired results.
– Prompt Engineering: The overall discipline of designing and refining prompts to get the best possible outputs from language models.
– Prompt Pollution: The inclusion of unnecessary or irrelevant information in a prompt, which can negatively impact the language model’s response.
– Prompt Hallucination: When a language model generates content that is not accurately grounded in the prompt or the provided information.
– Prompt Factuality: The extent to which a language model’s response is based on factual information rather than speculation or imagination.
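One of these terms, the prompt template, is straightforward to illustrate in code. The sketch below uses Python string formatting; the placeholder names and wording are my own, chosen for illustration:

```python
# A reusable prompt template: a pre-designed structure with
# placeholders that can be customised for different scenarios.
# Placeholder names ({role}, {deliverable}, etc.) are illustrative.

TEMPLATE = (
    "The Assistant is a {role} tasked with writing a {deliverable} "
    "for a {audience} audience.\n\n"
    "Write in a {tone} tone and keep the output to roughly {length} words."
)

def render(role, deliverable, audience, tone, length):
    """Fill in the template's placeholders and return the finished prompt."""
    return TEMPLATE.format(
        role=role,
        deliverable=deliverable,
        audience=audience,
        tone=tone,
        length=length,
    )

# Reuse the same template for a specific scenario.
prompt = render(
    role="prompt engineering expert",
    deliverable="blog post",
    audience="general",
    tone="friendly, conversational",
    length=1000,
)
print(prompt)
```

The same template can then be rendered again with different values, which is exactly what makes templates useful: the structure is fixed once, and only the details change per scenario.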
By understanding these key terms and concepts, you’ll be well on your way to becoming a proficient prompt engineer, capable of harnessing the power of language models to achieve your goals.
Conclusion
In this introductory guide, we’ve covered the fundamental components of a prompt and explored the key terminology used in the world of prompt engineering. Remember, effective prompt engineering is all about understanding the language model you’re working with, setting clear expectations, and iteratively refining your prompts to get the best possible results.
Stay tuned for the second part of this series, where we’ll dive deeper into advanced prompt engineering techniques and strategies to take your AI-assisted workflows to the next level.
Before I go: a tip for when you are prompting.
A useful habit to adopt is adding a line to your prompt asking the AI to produce three versions of the output. This gives you a much broader spectrum of responses from the Generative AI and a better feel for how well your engineered prompt is working.
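In practice, this is just one extra line appended to whatever prompt you already have. A quick sketch, where the example prompt and the exact wording of the request are mine:

```python
# Start with any existing prompt...
prompt = "Write a tagline for a coffee shop that roasts its own beans."

# ...and append a line asking for three labelled versions of the output.
# The wording here is illustrative; phrase it however suits your prompt.
prompt += (
    "\n\nProduce 3 distinct versions of the output, "
    "labelled Version 1, Version 2, and Version 3."
)

print(prompt)
```

Labelling the versions explicitly makes it easy to refer back to a specific one ("expand on Version 2") in a follow-up message.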
See you in part 2.

