Prompt Engineering: Key Techniques for Better AI Responses
Written by Angelo Consorte | Published on November 17, 2024
What Is Prompt Engineering?
Prompt engineering, in the context of AI, is the process of designing and refining instructions (inputs) to obtain the most accurate, relevant, and useful responses (outputs).
Prompt engineering is like giving directions to a driver. If you provide clear, specific directions, you are more likely to reach your destination quickly and without wrong turns. If the directions are vague, the driver might get lost or take longer than needed. Similarly, well-crafted prompts guide AI to produce precise, useful responses, while poorly structured ones can lead to confusion or irrelevant results.
Why Is Prompt Engineering Relevant?
To understand the importance of prompt engineering, we first need to grasp how Large Language Models (LLMs) work. LLMs are trained using vast amounts of data from the internet. In simple terms, they work by predicting the next token (a unit of text, which could be a word or part of a word; for example, “chatbot” could be split into tokens: “chat” and “bot”). After being fine-tuned, they can respond to instructions, acting like an assistant to users.
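To see tokenization in practice, here is a minimal sketch using OpenAI's open-source tiktoken library (an assumption on my part; other models use different tokenizers and may split the same text differently):

```python
import tiktoken  # pip install tiktoken

# cl100k_base is the encoding used by several OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("chatbot")
pieces = [enc.decode([tid]) for tid in token_ids]
print(token_ids, pieces)  # typically two tokens, roughly "chat" + "bot"
```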
For every prompt, the model assigns a probability to each possible next token and typically selects the most probable one, based on patterns learned from its training data. Those learned patterns are stored as weights called parameters, and the parameter count is a common measure of an LLM's size. For example, GPT-4 is estimated to have around 1 trillion parameters, Llama 3.1 has 405 billion parameters, and Gemini 1.5 Pro is also estimated at around 1 trillion parameters.
We are talking about vast amounts of data!
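The core mechanic, scoring candidate next tokens and picking the most probable, can be shown with a toy sketch (the candidate tokens and scores below are made up purely for illustration):

```python
import math

# Made-up scores ("logits") a model might assign to candidate next tokens
# after the prompt "The capital of France is".
logits = {" Paris": 9.1, " Lyon": 4.2, " the": 2.7}

# Softmax converts raw scores into a probability distribution.
total = sum(math.exp(score) for score in logits.values())
probs = {token: math.exp(score) / total for token, score in logits.items()}

# Greedy decoding: choose the most probable next token.
print(max(probs, key=probs.get))  # -> " Paris"
```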
Additionally, LLMs are not search engines like Google or Bing, which match keywords in user queries to content online. They are closer to reasoning engines: rather than retrieving existing documents, they generate responses by applying patterns learned during training. Therefore, crafting clear prompts using the right techniques can significantly enhance the utility of these models for everyday tasks.
Best Practices
Having covered why prompt engineering matters, let's dive into the techniques themselves. Here is a list of best practices for crafting a good prompt:
1. Clarity and Context
- Grammar: The foundation of clarity is proper grammar and spelling. Poor grammar or misspelled words can lead to confusion, misinterpretation, or entirely different responses.
- Straightforwardness: Keep your prompt to the point. Avoid unnecessary filler words or complex sentence structures that may obscure your main request. For example, instead of saying, “Can you, like, tell me what happened during the event that occurred last week?”, you could simply say, “What happened during the event last week?”
- Context: A good prompt does not just ask for an answer, it provides the context needed to generate a meaningful response. For instance, if you’re asking an AI to help you write a social media post, you could say, “Write a Facebook post for a local coffee shop’s new seasonal drink, targeting young adults aged 18-25, highlighting the drink’s unique flavors and offering a limited-time discount.” The clearer and more detailed the context, the more relevant and useful the response will be.
- Specificity: The more specific you can be with your prompt, the better. Vague or general prompts lead to responses that require a lot of back-and-forth to fine-tune. Specificity eliminates ambiguity; the sketch after this list contrasts a vague prompt with a specific one.
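As a rough sketch of how specificity plays out in practice, here are a vague and a specific prompt sent through the OpenAI Python SDK (an assumption; any chat-capable model and client would work the same way, and the model name is just an illustrative choice):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague = "Write a social media post about coffee."
specific = (
    "Write a Facebook post for a local coffee shop's new seasonal drink, "
    "targeting young adults aged 18-25, highlighting the drink's unique "
    "flavors and offering a limited-time discount."
)

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

Running both makes the difference obvious: the vague prompt returns a generic post, while the specific one comes back on-audience and on-message with far less follow-up needed.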
2. Iterate and Refine
- Refining: The more complex the task, the more complex the prompt. A first attempt at a prompt for a complex task often produces a response that is not particularly useful, which is why refining and iterating are such helpful practices.
- Implement Feedback: Your initial prompt doesn’t need to be perfect, but it should be a solid starting point. After receiving a response, carefully assess its quality. Does it answer the question fully? Is the information relevant? Based on feedback from the initial response, refine your prompt to be more specific.
- Expand Output: Finally, iteration can be used to fine-tune the outputs. After receiving a response, you might ask the AI to clarify certain points, expand on ideas, or rewrite sections in a different style; the sketch after this list shows one refinement round.
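A minimal sketch of one refinement round, again assuming the OpenAI Python SDK: the first response is kept in the conversation and a follow-up instruction narrows it, rather than starting over from scratch.

```python
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # illustrative model choice
messages = [{"role": "user", "content": "Summarize the benefits of remote work."}]

draft = client.chat.completions.create(model=model, messages=messages)
draft_text = draft.choices[0].message.content

# Keep the draft in context and refine it, instead of rewriting the prompt blind.
messages += [
    {"role": "assistant", "content": draft_text},
    {"role": "user", "content": "Good start. Focus only on benefits for small "
                                "companies and keep it under 100 words."},
]
refined = client.chat.completions.create(model=model, messages=messages)
print(refined.choices[0].message.content)
```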
3. Steps and Time
- Providing Steps: Providing clear steps can be helpful for two reasons: it gives the LLM a reliable guideline to follow, and it allows the model more time to process information without rushing to a conclusion.
- Time to Think: Instructing the model to take its time to reach a solution is good practice, especially for more complex instructions, as it explicitly allows the model to work through a longer reasoning process. The sketch below combines explicit steps and time-to-think in a single prompt.
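Here is a sketch of what those two practices can look like together; the customer email is invented for illustration:

```python
prompt = """Review the customer email below and work through these steps in order.
Take your time; reason through each step before giving the final answer.

Step 1: Summarize the complaint in one sentence.
Step 2: Identify the product mentioned.
Step 3: Classify the tone as positive, neutral, or negative.
Step 4: Draft a polite two-sentence reply.

Email:
\"\"\"My espresso machine arrived yesterday with a cracked water tank,
and I haven't heard back from support.\"\"\"
"""
print(prompt)
```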
4. Prompt Delimiters
- Signs and Symbols: Using prompt delimiters is another technique that helps us communicate with the model. Symbols such as triple quotes ("""), angle brackets (< >), slashes (///), or pipes (||) indicate to the model exactly which part of the text we are referring to.
- Communication: Many markup and scripting formats use delimiters to structure information for machines. Even though LLMs understand natural language well, delimiters still help separate instructions from the content they apply to, as the sketch below shows.
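A small sketch of delimiters in use: triple quotes fence off the text the instruction applies to, so the model cannot confuse pasted content with the instruction itself.

```python
article = "LLMs generate text by predicting the next token..."  # pasted user text

prompt = (
    "Summarize the text between the triple quotes in one sentence.\n\n"
    f'"""{article}"""'
)
print(prompt)
```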
5. One- and Few-shot Learning
- One-shot learning: Involves providing a single example along with your prompt. This helps the AI understand the format or approach you expect for the response.
- Prompt: “Example: ‘Comfortable, high-quality backpack made with durable materials.’ Now, write a product description for running shoes.”
- AI output: “These running shoes offer comfort and speed, featuring lightweight material and excellent support.”
- Few-shot learning: Similar to one-shot learning, but with several examples (usually 2 to 5) so the AI can recognize the patterns or style you want in the output. A sketch for assembling such a prompt follows these examples.
- Prompt: “Here are some examples of catchy social media captions: ‘Chase your dreams, not just your coffee.’ ‘Sundays are for relaxation and good vibes.’ Now, create a caption for a fitness brand.”
- AI Output: “Push yourself to new limits—your future self will thank you!”
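As a minimal sketch, a few-shot prompt can be assembled programmatically, reusing the caption examples above:

```python
examples = [
    "Chase your dreams, not just your coffee.",
    "Sundays are for relaxation and good vibes.",
]

few_shot = "Here are some examples of catchy social media captions:\n\n"
few_shot += "\n".join(f"Caption: {caption}" for caption in examples)
few_shot += "\n\nNow, create a caption for a fitness brand.\nCaption:"
print(few_shot)
```

Ending the prompt with a trailing "Caption:" nudges the model to complete the pattern in the same format as the examples.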
6. Negative Prompting
- Negative Prompts: These explicitly tell the AI what to avoid in its response. By specifying what you don't want, you can guide the AI to steer clear of certain types of content, tone, or style (see the sketch after this example).
- Prompt: “Write a blog post about the benefits of exercise. Avoid mentioning weight loss or dieting.”
- AI output: “Exercise boosts energy, improves heart health, and enhances mental well-being.”
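A short sketch of a negative prompt; the "do not" and "avoid" lines carve out what the response must not contain:

```python
negative_prompt = (
    "Write a short blog post about the benefits of exercise.\n"
    "Do not mention weight loss or dieting.\n"
    "Avoid a salesy or clickbait tone."
)
print(negative_prompt)
```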
Conclusion
In conclusion, prompt engineering is a crucial skill when working with AI models: the simple practices and techniques above directly impact the quality and relevance of the responses you get.
By focusing on clarity, specificity, and iteration, you can significantly improve the effectiveness of your prompts, whether you’re crafting content, solving problems, or brainstorming ideas.
Furthermore, understanding the specific model you are working with and keeping ethical considerations in mind ensures that AI-generated outputs are both accurate and responsible. As LLMs continue to improve, prompt engineering skills will help you leverage them more effectively and ethically.