
5 Prompt Best Practices
23-10-19
3 Questions You Should Ask Yourself When Using LLMs

When you start working with a Large Language Model (LLM) such as ChatGPT, asking a few specific questions up front goes a long way toward using the technology smoothly and effectively.
Before you open the discussion, here are three main questions you should consider:
What Problem Am I Trying to Solve?
Define the specific problem or task you want the LLM to assist with. Whether it's generating creative content, automating customer support, or analyzing large datasets, having a clear understanding of your objective is fundamental. This clarity will guide the configuration and application of the LLM to best suit your needs.
How Can I Ensure Ethical and Responsible Use?
Consider the ethical implications of using the LLM. Understand the potential biases in the data it was trained on and how these biases might affect the responses generated. Establish guidelines and review processes to ensure that the AI-generated content aligns with your ethical standards. Additionally, think about privacy concerns and data security, especially if the LLM will be dealing with sensitive information.
What Data and Context Does the LLM Need?
LLMs like GPT rely heavily on the data they were trained on and the context provided during interactions. Understand the type of input data the model requires to generate accurate and relevant responses. Consider the format, quality, and quantity of data needed to achieve the desired outcomes. Also, think about the context you provide—clear and concise instructions can significantly influence the accuracy and relevance of the LLM's responses.
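For example, here is a minimal sketch of supplying data and context directly in a prompt, assuming the OpenAI Python SDK (v1 or later); the model name and the report excerpt are illustrative placeholders:

```python
# Minimal sketch: pass your own data as context so the model answers
# from it rather than from its training data. Assumes the OpenAI
# Python SDK v1+; the excerpt and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

excerpt = "Q3 revenue grew 12% year over year, driven by subscriptions."

response = client.chat.completions.create(
    model="gpt-4",  # substitute whichever chat model you use
    messages=[{
        "role": "user",
        "content": (
            "Using only the report excerpt below, summarize the key "
            "financial result in one sentence.\n\n"
            f"Excerpt: {excerpt}"
        ),
    }],
)
print(response.choices[0].message.content)
```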
By addressing these questions, you can establish a solid foundation for your LLM implementation.
Now that you are all set, here are the top five prompt best practices to help you get the most out of an LLM.
Be Clear and Specific:
Provide clear and concise instructions. Clearly specify the format you want the answer in, any constraints, and the context of the task. Ambiguity can lead to vague or unexpected responses.
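For instance, compare a vague request with one that pins down format, constraints, and context (both prompt strings below are illustrative):

```python
# Vague: leaves format, length, tone, and audience open to chance.
vague_prompt = "Tell me about our product launch."

# Specific: names the format (bullet points), the constraints (count
# and length), the tone, and the audience.
specific_prompt = (
    "Write a product launch announcement as three bullet points, "
    "each under 20 words, in a formal tone, aimed at IT managers "
    "evaluating B2B software."
)
```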
Use System and User Messages Effectively:
Use the system message to set the model's overall role, tone, and behavior. However, important instructions and task-specific details are often better placed in a user message.
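A minimal sketch of this split, assuming the OpenAI Python SDK (v1 or later); the model name and message contents are illustrative:

```python
# System message sets broad behavior; the user message carries the
# task and the instructions that must be followed precisely.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {
            "role": "user",
            "content": (
                "Explain what an API rate limit is in exactly two "
                "sentences, for a non-technical reader."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```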
Experiment with Temperature and Max Tokens:
Adjust the "temperature" parameter. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic. Additionally, set an appropriate "max tokens" value to limit the response length, especially if you're working within a character or word limit.
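Both parameters are plain keyword arguments in the chat completion call; here is a sketch using the OpenAI Python SDK (v1 or later) with illustrative values:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Suggest a name for a hiking app."}],
    temperature=0.2,  # low: focused, near-deterministic; try ~0.8 for brainstorming
    max_tokens=50,    # hard cap on the length of the reply, in tokens
)
print(response.choices[0].message.content)
```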
Iterative Refinement:
If the initial response is not what you wanted, iterate: take the model's output, add more context, and ask it to elaborate or revise. This iterative approach often leads to more accurate and refined responses.
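One refinement round might look like the sketch below, again assuming the OpenAI Python SDK (v1 or later) with illustrative prompts:

```python
from openai import OpenAI

client = OpenAI()

messages = [{"role": "user", "content": "Outline a blog post on prompt design."}]
first = client.chat.completions.create(model="gpt-4", messages=messages)
draft = first.choices[0].message.content

# Feed the model's own output back in, then refine with added context.
messages.append({"role": "assistant", "content": draft})
messages.append({
    "role": "user",
    "content": (
        "Good start. Expand the second section with two concrete "
        "examples, and keep the whole outline under 200 words."
    ),
})
second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```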
Experiment with Prompt Engineering Techniques:
Experiment with techniques like framing your request as a dialogue, asking the model to think step by step, or having it debate pros and cons before settling on an answer. These approaches can guide the model toward well-thought-out responses.
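For example, a step-by-step framing might look like this (the prompt text is illustrative):

```python
# Ask for explicit reasoning before the final answer, so the model
# commits to a conclusion only after weighing the options.
prompt = (
    "Should a small team build or buy a CI system? "
    "Think step by step: first list the pros and cons of each option, "
    "then weigh them, and only then state your recommendation."
)
```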
Remember that the effectiveness of these practices can vary based on the specific task you're performing. It's often good practice to experiment with different approaches and iterate based on the model's responses to find what works best for your use case.