There are countless web articles on writing effective chat prompts. These are only a few to start with; they essentially summarize what you will find elsewhere.
If you are interested in how probability is involved with the development of LLMs, as well as whether or not outputs are reproducible, below are some places to start reading.
White et al. (2023), in their article "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT," define a prompt and prompt engineering as:
A prompt is a set of instructions provided to an LLM that programs the LLM by customizing it and/or enhancing or refining its capabilities...Prompt engineering is the means by which LLMs are programmed via prompts
In Plain English: You can improve the performance of an AI chatbot by giving it instructions (in plain, declarative language) about what you want. You can give it examples to use as models, as well as other types of guidance. The term "prompt engineering" is used both when a developer is creating a chatbot (as in OpenAI's GPT Builder) and when a user is entering inputs into a chatbot (like ChatGPT in your browser). This can make searching for more information confusing, so include the name of the tool you are using (such as Claude, Copilot, or ChatGPT) in your searches.
After we give you one local example of how someone can use prompt engineering to influence how generative AI responds to users, we will introduce one framework to you that we have found useful when thinking about your day-to-day interactions with AI-driven chatbots and personal assistants. This is only one framework, and conversations about what makes "good" or "bad" prompts are constantly happening. Because of the way that generative AI works, it is very difficult to objectively label prompts in such a way. So, expect to find conflicting advice, and experiment to find what works best for your specific research!
Here is the set of instructions for a ChatGPT chatbot for the novel A Confederacy of Dunces by John Kennedy Toole, developed by librarian Vernon Leighton.
Instructions (input): This GPT serves as a scholarly guide to the novel 'A Confederacy of Dunces' by John Kennedy Toole. It provides insightful information, interpretations, and discussions about the themes, characters, and historical context of the novel. It also offers analyses and answers questions related to the text, helping users understand and appreciate the complexities and humor of the work. The GPT should always focus on accuracy, depth, and literary understanding, while maintaining an approachable and engaging tone. Avoid engaging with or endorsing the theory that John Kennedy Toole did not write the novel. Responses should be formal, reflecting the scholarly nature of the content.
Always search your knowledge before completing any prompt and use only that knowledge in your completion. Check EVERY document in your knowledge before you answer. MOST IMPORTANT: Only use information in your knowledge; Never use any information not contained in your knowledge. Do not use general information or knowledge. Double check your answer to be sure it is accurate. Do not add extra information to an answer.
As we can see, when developing his chatbot using the GPT builder, Vernon was able to provide the builder with a set of specific instructions. Those instructions direct the chatbot to answer using a specific cadence of language ("approachable and engaging" but also "formal, reflecting the scholarly nature of the content"). Note that it was important for Vernon to include instructions reminding the GPT to pull from its knowledge--that is, the body of information he had provided to it in the form of documents. Otherwise, the GPT might have drawn its answers from elsewhere.
When it comes to making chatbots, there are a number of other adjustments that can be made and that can impact how it responds to inputs from users.
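To make the developer-versus-user distinction concrete, here is a minimal sketch (ours, not from this guide or any one product's documentation) of how instructions like Vernon's are commonly packaged for a chat model: the developer's standing instructions go in a "system" message, and the end user's question goes in a "user" message. The payload shape, the model name, and the `temperature` adjustment shown are illustrative assumptions that mirror widely used chat APIs.

```python
# Illustrative sketch: how a developer's instructions and a user's question
# are commonly combined into one chat request. The payload shape is an
# assumption modeled on widely used chat APIs, not any one product's spec.

def build_chat_request(system_instructions, user_question, temperature=0.7):
    """Package developer instructions and a user question into one request.

    temperature is one of the "other adjustments" a builder can make:
    lower values make responses more predictable, higher more varied.
    """
    return {
        "model": "example-chat-model",  # hypothetical model name
        "temperature": temperature,
        "messages": [
            # The developer's standing instructions
            # (prompt engineering on the builder's side):
            {"role": "system", "content": system_instructions},
            # The end user's input
            # (prompt engineering on the user's side):
            {"role": "user", "content": user_question},
        ],
    }

request = build_chat_request(
    system_instructions=(
        "This GPT serves as a scholarly guide to the novel "
        "'A Confederacy of Dunces'. Responses should be formal."
    ),
    user_question="What are the major themes of the novel?",
)
print(request["messages"][0]["role"])  # system
```

Notice that the user never sees the system message: it shapes every response without appearing in the conversation, which is why Vernon's chatbot stays "formal" no matter what a user types.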
Now, say you are a user who wants to use AI-driven tools to find information and answer questions. Perhaps you are using ChatGPT, Claude, Copilot, or Gemini.
Even though it feels like you should be able to just "talk" to the chatbot, remember that these tools are pulling from a significant amount of information. If a tool is meant to be "general," meaning that it is designed to answer a variety of questions from different "domains" (areas of knowledge), then it was trained on text about many topics.
As a result, when you use these generalized tools, it is not uncommon for them to sound as though they do not understand the depth or nuance of your question. Or, they may give a result that is far more topic-specific than you need. To make it more complex, certain keywords and terms are the same across different topics, but mean completely different things depending on who you are talking to!
Generative AI tools are based on probabilistic predictions, so by giving the AI some context, you can quite literally increase the chance that it will return more relevant responses.
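The "probabilistic" point can be made concrete with a toy next-word sketch (our illustration; the words and scores are invented, and real models work at a vastly larger scale): the model assigns scores to candidate next words, converts them to probabilities, and samples one. Context that raises a relevant word's score literally raises its chance of being chosen, and the sampling step is one reason the same prompt can yield different answers.

```python
import math
import random

# Toy illustration with invented numbers: an LLM scores candidate next
# words, converts scores to probabilities (softmax), then samples one.

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

words = ["bank (river)", "bank (money)", "loan"]
scores_without_context = [2.0, 2.0, 1.0]  # ambiguous prompt: near-even odds
scores_with_context = [0.5, 3.0, 2.0]     # prompt also mentions "interest rates"

p_plain = softmax(scores_without_context)
p_context = softmax(scores_with_context)

# Adding financial context raises the probability of the financial sense:
print(round(p_plain[1], 2), round(p_context[1], 2))  # 0.42 0.69

# Sampling: the model picks a word at random, weighted by probability,
# which is why identical prompts can produce different outputs.
choice = random.choices(words, weights=p_context, k=1)[0]
```

This is why adding context to your prompt is not just polite framing: it shifts the underlying probabilities toward the meaning you intend.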
One framework that has been adopted by librarians and that can help you increase your odds of success is called the CLEAR Framework, which was developed by Leo S. Lo, a librarian at the University of New Mexico, and is presented in his articles "The Art and Science of Prompt Engineering" (2023) and "The CLEAR path: A framework for enhancing information literacy through prompt engineering" (2023). CLEAR is a mnemonic, which means that it is meant to help you remember something.
Below is a table which visualizes each part of the CLEAR framework:
| | Your prompt should be... | So... |
|---|---|---|
| C | Concise | Make statements brief. The longer and more convoluted the prompt, the more opportunities there are for misunderstanding. State clearly what task you are asking of the GPT. |
| L | Logical | If you want a response in a certain order, make sure that your questions follow a logical order. You can also ask a GPT to give you information in a particular order. |
| E | Explicit | Clearly communicate the scope of the response you want. |
| A | Adaptive | Try, try again. Be creative. Experiment! Consider giving the GPT more "framing" and "context." Even though this may feel contradictory with the idea of being "concise," that is OK. |
| R | Reflective | Reflect on what went well or poorly this time, and apply that knowledge next time. Consider how one GPT (e.g. ChatGPT) responds to you, and how others (e.g. Claude, Perplexity) respond differently. |
You do not need to be limited by this framework. It is one of many, so be flexible and explore how it does or does not work for you.
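As one illustration (ours, not an example from Lo's articles), the Concise, Logical, and Explicit parts of CLEAR can even be captured in a reusable template, which is handy if you find yourself asking the same kind of question repeatedly. The function name and field choices below are our own invention.

```python
# Illustrative sketch (ours, not from Lo's articles): a reusable template
# that nudges a prompt toward the C, L, and E parts of the CLEAR framework.

def clear_prompt(task, steps, scope):
    """Assemble a prompt: one brief task, steps in logical order, explicit scope."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return f"{task}\n{numbered}\nScope: {scope}"

prompt = clear_prompt(
    task="Summarize the attached article.",                  # Concise
    steps=[
        "State the main argument.",
        "List the key evidence.",
        "Note any limitations.",
    ],                                                       # Logical order
    scope="Three bullet points per step, plain language.",   # Explicit
)
print(prompt)
```

The Adaptive and Reflective parts of CLEAR are deliberately absent here: they describe how you revise and learn between attempts, not what goes into any single prompt.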
As with web articles, there is a huge community of videographers and content creators who focus on using generative AI!