
Artificial Intelligence (AI) for Research

What is prompt engineering?

White et al. (2023), in their article "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT," define a prompt and prompt engineering as:

A prompt is a set of instructions provided to an LLM that programs the LLM by customizing it and/or enhancing or refining its capabilities...Prompt engineering is the means by which LLMs are programmed via prompts

In Plain English: You can improve the performance of an AI chatbot by giving it instructions (in plain, declarative language) about what you want. You can give it examples to use as models, as well as other types of guidance. The term "prompt engineering" is used both when a developer is creating a chatbot (as in OpenAI's GPT Builder) and when a user is entering inputs into a chatbot (like ChatGPT in your browser). This double meaning can make searching for more information confusing, so include the name of the tool you are using (like Claude, Copilot, ChatGPT) in your searches.

After we give you one local example of how prompt engineering can influence how generative AI responds to users, we will introduce a framework that we have found useful for thinking about your day-to-day interactions with AI-driven chatbots and personal assistants. This is only one framework, and conversations about what makes prompts "good" or "bad" are ongoing. Because of the way generative AI works, it is very difficult to label prompts objectively. So, expect to find conflicting advice, and experiment to find what works best for your specific research!

Examples of Prompt Engineering At Work: Making a Chatbot

Here is the set of instructions for a ChatGPT chatbot for the novel A Confederacy of Dunces by John Kennedy Toole, developed by librarian Vernon Leighton. 

Instructions (input): This GPT serves as a scholarly guide to the novel 'A Confederacy of Dunces' by John Kennedy Toole. It provides insightful information, interpretations, and discussions about the themes, characters, and historical context of the novel. It also offers analyses and answers questions related to the text, helping users understand and appreciate the complexities and humor of the work. The GPT should always focus on accuracy, depth, and literary understanding, while maintaining an approachable and engaging tone. Avoid engaging with or endorsing the theory that John Kennedy Toole did not write the novel. Responses should be formal, reflecting the scholarly nature of the content.

Always search your knowledge before completing any prompt and use only that knowledge in your completion. Check EVERY document in your knowledge before you answer. MOST IMPORTANT: Only use information in your knowledge; Never use any information not contained in your knowledge. Do not use general information or knowledge. Double check your answer to be sure it is accurate. Do not add extra information to an answer.

As we can see, when developing his chatbot using the GPT Builder, Vernon was able to provide it with a set of specific instructions. Those instructions direct the chatbot to answer using a specific cadence of language ("approachable and engaging" but also "formal, reflecting the scholarly nature of the content"). Note that it was important for Vernon to include instructions reminding the GPT to pull from its knowledge--that is, the body of information he had provided to it in the form of documents. Otherwise, the GPT might have drawn its answers from elsewhere.

When it comes to making chatbots, there are a number of other adjustments that can be made and that can impact how it responds to inputs from users.
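To make this concrete, here is a minimal sketch of how standing instructions like Vernon's are wired up when a chatbot is built programmatically rather than through a visual builder. The message format below follows the widely used OpenAI-style "chat completions" convention (a list of role/content dictionaries); the model name and temperature value are placeholders for illustration, not recommendations.

```python
# Sketch: a developer's standing instructions travel in the "system" message,
# while each reader's question travels in a "user" message.

SYSTEM_INSTRUCTIONS = (
    "This GPT serves as a scholarly guide to the novel "
    "'A Confederacy of Dunces' by John Kennedy Toole. "
    "Responses should be formal, reflecting the scholarly nature of the content."
)

def build_request(user_question: str) -> dict:
    """Assemble a chat request payload combining developer and user input."""
    return {
        "model": "gpt-4o",      # placeholder model name
        "temperature": 0.2,     # lower = less random, more consistent answers
        "messages": [
            {"role": "system", "content": SYSTEM_INSTRUCTIONS},
            {"role": "user", "content": user_question},
        ],
    }

request = build_request("What role does Fortuna play in the novel?")
```

The temperature setting is one example of the "other adjustments" developers can make: it controls how much randomness the model uses when choosing its next words.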

Conversing with generative AI using the CLEAR Framework

Now, say you are a user who wants to use AI-driven tools to find information and answer questions. Perhaps you are using ChatGPT, Claude, Copilot, or Gemini.

Even though it feels like you should be able to just "talk" to the chatbot, remember that these tools are pulling from a significant amount of information. If a tool is meant to be "general," meaning it is designed to answer a variety of questions from different "domains" (areas of knowledge), then it was trained on text about many topics.

As a result, when you use these generalized tools, it is not uncommon for them to sound as though they do not understand the depth or nuance of your question. Or they may give a result that is far more topic-specific than you need. To make things more complex, certain keywords and terms are shared across different topics but mean completely different things depending on who you are talking to!

Generative AI tools are based on probabilistic predictions, so by giving the AI some context, you can quite literally increase the chance that it will return more relevant responses.

One framework that has been adopted by librarians and that can help you increase your odds of success is called the CLEAR Framework, which was developed by Leo S. Lo, a librarian at the University of New Mexico, and is presented in his articles "The Art and Science of Prompt Engineering" (2023) and "The CLEAR path: A framework for enhancing information literacy through prompt engineering" (2023). CLEAR is a mnemonic, which means that it is meant to help you remember something.

Below is a table which visualizes each part of the CLEAR framework:

C: Concise

Make statements brief. The longer and more convoluted the prompt, the more opportunities there are for misunderstanding. State clearly what task you are asking of the GPT.

Example phrasing:

  • Give me a list of...
  • Write a summary of...
  • Tell me...
  • Explain...
  • Provide...

L: Logical

If you want a response in a certain order, make sure that your questions follow a logical order. You can also ask a GPT to give you information in a particular order.

Example phrasing:

  • First...second...third...
  • Start with/by...then...

E: Explicit

Clearly communicate the scope of the response you want.

Consider:

  • How long do you want the response to be?
  • What tone do you want the answer to be in?
  • What audience do you want the response to be appropriate for? (e.g., age level, reading level, level of expertise)

A: Adaptive

Try, try again. Be creative. Experiment! Consider giving the GPT some more "framing" and "context." Even though this may feel contradictory with the idea of being "concise," that is OK. For example:

  • Give context about yourself: "I am a biologist/second grade teacher/market researcher..."
  • Ask the GPT to play a role: "Act as/pretend that you are..."
  • Give information to the GPT as a lead-up to provide it context. Explain the who, what, where, and why of your topic of interest before concisely asking the GPT to do a task.
  • Give examples to the GPT that demonstrate the way you want it to respond.

R: Reflective

Reflect on what went well or poorly this time, and apply what you learn next time. Consider how one GPT (e.g., ChatGPT) responds to you, and how others (e.g., Claude, Perplexity) respond differently.
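If it helps to see the framework operationalized, here is a hypothetical helper that assembles a prompt from CLEAR-style parts. The function name and its parameters (task, steps, length, audience, context, role) are our own labels for the framework's ideas, not anything defined in Lo's articles.

```python
# Sketch: compose a prompt from CLEAR-style ingredients.
def build_clear_prompt(task, steps=None, length=None, audience=None,
                       context=None, role=None):
    """Concise task; Logical ordering via numbered steps; Explicit scope
    (length, audience); Adaptive framing (context, role-play)."""
    parts = []
    if role:
        parts.append(f"Act as {role}.")     # Adaptive: role-play framing
    if context:
        parts.append(context)               # Adaptive: background context
    parts.append(task)                      # Concise: state the task plainly
    if steps:                               # Logical: ask for a fixed order
        parts.append("Answer in this order: " +
                     "; ".join(f"{i + 1}) {s}" for i, s in enumerate(steps)))
    if length:
        parts.append(f"Keep the answer under {length}.")  # Explicit: length
    if audience:
        parts.append(f"Write for {audience}.")            # Explicit: audience
    return " ".join(parts)

prompt = build_clear_prompt(
    task="Explain how vaccines train the immune system.",
    steps=["definition", "mechanism", "a common example"],
    length="200 words",
    audience="a high-school biology class",
)
```

The Reflective part has no code equivalent: it happens when you compare the responses you get and adjust which ingredients you include next time.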

You do not need to be limited by this framework. It is one of many, so be flexible and explore how it does or does not work for you. 

Watch the Basics

As with web articles, there is a huge community of videographers and content creators who focus on using generative AI!

  • This video is for when you want to improve how you use ChatGPT or Bard (now Gemini). Jeff Su uses a framework that focuses on Task, Context, Exemplars, Persona, Format, and Tone for prompts. They also refer to other frameworks, like STAR. This video is great if you want to add more context and framing to your prompts and need some help with brainstorming.
  • Compare that to this video, where IBM Developer Advocate Dan Kehn asks Suj Perepa, an IBM engineer, to explain four methods of prompt engineering. This nuts-and-bolts explanation gives insight into how prompt engineering is used in the design of generative AI tools. While you do not need to fully understand this material to be a better user of those tools, it helps to understand where the tools are pulling their responses from. Retrieval-Augmented Generation (RAG) is a term you are likely to see over and over again, especially if you ever use AI research assistants that are connected to academic databases.