
LLM Prompts 101

Lesson Overview

In this video, you will learn the fundamentals of prompting large language models (LLMs) using AirOps. The lesson covers key concepts such as model selection, temperature settings, and techniques for guiding the model's output, providing a solid foundation for working with LLMs effectively.

  • 0:00: Introduction to the LLM step in your workflow
  • 0:31: Anatomy of a call to an LLM
  • 3:13: Using the system prompt to provide context
  • 3:59: Guiding the model's response with user-assistant pairs
  • 5:35: Generating multiple outputs from a single LLM call using JSON mode

Key Concepts

Model Selection and Settings

AirOps provides access to a range of best-in-class models for content creation and everyday use cases, including GPT-4o, GPT-4o mini, and the Claude series from Anthropic. When selecting a model, consider factors such as model size, speed, and suitability for your specific task. Additionally, adjust the temperature setting to control the model's creativity and consistency.
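
To make these settings concrete, here is a minimal sketch of the same parameters using the OpenAI Python SDK directly; the model name and temperature value are illustrative, and an AirOps LLM step exposes the equivalent controls in its step settings:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Lower temperature (e.g. 0.0-0.3) favors consistent, repeatable output;
# higher values (e.g. 0.8-1.0) allow more varied, creative phrasing.
response = client.chat.completions.create(
    model="gpt-4o",   # illustrative model choice
    temperature=0.7,  # balance of creativity vs. consistency
    messages=[
        {"role": "user", "content": "Write a two-sentence product blurb for a hiking boot."}
    ],
)

print(response.choices[0].message.content)
```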

System Prompt and User-Assistant Pairs

The system prompt allows you to set the model's objective and provide background information. Use it to include context about the business, topic, or desired output format. User-assistant pairs enable you to teach the model how to respond in a specific way by providing mock conversations. This technique is useful for matching tone of voice, content length, or specific structures.
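
As an illustration, here is a rough sketch of how a system prompt and one mock user-assistant pair might be assembled in a single LLM call; the brand details and example exchange are invented for demonstration:

```python
from openai import OpenAI

client = OpenAI()

messages = [
    # System prompt: sets the objective and supplies background context.
    {"role": "system", "content": (
        "You write product descriptions for Trailhead Gear, an outdoor "
        "equipment brand. Keep the tone friendly and practical, and keep "
        "each description under 50 words."
    )},
    # Mock user-assistant pair teaching the desired length and tone.
    {"role": "user", "content": "Describe our ultralight tent."},
    {"role": "assistant", "content": (
        "Pitch camp anywhere with our ultralight tent: two-person comfort "
        "at under a kilogram, weatherproof, and packable to the size of a "
        "water bottle."
    )},
    # The real request; the model mimics the pattern shown above.
    {"role": "user", "content": "Describe our insulated hiking boots."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```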

Generating Multiple Outputs with JSON Mode

To obtain multiple pieces of content or items from a single LLM prompt, use JSON mode. Instruct the model to return the output in a JSON object with a specified structure. This ensures consistent output that can be easily processed downstream in your workflow. You can then use a text step or JSON step to extract and manipulate the desired keys from the JSON object.
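
A sketch of the same pattern, again via the OpenAI SDK: JSON mode constrains the model to return syntactically valid JSON, while the prompt itself must spell out the keys you expect. The key names here are made up for the example:

```python
import json

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    # JSON mode: the model is constrained to return a valid JSON object.
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": (
            "Return a JSON object with exactly these keys: "
            '"title" (string), "meta_description" (string), '
            '"headings" (array of 3 strings).'
        )},
        {"role": "user", "content": "Draft SEO fields for an article about composting at home."},
    ],
)

# Parse once, then pull out individual keys downstream, much as a
# text step or JSON step would in an AirOps workflow.
data = json.loads(response.choices[0].message.content)
print(data["title"])
print(data["headings"])
```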

Key Takeaways

  1. Experiment with different models, temperature settings, and prompts to optimize your LLM workflow for your specific use case.
    • "Over time, as models have got more capable and context windows have got bigger, there's just more room for optimization and things to try and experiment with here."
  2. Utilize the system prompt to provide the model with background information and set its objective.
    • "We really recommend using that to give a lot of background to the model: a lot of rich, descriptive content and key facts that you want it to be aware of before it goes and starts its task."
  3. Guide the model's response using user-assistant pairs to ensure consistent output in the desired format.
  4. Generate multiple outputs from a single LLM call by instructing the model to return a JSON object with a specified structure.
    • "This is a really great way to be able to consistently output multiple things in a single LLM call, which is a very, very common use case."
  5. Scale your LLM workflow by creating an AirOps grid, which allows you to run the workflow multiple times and view the individual outputs as columns (see the sketch after this list).
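
The grid itself is an AirOps feature, but the underlying idea can be sketched in plain Python: run the same workflow over many input rows and collect each output as a column. The inputs and the `run_workflow` stand-in below are hypothetical:

```python
import csv

def run_workflow(topic: str) -> dict:
    """Stand-in for one AirOps workflow run per input row.
    In practice this would be the JSON-mode LLM call shown earlier."""
    return {
        "title": f"A Guide to {topic}",
        "meta_description": f"Learn the basics of {topic}.",
    }

topics = ["composting", "rainwater harvesting", "urban beekeeping"]

# Each input becomes a row and each output key becomes a column,
# mirroring how a grid presents repeated workflow runs.
with open("grid.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["topic", "title", "meta_description"])
    writer.writeheader()
    for topic in topics:
        writer.writerow({"topic": topic, **run_workflow(topic)})
```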

Workflow Builder

Now that you understand Grids, it's time to create your own precise workflows that include data, AI calls, and human review.
