Prompts
The Prompts page in the admin interface is dedicated to creating, managing, and configuring the prompts used by AI Agents. These prompts are fundamental to how Large Language Models (LLMs) generate responses, classify user input, and perform other AI-driven tasks.
Prerequisites
Before you can manage Prompts, ensure the following conditions are met:
- You must be logged into the application.
- Your user role must have the necessary permissions (`owner`, `admin`, `editor`, or `prompts`).
- You must have an Account selected from the sidebar.
- You must have an Agent selected from the sidebar. Prompts are managed on a per-agent basis.
Permissions
Access to the Prompts page and its functionalities is controlled by user permissions. Users with `owner`, `admin`, `editor`, or `prompts` permissions can manage prompts.
Page Overview
The Prompts page is divided into three main sections:
- Add Prompt: An expandable form for creating new prompt configurations.
- Prompts List: A list of existing prompts associated with the selected agent, with filtering options.
- Assign Prompt: A tool to associate an existing prompt (from your account) with the current agent.
Creating a New Prompt
To create a new prompt:
- Navigate to the Prompts page.
- Ensure an account is selected, as prompts are associated with an account.
- Expand the "New Prompt" section.
- A form will appear with the following fields:
Helper Buttons
- Add Variable: Opens a dialog to create a new variable that can be used within your prompt configurations.
- Lookup Names: Opens a dialog to look up the exact names of other entities like Tools and Variables available for the selected agent.
Configuration Fields
Here is a breakdown of the fields in the "New Prompt" form:
- Prompt Name: A unique, descriptive name for the prompt.
- Type: Defines the prompt's purpose.
  - `classification`: Determines user intent.
  - `subprompt`: For recursive refinement and setting variables during a conversation.
  - `final prompt`: The last prompt before the agent gives a final response.
  - `sentiment`: For analyzing customer feedback.
  - `agent assist`: Provides suggestions to a live agent.
  - `summary`: Generates summaries of live agent conversations.
- Status: A toggle to enable (`True`) or disable (`False`) this prompt.
- Agent(s): A multi-select dropdown to associate this prompt with one or more agents.
- Model Provider: The LLM provider (e.g., Google, OpenAI).
- Prompt Model: The specific LLM model from the chosen provider.
- Response Variable: The variable where the main response from the LLM will be stored.
- Response Candidate Variable: The variable where alternative response candidates from the LLM will be stored.
- Response MIME Type: The expected MIME type of the LLM's response (e.g., `text/plain`, `application/json`).
- Body: The main content of the prompt. You can embed variables using the format `{var["variableName"]}`. User input is referenced directly as `{user_input}`.
- File URI Variable: A variable that holds the URL of a file (image, PDF, video, etc.) to be included with the prompt.
- File MIME Type: The MIME type of the file specified by the "File URI Variable".
- Temperature: (Slider 0.0-2.0) Controls the creativity of the response. Lower values are more deterministic.
- Top P: (Slider 0.0-1.0) Controls nucleus sampling.
- Top K: (Slider 1-40) Considers the next word from a set of the most likely words.
- Max Tokens: The maximum number of tokens the LLM should generate (e.g., max 8192 for some Gemini models).
- Thinking Budget: (For supported models) Tokens to use for the "thinking" process. A value of `0` turns off thinking.
- Candidates: The number of response candidates to generate.
- Tool(s): Optional. Select one or more tools for the LLM to use. These can be pre-built tools (like Google Search) or your own custom functions. See the "Using Tools with Prompts" section below for details.
- Response Schema: Optional. A JSON schema for the expected response structure.
- Safety Settings: A JSON list of safety settings for the LLM.
- Stop Sequences: A comma-separated list of sequences that, if generated, will cause the LLM to stop.
- Seed: A random seed for reproducible generation.
- Frequency Penalty: (Slider 0.0-1.0) Penalizes new tokens based on their existing frequency.
- Presence Penalty: (Slider 0.0-1.0) Penalizes new tokens based on whether they have already appeared.
Once all fields are configured, click Create Prompt.
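To illustrate how the placeholders in the Body field behave, here is a minimal sketch of the substitution. The `render_prompt` helper is hypothetical; the platform's actual rendering logic is internal.

```python
import re

def render_prompt(body: str, variables: dict, user_input: str) -> str:
    """Hypothetical sketch of placeholder substitution in a prompt Body.

    Replaces {var["name"]} with the variable's value and {user_input}
    with the user's latest message.
    """
    def replace_var(match: re.Match) -> str:
        return str(variables.get(match.group(1), ""))

    rendered = re.sub(r'\{var\["([^"]+)"\]\}', replace_var, body)
    return rendered.replace("{user_input}", user_input)

body = 'Hello {var["first_name"]}, you asked: {user_input}'
print(render_prompt(body, {"first_name": "Ada"}, "What is my balance?"))
# Hello Ada, you asked: What is my balance?
```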
Listing Prompts
Existing prompts for the currently selected account and agent are displayed in a list:
- Filter by Prompt Type: A dropdown allows filtering prompts by their type (e.g., "classification", "subprompt").
- Each prompt is shown in an expandable section.
- The display includes the prompt name, type, status (active/inactive), and associated agents.
- Expanding a prompt's section reveals detailed information:
- Provider and Model
- Body
- Tools and Safety Settings
- Response Variables and Thinking Budget
- Temperature, Top P, and Top K
- Max Tokens and MIME Types
- File URI and MIME Type
- Stop Sequences and Seed
- Each prompt entry has "Edit" and "Delete" buttons.
Editing an Existing Prompt
To modify an existing prompt:
- Click the "Edit" button next to the desired prompt in the list.
- The "Edit Prompt" form will appear, pre-filled with the prompt's current information.
- Most fields available during prompt creation can be modified.
- Buttons available within the edit form:
- Save Changes: Saves the modifications to the database.
- Cancel: Discards changes and closes the edit form.
Deleting a Prompt
To delete a prompt:
- Click the "Delete" button next to the desired prompt in the list.
- The system will attempt to remove the prompt record from the database.
- A success or error message will be displayed.
Assigning a Prompt to an Agent
If an agent is selected, an additional form appears:
- "Existing Prompts available for [Selected Agent Name] Agent:": A dropdown lists prompts from the current account that are not yet associated with the selected agent.
- Clicking "Include Prompt" will add the selected agent to the `prompt_agents` array of the chosen prompt, effectively linking them.
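Conceptually, "Include Prompt" performs an idempotent append to the prompt's agent list. A hypothetical sketch of that update (the real persistence logic lives in the platform's backend):

```python
def include_prompt(prompt: dict, agent_id: str) -> dict:
    """Hypothetical sketch: link an agent to a prompt by appending its ID
    to the prompt's prompt_agents array, skipping duplicates."""
    agents = prompt.setdefault("prompt_agents", [])
    if agent_id not in agents:  # avoid linking the same agent twice
        agents.append(agent_id)
    return prompt

prompt = {"name": "Classification", "prompt_agents": ["agent-1"]}
include_prompt(prompt, "agent-2")
print(prompt["prompt_agents"])
# ['agent-1', 'agent-2']
```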
Required Classification Prompt
Every user input turn begins with a required `classification` prompt that determines what type of Intent the user is expressing. This classification prompt can be somewhat generic, but (pro tip) if the LLM is having difficulty with a specific type of input and is mischaracterizing it, you can modify this classification prompt to call out and correct the misinterpretation.
The following is a core classification prompt example:
- Prompt Name: `Classification`
- Type: `classification`
- Provider: `google`
- Model: `gemini-2.0-flash`
- Response Variable: `intent_resp`
- Body:
Classify user input into the closest specific intent from the following complete list of intents: {var["intentNameList"]}. Remember that short user inputs are typically data field entries that should be classified using the data input intent list: {var["inputIntentNameList"]}. If user input is global_quote, this is a stock research type. If the user input is one of the following it is a state: AL, AK, AZ, AR, CA, CO, CT, DE, FL, GA, HI, ID, IL, IN, IA, KS, KY, LA, ME, MD, MA, MI, MN, MS, MO, MT, NE, NV, NH, NJ, NM, NY, NC, ND, OH, OK, OR, PA, RI, SC, SD, TN, TX, UT, VT, VA, WA, WV, WI, WY The user input is: {user_input}. Only return the name of the specific intent, nothing else.
Prompt Configuration Examples
Prompts are instructions given to a Large Language Model (LLM) to generate a response. They can be simple questions or complex instructions involving variables, tools, and specific output formats.
Variable Usage in Prompts
- General Variables: Use `{var["<variable-name>"]}` to reference any created variable.
  - Example: `Hello {var["first_name"]}, how can I help you?`
- User Input: Use `{user_input}` to directly insert the user's latest message.
- Special System Variables: The system provides several read-only variables with context about the conversation:
  - `{user_input}`: The user's direct input.
  - `{conversation_text}`: The conversation history. Used in `summary` and `agent assist` prompt types.
  - `{var["intentNameList"]}`: List of all intent names.
  - `{var["intent_resp"]}`: The current classified intent.
  - `{var["privacy_resp"]}`: The current privacy classification.
  - `{var["inputIntentNameList"]}`: List of intent names that require user input.
  - `{var["noInputIntentNameList"]}`: List of intent names that do not require user input.
  - `{var["intentPrivacyPairsList"]}`: List of (intent name, privacy level) pairs.
  - `{var["intentTypePairsList"]}`: List of (intent name, type) pairs.
  - `{var["intentActionPairsList"]}`: List of (intent name, action key) pairs.
  - `{var["intentContainsPairsList"]}`: List of (intent name, associated keywords) pairs.
Note: Global variables are not listed as they are immutable and not intended for dynamic use in agent prompts.
Prompt Examples
Example 1: Simple Final Prompt
A straightforward prompt to generate a conversational response using Anthropic's Claude Sonnet model.
- Prompt Name: `question final - final prompt`
- Type: `final prompt`
- Provider: `anthropic`
- Model: `claude-3-sonnet-20240229`
- Response Variable: `final_resp`
- Body:
Example 2: Prompt with RAG Store
This prompt uses a custom RAG (Retrieval-Augmented Generation) store to ground the model's response in specific technical information.
- Prompt Name: `tech support final - final prompt`
- Type: `final prompt`
- Provider: `google`
- Model: `gemini-1.5-flash-latest`
- Tools: `pinionai rag store`
- Response Variable: `final_resp`
- Body:
Answer the {var["intent_resp"]} in a polite but informal manner. Use the retrieval tool to focus on pinionai technical information. The question is: {user_input}
Example 3: Complex Analysis with Grounding
A detailed prompt for stock analysis that uses the Google Search grounding tool to ensure the response is based on up-to-date information.
- Prompt Name: `stock analysis - final prompt`
- Type: `final prompt`
- Provider: `google`
- Model: `gemini-2.0-flash`
- Tools: `google grounding`
- Response Variable: `stock_analysis_response`
- Body:
Determine if the stock might be a good investment based on the analysis. Analyze the investment potential of the stock with the ticker symbol {var["stock_symbol"]}. Structure the analysis to help evaluate if this stock could be a suitable investment, covering:
Business Fundamentals, Financial Performance, Valuation, Industry and Competitive Landscape, Growth Prospects and Future Outlook, Risks and Challenges, Qualitative Factors, Recent Stock Performance and Overall Investment Thesis (Potential).
Example 4: Function Calling
This prompt instructs the model to call a custom function (`get_stock_data`) to retrieve live data before generating a response. (See the Custom Functions section below.)
- Prompt Name: `stock price - final prompt`
- Type: `final prompt`
- Provider: `google`
- Model: `gemini-2.0-flash`
- Tools: `get stock data`
- Response Variable: `stock_analysis_response`
- Body:
Retrieve stock information, and format it in markdown to make it easy to read. User question: {user_input}
To get the data, call the get_stock_data function. The stock ticker to gather information on is: {var["stock_symbol"]} . The function to perform for the lookup is: {var["stock_lookup_function"]} and the API key to use is: {var["alphavantage_key"]}
Example 5: Prompt for an External MCP Server
This prompt's body is just a variable. It's designed to be used by an external MCP (Model Context Protocol) server, which handles the complex logic, potentially using this variable as input for its own internal prompt.
- Prompt Name: `domain search - final prompt`
- Type: `final prompt`
- Provider: `google`
- Model: (Not specified, as processing is external)
- Response Variable: `domain_response`
- Body:
Using Tools with Prompts
Tools give your AI Agent the ability to interact with the outside world. When you associate a Tool with a Prompt, you are telling the Large Language Model (LLM) that it has a new capability it can use to answer the user's request. The LLM is smart enough to decide when to use a tool and what inputs to provide to it based on the user's prompt and the descriptions you provide for the tool.
There are two main types of tools you can configure on the Tools page and then use in your prompts:
1. Pre-built Tools
These are ready-made tools provided by the AI model provider (e.g., Google). Examples include:
- Google Search: Allows the model to search Google to find up-to-date information.
- Code Execution: Allows the model to run code to perform calculations.
You simply select a pre-built tool on the Tools page and associate it with your prompt.
2. Custom Functions
This is a powerful feature that allows you to make your agent call your own Python code. This is essential for integrating with your own business logic, databases, or private APIs.
The process works like this:
- Write a Python Function: You add a new `async` Python function to the `pinionai_extensions.py` file in the project's root directory (`/pinionai_extensions.py`). This file is specially designed to hold your custom agent functions. The `pinionai` library automatically imports these functions into its global scope. To use your custom extensions, you will need to create a Tool and include it in a Prompt.
```python
# Stock Market Function in /pinionai_extensions.py
import logging

import httpx


async def get_stock_data(
    stock_lookup_function: str | None = None,
    stock_symbol: str | None = None,
    alphavantage_key: str | None = None,
) -> dict:
    """
    Fetches stock data from the Alpha Vantage API.
    """
    # Filter out None values from parameters
    params = {
        "function": stock_lookup_function,
        "symbol": stock_symbol,
        "apikey": alphavantage_key,
    }
    params = {k: v for k, v in params.items() if v is not None}
    try:
        async with httpx.AsyncClient() as client:
            base_url = "https://www.alphavantage.co/query"
            response = await client.get(base_url, params=params, headers={"User-Agent": "none"})
            logging.debug(f"Stock check Response URL: {response.url}")
            response.raise_for_status()
            # Convert the JSON payload to markdown.
            # format_stock_data_as_markdown is another helper defined in
            # pinionai_extensions.py (not shown here).
            stock_data = response.json()
            return await format_stock_data_as_markdown(stock_data)
    except httpx.HTTPStatusError as http_err:
        logging.error(f"HTTP error occurred: {http_err} - {http_err.response.text}")
        return {"error": f"HTTP error: {http_err.response.status_code}", "message": http_err.response.text}
    except Exception as e:
        logging.error(f"An unexpected error occurred: {e}")
        return {"error": "An unexpected error occurred.", "message": str(e)}
```
- Declare the Function to the LLM: On the Tools page, you create a new Tool and add a "Functional Declaration" for your Python function. This is a JSON object that describes your function to the LLM. The `name` must exactly match your Python function name.
- Associate with a Prompt: On the Prompts page, you edit a prompt and select your newly created Tool in the "Tool(s)" field.
Now, when this prompt is used, the LLM will know about the `get_stock_data` tool. If the user asks "What's the stock price for GOOGL?", the LLM will understand that it should call your `get_stock_data` function with `stock_symbol="GOOGL"`.
Passing Dynamic Data with `var`
You can make your tools even more powerful by using agent variables. In the `description` fields of your Functional Declaration, you can include placeholders like `{var["variable_name"]}`. These are replaced with the current values from the agent's session before being sent to the LLM. This gives the LLM context about what values it should use for the function's parameters.
Example:
Please note: with `{var['variable']}`, you can easily pass values to functions in `pinionai_extensions.py`.
```json
{
  "name": "get_stock_data",
  "description": "Fetches stock data...",
  "parameters": {
    "type": "object",
    "properties": {
      "stock_symbol": {
        "type": "string",
        "description": "The stock ticker symbol. The user mentioned {var[\"stock_symbol\"]}."
      },
      "alphavantage_key": {
        "type": "string",
        "description": "The API key for the service. Use the key stored in the {var[\"alphavantage_key\"]} variable."
      }
    },
    "required": ["stock_symbol", "alphavantage_key"]
  }
}
```