# AI Steps

QuickFlo includes two AI step types that bring large language models directly into your workflows. Use them to generate text, extract structured data, classify content, summarize documents, or run autonomous agents that call other workflows as tools.

Both steps support multiple AI providers — Anthropic (Claude), OpenAI (GPT), and Google (Gemini) — so you can choose the model that best fits your use case and swap between them without changing your workflow logic.

## LLM Call

The LLM Call step (`ai.llm-call`) sends a prompt to an AI model and returns the response. It supports two output modes: text generation for freeform responses, and structured output for extracting typed JSON that matches a schema you define.

*Screenshot: LLM Call step editor showing Anthropic provider, model selection, structured output mode with system prompt, user prompt, knowledge bases, and output schema.*

| Field | Description |
| --- | --- |
| Provider | Anthropic, OpenAI, or Google |
| Connection | An API key connection for the selected provider |
| Model | Select from the dropdown or type a specific model version ID |
| Enable Web Search | Allow the model to search the web for current information (Anthropic and Google only) |
| Field | Description |
| --- | --- |
| System Prompt | Sets the AI’s behavior and context — define a persona, rules, or formatting instructions (optional) |
| Prompt | The user message sent to the model — describe what you want generated or extracted. Supports template expressions. |

Select one or more knowledge bases to provide document context. When selected, QuickFlo searches the knowledge bases using the prompt text and injects the most relevant chunks into the model’s context — giving the AI access to your specific documents (RAG).
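To build intuition for what that retrieval step does, here is a deliberately simplified sketch. QuickFlo's actual knowledge base search is more sophisticated (the chunking and ranking mechanics are not documented here); this toy version just ranks stored chunks by word overlap with the prompt and keeps the best matches.

```python
import re

def top_chunks(prompt: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the prompt and return the top k.

    Toy stand-in for knowledge base search, for illustration only.
    """
    tokens = lambda text: set(re.findall(r"\w+", text.lower()))
    prompt_words = tokens(prompt)
    return sorted(chunks, key=lambda c: len(prompt_words & tokens(c)), reverse=True)[:k]

chunks = [
    "Refund policy: refunds are issued within 14 days of purchase.",
    "Shipping: orders ship within 2 business days.",
    "Warranty: hardware is covered for one year.",
]

# The selected chunks are what gets injected into the model's context.
context = top_chunks("What is the refund policy?", chunks, k=1)
# context[0] is the refund-policy chunk
```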

Add images or documents for the model to analyze. Each attachment can be a URL, base64-encoded content, or a data URI. Supported formats include PNG, JPEG, GIF, WebP, PDF, and more.
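If you need to pass file contents rather than a URL, the data URI form is a standard encoding (RFC 2397). A minimal sketch of producing one; the surrounding attachment field names are QuickFlo-specific and not shown here:

```python
import base64

def to_data_uri(raw: bytes, mime: str) -> str:
    """Encode raw bytes as a data URI: data:<mime>;base64,<payload>."""
    encoded = base64.b64encode(raw).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# First 8 bytes of a PNG file (the PNG signature), as a stand-in payload.
uri = to_data_uri(b"\x89PNG\r\n\x1a\n", "image/png")
# uri starts with "data:image/png;base64,"
```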

The default mode — the model generates a freeform text response. Use this for summarization, content generation, classification, question answering, or any task where you want natural language output.

Generate typed JSON matching a schema you define. The model is constrained to return data in exactly the structure you specify — no parsing or cleanup needed.

*Screenshot: Output Schema section showing field definitions, with a value format tooltip explaining type prefixes for string, number, boolean, and array types.*

Define your schema using name-value pairs in the Output Schema section. Each field has a name (the JSON key path) and a value that specifies the type and description:

| Value Format | Type | Example |
| --- | --- | --- |
| `description` | string (default) | `Business name` |
| `number: description` | number | `number: Phone number` |
| `boolean: description` | boolean | `boolean: Is verified` |
| `string[]: description` | array of strings | `string[]: List of tags` |
| `number[]: description` | array of numbers | `number[]: Monthly revenue figures` |
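The prefix convention is easy to parse mechanically. A sketch of how a value string could be split into a (type, description) pair; this mirrors the table above, though QuickFlo's internal parsing may differ:

```python
KNOWN_TYPES = {"number", "boolean", "string[]", "number[]"}

def parse_value(value: str) -> tuple[str, str]:
    """Split 'type: description' into (type, description).

    Values without a recognized type prefix default to string.
    """
    prefix, sep, rest = value.partition(":")
    if sep and prefix.strip() in KNOWN_TYPES:
        return prefix.strip(), rest.strip()
    return "string", value.strip()

parse_value("Business name")           # ("string", "Business name")
parse_value("number: Phone number")    # ("number", "Phone number")
parse_value("string[]: List of tags")  # ("string[]", "List of tags")
```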

Use dot notation and array notation in field names to define nested structures:

| Name | Value | Resulting JSON Path |
| --- | --- | --- |
| `leads[0].company_name` | `string: Business name` | `leads[].company_name` |
| `leads[0].phone_number` | `string: Phone number` | `leads[].phone_number` |
| `summary.total` | `number: Total count` | `summary.total` |
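To visualize what those paths describe, here is a hypothetical expansion of dot/array field names into a nested JSON skeleton (illustrative only; the `build_skeleton` helper is not part of QuickFlo):

```python
import re

def build_skeleton(paths: dict[str, str]) -> dict:
    """Expand dot/array-notation paths into a nested dict skeleton."""
    root: dict = {}
    for path, placeholder in paths.items():
        node = root
        parts = path.split(".")
        for i, part in enumerate(parts):
            m = re.match(r"(\w+)\[\d+\]$", part)  # array segment like leads[0]
            last = i == len(parts) - 1
            if m:
                key = m.group(1)
                node.setdefault(key, [{}])
                if last:
                    node[key] = [placeholder]
                else:
                    node = node[key][0]
            elif last:
                node[part] = placeholder
            else:
                node = node.setdefault(part, {})
    return root

schema = {
    "leads[0].company_name": "string",
    "leads[0].phone_number": "string",
    "summary.total": "number",
}
build_skeleton(schema)
# {'leads': [{'company_name': 'string', 'phone_number': 'string'}],
#  'summary': {'total': 'number'}}
```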
| Field | Default | Description |
| --- | --- | --- |
| Temperature | 0.7 (text) / 0.3 (structured) | Controls randomness — lower values produce more deterministic output |
| Max Tokens | 4,096 | Maximum tokens in the response (up to 32,768) |
| Timeout | 120s | Maximum wait time for the model response (up to 5 minutes) |

*Screenshot: Execution output panel showing a completed LLM Call with generated text, token usage breakdown, model, provider, and finish reason.*

Reference the LLM Call output in later steps:

```
{{ my-llm-call.text }}                    // generated text (text mode)
{{ my-llm-call.object }}                  // structured JSON (structured output mode)
{{ my-llm-call.object.leads }}            // access nested fields
{{ my-llm-call.usage.totalTokens }}       // total tokens consumed
{{ my-llm-call.usage.promptTokens }}      // input tokens
{{ my-llm-call.usage.completionTokens }}  // output tokens
{{ my-llm-call.model }}                   // model ID used
{{ my-llm-call.provider }}                // provider name
{{ my-llm-call.finishReason }}            // "stop", "length", etc.
```

Extract structured business data from unstructured text:

| Field | Value |
| --- | --- |
| Provider | Anthropic |
| Model | claude-haiku-4-5-20251001 |
| System Prompt | You are a data extraction assistant. Extract ALL businesses mentioned into the leads array. |
| Prompt | Extract all business leads from this text: {{ search-for-leads.result }} |
| Output Schema | `leads[0].company_name` → Business name, `leads[0].phone_number` → Phone number |

The step returns a clean JSON object with a leads array — ready to pipe into a for-each loop, data store, or API call.
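To make that concrete, here is the shape of the output from the example above (lead values invented for illustration) and the equivalent of piping `{{ my-llm-call.object.leads }}` into a for-each loop:

```python
# Illustrative structured output from the extraction example.
llm_output = {
    "object": {
        "leads": [
            {"company_name": "Acme Plumbing", "phone_number": "555-0101"},
            {"company_name": "Bolt Electric", "phone_number": "555-0199"},
        ]
    }
}

# A downstream for-each over {{ my-llm-call.object.leads }} sees each
# lead as a typed record — no parsing required.
names = [lead["company_name"] for lead in llm_output["object"]["leads"]]
# names == ["Acme Plumbing", "Bolt Electric"]
```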


## AI Agent

The AI Agent step (`ai.agent`) runs an autonomous agent that works toward a goal you define. Unlike a single LLM Call, an agent can reason in multiple iterations and execute other workflows as tools — making decisions about what to do next based on the results of previous actions.

*Screenshot: AI Agent step editor showing Anthropic provider, model selection, goal field, system prompt, tools section, knowledge bases, limits, and advanced settings.*

Same provider options as the LLM Call step — Anthropic, OpenAI, or Google with connection and model selection.

| Field | Description |
| --- | --- |
| Goal | What the agent should accomplish — be specific about the desired outcome. Supports template expressions. |
| System Prompt | Define the agent’s persona and behavior guidelines — rules, constraints, output format instructions (optional) |

The Goal is the primary instruction — it tells the agent what to achieve. The System Prompt sets how the agent should behave while working toward that goal.

Tools are the agent’s capabilities — other workflow templates that the agent can call to take actions in the real world. Each tool is a workflow that the agent invokes with specific parameters and receives the workflow’s return values as a result.

*Screenshot: Agent Tools configuration showing a workflow template selected as a tool, with name, description, and parameter definitions.*

| Field | Description |
| --- | --- |
| Tool Workflow | Select a workflow template — any workflow with the Template toggle enabled in Workflow Settings |
| Tool Name | The name the agent sees — use snake_case for best LLM compatibility (e.g., `send_sms`, `create_ticket`) |
| Description | Tell the agent what this tool does and when to use it — the agent reads this to decide which tool to call |
| Parameters | Define the input parameters the agent should pass. If not set, parameters are inferred from the workflow template’s Input Schema. |
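For intuition, here is what a tool definition might look like once translated into a provider's function-calling format. The field names follow the common JSON Schema convention used by major providers; QuickFlo's exact wire format is not documented here, so treat this as a sketch:

```python
# Hypothetical function-calling payload for a send_sms tool workflow.
send_sms_tool = {
    "name": "send_sms",
    "description": "Send an SMS to a phone number. Use when the goal "
                   "requires notifying a contact by text message.",
    "parameters": {
        "type": "object",
        "properties": {
            "to": {"type": "string", "description": "E.164 phone number"},
            "body": {"type": "string", "description": "Message text"},
        },
        "required": ["to", "body"],
    },
}
```

The description field does real work: it is the only signal the agent has for deciding when this tool applies.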

Attach knowledge bases for the agent to search during its reasoning. The agent can query them autonomously when it needs information to accomplish its goal.

| Field | Default | Description |
| --- | --- | --- |
| Max Iterations | 10 | Maximum number of LLM calls before the agent stops (1–50) |
| Max Tokens | 32,000 | Total token budget across all iterations (up to 100,000) |
| Timeout | 120s | Maximum total execution time for the entire agent run (up to 10 minutes) |
| Field | Default | Description |
| --- | --- | --- |
| Temperature | 0.7 | Controls randomness in responses and tool selection |
| Return Intermediate Steps | Off | When enabled, includes all tool calls and their results in the step output — useful for debugging and auditing |
Reference the AI Agent output in later steps:

```
{{ my-agent.result }}             // final response from the agent
{{ my-agent.status }}             // "completed", "max_iterations", "max_tokens", "timeout", or "error"
{{ my-agent.usage.totalTokens }}  // total tokens across all iterations
{{ my-agent.usage.iterations }}   // number of LLM calls made
{{ my-agent.usage.toolCalls }}    // number of tool calls executed
{{ my-agent.durationMs }}         // total execution time in milliseconds
```

When Return Intermediate Steps is enabled:

```
{{ my-agent.steps }}               // array of all tool calls
{{ my-agent.steps[0].toolName }}   // which tool was called
{{ my-agent.steps[0].arguments }}  // arguments the agent passed
{{ my-agent.steps[0].result }}     // what the tool returned
{{ my-agent.steps[0].durationMs }} // how long the tool call took
```
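The steps array is handy for auditing. A small sketch of summarizing it, using step records shaped like the fields above (the values are invented):

```python
from collections import Counter

# Example intermediate steps, mirroring the toolName/durationMs fields.
steps = [
    {"toolName": "lookup_crm_contact", "durationMs": 420},
    {"toolName": "update_crm_contact", "durationMs": 310},
    {"toolName": "lookup_crm_contact", "durationMs": 380},
]

# Total time spent inside tools, and call counts per tool.
total_ms = sum(s["durationMs"] for s in steps)
calls_per_tool = Counter(s["toolName"] for s in steps)
# total_ms == 1110; lookup_crm_contact called twice
```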
  1. The agent reads the goal, system prompt, and any knowledge base context
  2. It decides what to do — either respond directly or call a tool
  3. If it calls a tool, the corresponding workflow template executes with the parameters the agent chose
  4. The tool’s return values come back to the agent as a result
  5. The agent reads the result and decides its next action — call another tool, call the same tool with different parameters, or return a final response
  6. This loop continues until the agent achieves the goal, or hits a limit (max iterations, max tokens, or timeout)
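The loop above can be sketched in a few lines. Real providers drive this with their function-calling APIs; this minimal version, with a stubbed decision policy and a fake tool, only mirrors the control flow:

```python
def run_agent(goal, decide, tools, max_iterations=10):
    """Minimal agent loop: decide, maybe call a tool, feed back the result."""
    history = [("goal", goal)]
    for _ in range(max_iterations):
        action = decide(history)               # the LLM's decision (stubbed here)
        if action["type"] == "final":
            return {"result": action["text"], "status": "completed"}
        tool = tools[action["tool"]]           # look up the tool workflow
        result = tool(**action["args"])        # execute it with chosen parameters
        history.append(("tool_result", result))
    return {"result": None, "status": "max_iterations"}

# Stubbed decision policy: look up once, then answer with what was found.
def decide(history):
    if len(history) == 1:
        return {"type": "tool", "tool": "lookup", "args": {"name": "Acme"}}
    return {"type": "final", "text": f"Found: {history[-1][1]}"}

# A fake tool standing in for a workflow template.
tools = {"lookup": lambda name: {"name": name, "phone": "555-0101"}}

out = run_agent("Find Acme's phone number", decide, tools)
# out["status"] == "completed"
```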

An agent that takes a list of leads, cross-references each one against a CRM, and enriches the records with missing data from the web:

| Field | Value |
| --- | --- |
| Provider | Anthropic |
| Model | claude-sonnet-4-6 |
| Goal | For each lead in {{ initial.leads }}, look them up in the CRM. If the lead exists, check for missing fields (phone, company size, industry). Search the web to fill in any gaps and update the CRM record. |
| System Prompt | You are a data enrichment specialist. Always check the CRM first before searching externally. Only update fields that are empty or outdated. Return a summary of what you enriched. |
| Web Search | Enabled |

Tools:

| Tool Name | Workflow | Description |
| --- | --- | --- |
| `lookup_crm_contact` | crm-tools.lookup | Look up a contact in the CRM by email or name. Returns the existing record or null if not found. |
| `update_crm_contact` | crm-tools.update | Update a CRM contact record with new field values. |

The agent iterates through each lead, queries the CRM to see what data already exists, searches the web to fill in gaps, and writes the enriched data back — adapting its approach based on what it finds for each lead.


## Choosing a Step

| Use Case | Step |
| --- | --- |
| Generate text from a prompt | LLM Call (text mode) |
| Extract structured data from text | LLM Call (structured output) |
| Classify, summarize, or transform content | LLM Call |
| Answer questions using your documents (RAG) | LLM Call with knowledge bases |
| Multi-step reasoning with tool use | AI Agent |
| Autonomous research and data collection | AI Agent with web search |
| Orchestrate multiple workflows based on AI decisions | AI Agent with workflow tools |
| Tasks where the AI needs to adapt its approach | AI Agent |