
Knowledge Bases

Knowledge bases let you build searchable collections of documents that your AI workflow steps can reference. Upload files or add URLs, and QuickFlo automatically extracts the text, splits it into chunks, and creates vector embeddings — making the content available for semantic search during AI step execution.

How It Works

  1. Create a knowledge base and add documents (files or URLs)
  2. Processing happens automatically — documents are parsed, chunked, and embedded
  3. Search happens at execution time — when an AI step runs, the user’s prompt is used to find the most relevant chunks from the selected knowledge bases
  4. Context injection — matched chunks are added to the AI model’s context, giving it access to your specific information

This pattern is called Retrieval Augmented Generation (RAG) — the AI generates responses grounded in your actual documents rather than relying solely on its training data.
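The retrieval loop above can be pictured with a short sketch. This is illustrative only — QuickFlo performs these steps internally with real vector embeddings; the bag-of-words "embedding" below is just a stand-in so the example runs without an embedding model.

```python
# Toy sketch of the RAG retrieval loop (illustrative only; QuickFlo runs
# this internally with real embeddings, not the bag-of-words stand-in here).
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": a term-frequency vector. A real system would
    # call an embedding model such as text-embedding-3-small instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Pretend these chunks came out of the processing pipeline.
chunks = [
    "Refunds are issued within 14 days of purchase.",
    "The API rate limit is 100 requests per minute.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query: str, top_k: int = 5) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]

# Context injection: the best matches are prepended to the model's prompt.
question = "What is the API rate limit?"
context = "\n".join(retrieve(question, top_k=1))
prompt = f"Context:\n{context}\n\nQuestion: {question}"
```

The key idea is that the same vector space holds both document chunks and the query, so "relevance" reduces to a nearest-neighbor search.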

Creating a Knowledge Base

  1. Navigate to Knowledge Bases in the QuickFlo sidebar
  2. Click New Knowledge Base
  3. Enter a name and optional description
| Field | Description |
| --- | --- |
| Name | A unique name within your organization (max 255 characters) |
| Description | Optional context about what this KB contains (max 2,000 characters) |

Uploading Files

Upload documents directly from your computer. Supported file types:

| Format | Description |
| --- | --- |
| PDF | Scanned and text-based PDFs |
| DOCX | Microsoft Word documents |
| PPTX | PowerPoint presentations |
| HTML | Web pages saved as HTML |
| TXT | Plain text files |
| MD | Markdown files |
| CSV | Comma-separated value files |

Adding URLs

Add documents by URL — QuickFlo fetches the content automatically:

  1. In your knowledge base, click Add from URL
  2. Enter one or more URLs
  3. QuickFlo fetches each URL, extracts the text content, and processes it
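The fetch-and-extract step can be pictured roughly as follows. This is a sketch, not QuickFlo's actual parser: it uses only Python's standard library, and the real fetch (via `urlopen`) is left as a comment so the example runs offline.

```python
# Rough sketch of URL text extraction (illustrative; QuickFlo's actual
# extraction pipeline is internal). Standard library only.
from html.parser import HTMLParser
# from urllib.request import urlopen   # real fetch: urlopen(url).read().decode()

class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}  # elements whose contents are never page text

    def __init__(self) -> None:
        super().__init__()
        self.parts: list[str] = []
        self._skip = 0  # depth of nested script/style elements

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def extract_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)
```

Once extracted, the plain text enters the same parse/chunk/embed pipeline as uploaded files.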

Document Processing

After adding a document, it goes through an automatic processing pipeline:

| Status | Meaning |
| --- | --- |
| Pending | Queued for processing |
| Processing | Being parsed, chunked, and embedded |
| Ready | Successfully processed and searchable |
| Failed | Processing encountered an error (see error message for details) |

Processing involves three stages:

  1. Parse — Extract text from the document based on its format
  2. Chunk — Split the text into semantic segments optimized for search
  3. Embed — Generate vector embeddings for each chunk using OpenAI’s text-embedding-3-small model
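The chunking stage can be sketched as a sliding window over the parsed text. This is a simplification — QuickFlo's real chunker splits on semantic boundaries rather than fixed word counts — but it shows why overlap matters: it keeps sentences that straddle a boundary searchable from both neighboring chunks.

```python
# Simplified chunking pass (illustrative; the real chunker is semantic-aware).
def chunk_words(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping windows of `size` words."""
    words = text.split()
    step = size - overlap  # advance less than `size` so windows overlap
    chunks: list[str] = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break  # this window already reached the end of the text
    return chunks
```

Each resulting chunk would then be sent to the embedding model to produce its vector.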

Using Knowledge Bases in Workflows

Knowledge bases are used through the LLM Call step. When configuring an LLM call, you can select one or more knowledge bases to search.

Configuring an LLM Call with Knowledge Bases

| Field | Description |
| --- | --- |
| Knowledge Bases | Select one or more knowledge bases from the dropdown |
| Prompt | Your prompt to the AI model — this is also used as the search query |
| Model | The AI model to use for generation |

When the LLM Call step runs with knowledge bases selected:

  1. The prompt text is used as a search query
  2. QuickFlo finds the most relevant chunks across all selected knowledge bases using vector similarity search (cosine distance)
  3. The top matching chunks (up to 5 by default) are prepended to the system prompt as context
  4. The AI model generates its response with access to the retrieved context

Only chunks with a relevance score above the minimum threshold are included, ensuring the AI receives high-quality context.
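The selection step can be sketched directly from the documented defaults (up to 5 chunks, minimum relevance 0.3). The function below is illustrative, not QuickFlo's implementation: it takes already-scored chunks and returns the context string that would be prepended to the system prompt.

```python
# Sketch of chunk selection using the documented defaults:
# keep at most five chunks, drop anything scoring below 0.3.
def select_context(scored: list[tuple[str, float]],
                   top_k: int = 5, min_score: float = 0.3) -> str:
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    kept = [chunk for chunk, score in ranked if score >= min_score][:top_k]
    return "\n\n".join(kept)

# A chunk scoring 0.12 would be filtered out even if fewer than
# five chunks matched, so weak matches never dilute the context.
```

Thresholding before truncation is what guarantees the model never sees a chunk the search considered irrelevant, even when few chunks match at all.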

Example

Suppose you have a knowledge base called "Product Docs" containing your product documentation, and an LLM Call step configured with:

| Field | Value |
| --- | --- |
| Knowledge Bases | Product Docs |
| Prompt | {{ initial.question }} |

The step will:

  1. Search “Product Docs” for chunks relevant to the user’s question
  2. Include the most relevant documentation excerpts as context
  3. Generate an answer grounded in your actual product documentation

The LLM Call step output contains the AI’s response:

{{ answer-question.text }} // the generated response
{{ answer-question.usage.total }} // total tokens used
Updating Documents

To update a document's content:

  • File documents: Delete the old document and upload the updated version
  • URL documents: Click Resync to re-fetch and re-process the URL content

Deleting Documents and Knowledge Bases

Deleting a document removes it and all its chunks from search results. The change takes effect immediately — subsequent AI step executions will no longer find content from deleted documents.

Deleting a knowledge base removes all its documents and chunks permanently. Any workflows referencing the deleted KB will no longer have that knowledge base context available.

Limits

| Limit | Value |
| --- | --- |
| Knowledge base name | 255 characters |
| Description | 2,000 characters |
| Document name | 255 characters (unique within KB) |
| Embedding model | text-embedding-3-small (1,536 dimensions) |
| Search results per query | Up to 5 chunks |
| Minimum relevance score | 0.3 (cosine similarity) |