Problem Solved
Without Knowledge Base
With Knowledge Base
How Knowledge Bases Work
Creating a Knowledge Base
Step 1: Create Knowledge Base
Supported document formats:
- Word (DOCX)
- Plain text
- Markdown
- Web pages (URL)
Semantic Search
When an agent needs information, it searches the knowledge base by meaning rather than by exact keywords.

Why semantic search beats keyword search:

| Query | Keyword Match | Semantic Match |
|---|---|---|
| “Can I get my money back?” | ✗ (no “refund”) | ✓ Finds refund policy |
| “returning a product” | ✗ (no “return”) | ✓ Finds return policy |
| “how long to send back” | ~ (partial) | ✓ Finds time window |
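The idea can be sketched with toy vectors standing in for real embeddings. The `cosine` helper, the documents, and the hand-picked vectors below are illustrative only, not the platform's API:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def keyword_overlap(query, doc):
    # Keyword search only matches when query and document share a word.
    return bool(set(query.lower().split()) & set(doc.lower().split()))

# Toy 3-dimensional vectors standing in for real embedding-model output.
docs = {
    "Refunds are issued within 30 days of purchase.": [0.9, 0.1, 0.2],
    "Shipping times vary by region.": [0.1, 0.9, 0.3],
}

query = "Can I get my money back?"
query_vec = [0.85, 0.15, 0.25]  # in practice: the embedding of the query

# No word overlap with the refund document, yet it is closest by meaning.
best = max(docs, key=lambda d: cosine(query_vec, docs[d]))
```

Even though the query shares no word with the refund document, its vector sits nearby, so the semantic search finds it while keyword matching fails.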
LLM Reranking
For better accuracy, search results can be reranked by an LLM:

- Catches relevant documents that vector search misses
- Better for complex queries
- Trade-off: slower (LLM inference)
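A minimal sketch of the rerank step, with a hypothetical `llm_score` stub standing in for the actual LLM call:

```python
def rerank(query, candidates, llm_score, top_k=3):
    # Re-order vector-search candidates by an LLM-assigned relevance score.
    # llm_score(query, doc) -> float is an LLM call in practice, which is
    # why reranking is more accurate but slower than vector search alone.
    return sorted(candidates, key=lambda d: llm_score(query, d), reverse=True)[:top_k]

# Stub scorer for illustration; a real scorer would prompt an LLM.
stub_scores = {"doc_a": 0.2, "doc_b": 0.9, "doc_c": 0.6}
top = rerank("refund policy", ["doc_a", "doc_b", "doc_c"],
             lambda q, d: stub_scores[d], top_k=2)
```

The trade-off is visible in the shape of the code: one scorer invocation per candidate, each of which is an LLM inference in production.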
RAG in Action
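A minimal retrieve-then-generate loop might look like the following sketch. The `toy_embed` vocabulary and the `run_rag` helper are illustrative assumptions, not platform APIs:

```python
import math

# Toy stand-ins; real versions would call an embedding model and an LLM.
VOCAB = ["refund", "window", "shipping", "day"]

def toy_embed(text):
    words = text.lower().replace(".", " ").replace(":", " ").split()
    return [float(w in words) for w in VOCAB]

def run_rag(query, documents, embed, generate, top_k=1):
    """Retrieve-then-generate: embed the query, rank stored chunks by
    cosine similarity, inject the best matches into the prompt."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        denom = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / denom if denom else 0.0
    qv = embed(query)
    ranked = sorted(documents, key=lambda d: cos(qv, embed(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return generate(f"Context:\n{context}\n\nQuestion: {query}")

answer = run_rag("refund window", ["Refunds: 30 day window.", "Shipping info."],
                 toy_embed, lambda prompt: prompt)
```

Here the "LLM" just echoes its prompt, which makes it easy to see that only the most relevant chunk was injected as context.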
Full RAG workflow: documents are chunked and embedded at ingestion; at query time the agent's query is embedded, the closest chunks are retrieved, and they are injected into the prompt before the LLM responds.

Configuring Knowledge Bases
Each knowledge base has its own settings, covering chunking, embedding, and reranking behavior.

Knowledge Base Management
Adding More Documents
Updating Documents
Removing Documents
Exporting Data
Embeddings
Noorle uses OpenAI embeddings by default. Alternatives include:

- Bring your own embedding model
- Use open-source embeddings (Hugging Face)
- Custom training on domain-specific data
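Swapping embedding providers can be modeled as a small interface. This sketch assumes a hypothetical `Embedder` protocol; the `HashEmbedder` toy is only a stand-in for a real model:

```python
from typing import List, Protocol

class Embedder(Protocol):
    def embed(self, texts: List[str]) -> List[List[float]]: ...

class HashEmbedder:
    """Toy bring-your-own embedder; a real one would wrap an OpenAI or
    Hugging Face model behind the same embed() interface."""
    def __init__(self, dim: int = 8):
        self.dim = dim

    def embed(self, texts):
        vectors = []
        for text in texts:
            v = [0.0] * self.dim
            for token in text.lower().split():
                v[hash(token) % self.dim] += 1.0  # hashed bag-of-words
            vectors.append(v)
        return vectors
```

Anything that satisfies `embed()` can be plugged in, which is what makes open-source or domain-tuned models drop-in replacements.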
Multi-KB Search
Search across multiple knowledge bases in a single query.

Use cases:

- Multi-product documentation
- Organizational wikis (separated by dept)
- Versioned documentation
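One way to picture multi-KB search is fan-out plus score-based merging. The `search_multi` helper and the KB names below are hypothetical:

```python
def search_multi(kbs, query, top_k=3):
    """Fan the query out to several knowledge bases and merge hits by score.
    `kbs` maps a KB name to a search function returning (doc, score) pairs."""
    hits = []
    for name, search in kbs.items():
        hits.extend((name, doc, score) for doc, score in search(query))
    return sorted(hits, key=lambda h: h[2], reverse=True)[:top_k]

# Stub search functions standing in for two product-docs knowledge bases.
kbs = {
    "product_a_docs": lambda q: [("A: install guide", 0.8)],
    "product_b_docs": lambda q: [("B: install guide", 0.9), ("B: FAQ", 0.4)],
}
merged = search_multi(kbs, "how do I install?", top_k=2)
```

Keeping the KB name alongside each hit is what lets callers attribute results per product, department, or version.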
Two Ways Agents Use Knowledge
There are two distinct paths for giving agents access to knowledge.

Path 1: Automatic Context Injection (RAG)
Attach a knowledge base directly to an agent. When the agent runs, relevant documents are automatically retrieved and injected into the prompt context before the LLM generates a response.
When to use: The agent always needs access to specific documentation. Every conversation benefits from having this knowledge available. Good for support agents, FAQ bots, and domain-specific assistants.
How to set up: Configure `knowledge_base_ids` on the agent. The platform handles retrieval automatically at the start of each execution.
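A hedged sketch of what that configuration might look like (`knowledge_base_ids` is the field named above; every other key and ID value is an illustrative assumption):

```python
# Hypothetical agent configuration; only knowledge_base_ids comes from the docs.
agent_config = {
    "name": "support-agent",
    "knowledge_base_ids": ["kb_support_docs", "kb_faq"],  # auto-injected each run
}
```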
Path 2: Knowledge as a Tool
Attach the Knowledge Retrieval built-in capability to the agent. The agent gets `search`, `get_by_id`, and `list` tools that it can call on demand during a conversation.
When to use: The agent only sometimes needs knowledge lookups. The agent should decide when and what to search. Good for general-purpose agents that handle a mix of tasks.
How to set up: Attach the Knowledge Retrieval capability in the agent’s settings. The agent decides when to invoke searches based on the conversation.
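To illustrate, here is a toy version of the three tools plus a dispatch step. The tool names come from the text; the signatures, storage, and call format are assumptions:

```python
# In-memory stand-in for a knowledge base.
KB = {
    "doc_refunds": "Refunds are issued within 30 days.",
    "doc_shipping": "Shipping takes 5-7 business days.",
}

def kb_list():
    return sorted(KB)

def kb_get_by_id(doc_id):
    return KB[doc_id]

def kb_search(query):
    q = query.lower()
    return [doc_id for doc_id, text in KB.items() if q in text.lower()]

TOOLS = {"list": kb_list, "get_by_id": kb_get_by_id, "search": kb_search}

def dispatch(call):
    # The agent emits a call like {"name": "search", "args": {"query": ...}};
    # the runtime routes it to the matching tool handler.
    return TOOLS[call["name"]](**call.get("args", {}))
```

The key difference from Path 1 is visible here: nothing runs until the agent decides to emit a tool call.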
Combining Both Paths
You can use both paths together. The agent gets automatic context from attached knowledge bases and can also explicitly search additional knowledge bases via the tool. This is useful when there’s “always needed” knowledge (Path 1) plus “sometimes needed” reference material (Path 2).

Best Practices
Chunk Intelligently
Chunk on semantic boundaries (paragraphs, sections). Overlapping adjacent chunks prevents loss of context at chunk boundaries.
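For example, a simple paragraph-packing chunker with word overlap might look like this sketch (the helper name and default sizes are illustrative):

```python
def chunk_paragraphs(text, max_words=200, overlap_words=40):
    """Pack whole paragraphs into chunks of roughly max_words, carrying the
    tail of each chunk into the next as overlap so boundary context survives."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], []
    for para in paragraphs:
        current.append(para)
        if sum(len(p.split()) for p in current) >= max_words:
            chunks.append("\n\n".join(current))
            tail = " ".join(" ".join(current).split()[-overlap_words:])
            current = [tail]  # overlap carried into the next chunk
    if current and (not chunks or len(current) > 1):
        chunks.append("\n\n".join(current))
    return chunks
```

Splitting on blank lines keeps paragraphs intact, and the carried tail means a sentence near a boundary is searchable from both neighboring chunks.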
Metadata Matters
Tag chunks with source, date, category. Helps filter and debug results.
Monitor Quality
Review search results regularly. Tune chunking and reranking based on quality.
Update Documents
Keep docs current. Old or incorrect docs harm RAG quality.
Common Patterns
Pattern: Customer Support Agent
Pattern: Internal Knowledge Hub
Pattern: Multi-Source Documentation
Troubleshooting
| Problem | Solution |
|---|---|
| Poor search results | Check chunks aren’t too large/small. Adjust to 512-1024 words. |
| Irrelevant matches | Enable LLM reranking to filter false positives. |
| Missing documents | Re-upload and re-embed. Check for content duplicates. |
| Slow searches | Lower the max_results limit. Add indexes. |
| High costs | Disable reranking for non-critical searches. Batch embeddings. |
Next: Discover Omni Tool for intelligent tool discovery.