Noorle agents use a three-tier memory system that balances efficiency, context retention, and cost. This design prevents token waste and ensures agents remember important information across long conversations.
Tier 1: Working Memory

Storage: Redis (in-memory cache)
Lifetime: Hours to days
Scope: Last N messages (configurable, default 20)
Access: Every inference

What it contains:
Most recent user messages
Agent responses
Tool call results
Current context and state
Example:
```
Working Memory (last 5 messages)
├─ User: "Search for AI trends"
├─ Agent: "I'll search the web..."
├─ Tool Call: web_search(query="AI trends")
├─ Tool Result: ["Article 1: ...", "Article 2: ..."]
└─ Agent: "Based on the search, here are trends..."
```
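The sliding window above can be sketched in a few lines. This is a hypothetical in-process model, not the actual Noorle implementation: production working memory lives in Redis (for example as a capped list maintained with `LPUSH` + `LTRIM`), but a `deque` shows the same append-and-evict behavior.

```python
from collections import deque

class WorkingMemory:
    """Tier 1: sliding window of the last N messages.

    Illustrative sketch only. In production this would be a Redis
    list capped to `capacity`; a deque models that behavior locally.
    """

    def __init__(self, capacity: int = 20):
        self.capacity = capacity
        self._messages: deque = deque()

    def append(self, role: str, content: str) -> list:
        """Add a message; return any evicted messages that now need summarizing."""
        self._messages.append({"role": role, "content": content})
        evicted = []
        while len(self._messages) > self.capacity:
            evicted.append(self._messages.popleft())
        return evicted

    def context(self) -> list:
        """The messages passed to the model on every inference."""
        return list(self._messages)

# Recreate the 5-message example, then add one more message.
mem = WorkingMemory(capacity=5)
mem.append("user", "Search for AI trends")
mem.append("assistant", "I'll search the web...")
mem.append("tool_call", 'web_search(query="AI trends")')
mem.append("tool_result", '["Article 1: ...", "Article 2: ..."]')
mem.append("assistant", "Based on the search, here are trends...")
overflow = mem.append("user", "What about pricing?")  # oldest message falls out
```

The evicted messages are exactly what gets handed to the summarization step described below, so nothing is silently dropped.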
Tier 2: Summarized Memory

Storage: Object storage + Redis cache
Lifetime: Days to months
Scope: Summarized sessions and key insights
Access: As needed (cached for 1 hour)

What happens:
When working memory reaches capacity (20 messages), old messages are summarized:
Situation:
- Working memory is full (20 messages)
- A new message arrives
- Need to make room

Process:
1. Compress the oldest 10 messages into a summary: "User asked about AI trends. Agent researched and identified 3 key developments: 1) Multimodal models improving, 2) Cost decreasing, 3) Enterprise adoption accelerating."
2. Store the summary in object storage with metadata:

   ```
   {
     "id": "summary-123",
     "time_range": "2024-03-01 to 2024-03-05",
     "summary": "...",
     "key_facts": ["...", "..."],
     "embedding": [0.12, 0.34, ...]
   }
   ```

3. Cache the summary in Redis.
4. Discard the original messages from working memory.
5. Working memory now has space for the new message.
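The compaction step (compress, attach metadata, embed) can be sketched as a single function. The `summarize` and `embed` callables are stand-ins for whatever LLM and embedding calls a deployment uses; the record shape mirrors the metadata example above, but none of this is the actual Noorle API.

```python
import uuid

def compact(messages: list, summarize, embed) -> dict:
    """Compress a batch of old messages into a Tier 2 summary record.

    `summarize` and `embed` are hypothetical stand-ins for the model
    calls; the returned dict matches the stored-metadata shape.
    """
    text = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    return {
        "id": f"summary-{uuid.uuid4().hex[:8]}",
        "time_range": f"{messages[0].get('ts', '?')} to {messages[-1].get('ts', '?')}",
        "summary": summarize(text),
        "key_facts": [],           # optionally extracted by the summarizer
        "embedding": embed(text),  # vector used for later semantic retrieval
    }

# Toy stand-ins so the sketch runs without any model access.
def fake_summarize(text: str) -> str:
    return text[:60] + "..."

def fake_embed(text: str) -> list:
    return [0.12, 0.34]

oldest = [
    {"role": "user", "content": "Search for AI trends", "ts": "2024-03-01"},
    {"role": "assistant", "content": "Based on the search...", "ts": "2024-03-05"},
]
record = compact(oldest, fake_summarize, fake_embed)
```

In a real pipeline the record would then be written to object storage and cached in Redis, and the original messages dropped from working memory, as in steps 2 through 4.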
Example: a user conversation over 1 month (1000+ messages)

Day 1-5: Working memory captures 20 messages. Discussions cover AI trends, pricing, and implementation.

Day 6-10: Messages M1-M10 are summarized. Summaries are stored, the original messages removed, and new messages M21-M30 added to working memory.

Day 20: User asks: "What was the conclusion about pricing?" The agent searches summaries (not working memory) and finds: "User concerned about costs. Recommended tiered pricing model." The agent answers: "Based on our discussion, we concluded that tiered pricing balances cost and features."

Month 2: All summaries and archives remain available, but working memory holds only the recent chat. Token usage is minimal; cost is low.

Year 2: Audit request: "Show all conversations from 2024." The agent retrieves 365 days of history from the archive, a complete transcript with all messages, used for compliance verification.
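The Day 20 lookup, finding the pricing conclusion without touching working memory, relies on the embeddings stored with each summary. A minimal sketch, assuming plain cosine similarity over those vectors (the actual retrieval strategy is not specified here):

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search_summaries(query_embedding: list, summaries: list, top_k: int = 3) -> list:
    """Rank stored summary records by similarity to the query embedding."""
    ranked = sorted(
        summaries,
        key=lambda s: cosine(query_embedding, s["embedding"]),
        reverse=True,
    )
    return ranked[:top_k]

# Toy records with hand-made 2-d embeddings standing in for real vectors.
summaries = [
    {"summary": "User concerned about costs. Recommended tiered pricing model.",
     "embedding": [0.9, 0.1]},
    {"summary": "Agent researched AI trends: multimodal models, falling costs.",
     "embedding": [0.1, 0.9]},
]
# Pretend [0.8, 0.2] is the embedding of "What was the conclusion about pricing?"
best = search_summaries([0.8, 0.2], summaries, top_k=1)[0]
```

The top-ranked summary is what the agent quotes back on Day 20; only that short record, not the thousand-message history, enters the prompt, which is why token usage stays minimal.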