A simple way to give LLMs persistent memory across conversations. This server lets Claude or VS Code remember information about you, your projects, and your preferences using a knowledge graph.
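As a rough sketch of the idea, a knowledge graph memory stores named entities, free-form observations about them, and typed relations between them. The class and method names below are illustrative assumptions, not the server's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A named node with a type and accumulated observations."""
    name: str
    entity_type: str
    observations: list = field(default_factory=list)

@dataclass
class KnowledgeGraph:
    """Hypothetical in-memory graph: entities plus (src, relation, dst) edges."""
    entities: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)

    def add_entity(self, name, entity_type):
        self.entities[name] = Entity(name, entity_type)

    def add_observation(self, name, fact):
        # Observations are small facts remembered across conversations.
        self.entities[name].observations.append(fact)

    def relate(self, src, relation, dst):
        self.relations.append((src, relation, dst))

graph = KnowledgeGraph()
graph.add_entity("alice", "person")
graph.add_entity("acme-app", "project")
graph.add_observation("alice", "prefers TypeScript")
graph.relate("alice", "works_on", "acme-app")
```

On a later conversation, the model can look up `alice` and recover both the stored observations and the projects she is related to.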
A tool for interacting with a large language model (LLM). Supports adding context to the query using Retrieval-Augmented Generation (RAG). Context is built against an internal knowledge base. In this case, ...
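A minimal sketch of the RAG flow described above, assuming a simple word-overlap retriever and a plain-text prompt format (both are illustrative choices, not the tool's actual implementation):

```python
def retrieve(query, knowledge_base, k=2):
    """Rank knowledge-base entries by word overlap with the query; keep top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, knowledge_base):
    """Prepend retrieved context so the LLM can ground its answer in it."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Toy internal knowledge base.
kb = [
    "The build pipeline uses GitHub Actions.",
    "Deployments go to the staging cluster first.",
    "The API is written in Go.",
]
prompt = build_prompt("How are deployments handled?", kb)
```

The resulting `prompt` is what would be sent to the LLM; a production retriever would typically use embeddings rather than word overlap.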