Shared Knowledge MCP Server lets AI assistants retrieve information from various vector stores, supporting RAG (Retrieval-Augmented Generation) workflows. It supports multiple vector store backends, including HNSWLib and Weaviate, with a flexible architecture that switches between them through environment variables. Built with TypeScript and LangChain, it exposes a unified retrieval interface regardless of the underlying storage technology. This makes it particularly useful for applications that need to augment AI responses with domain-specific knowledge without separate integration work for each vector database.
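The backend-switching idea can be illustrated with a small factory that picks a LangChain vector store based on an environment variable. This is a minimal sketch, not the server's actual code: the variable names VECTOR_STORE_TYPE and KNOWLEDGE_BASE_PATH, and the use of OpenAI embeddings, are assumptions for illustration.

```typescript
// Sketch: choose a vector store backend from an environment variable.
// VECTOR_STORE_TYPE and KNOWLEDGE_BASE_PATH are assumed names, not
// necessarily the ones the real server reads.
import { OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import type { VectorStore } from "@langchain/core/vectorstores";

async function createVectorStore(): Promise<VectorStore> {
  const embeddings = new OpenAIEmbeddings(); // assumes OPENAI_API_KEY is set
  const type = process.env.VECTOR_STORE_TYPE ?? "hnswlib";

  switch (type) {
    case "hnswlib":
      // Load a persisted HNSWLib index from disk.
      return HNSWLib.load(process.env.KNOWLEDGE_BASE_PATH ?? "./knowledge", embeddings);
    case "weaviate":
      // A Weaviate-backed store would be wired up here, e.g. via
      // @langchain/weaviate's WeaviateStore; omitted in this sketch.
      throw new Error("Weaviate wiring omitted in this sketch");
    default:
      throw new Error(`Unsupported vector store type: ${type}`);
  }
}
```

Callers then retrieve documents through the common VectorStore interface (e.g. similaritySearch), which is what makes the backends interchangeable.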
Search for information from the knowledge base. Parameters: query (string, required), limit (number, optional, default: 5), context (string, optional), filter (object, optional), include (object, optional).
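For reference, a client-side call to this tool with the MCP TypeScript SDK might look like the sketch below. The tool name "rag_search", the launch command, and the entry point path are assumptions; only the parameter names (query, limit, context, filter, include) come from the schema above.

```typescript
// Hedged sketch: invoking the search tool from an MCP client.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Assumed launch command for the server; adjust to the real build output.
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"],
  });
  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // query is required; limit, context, filter, and include are optional.
  const result = await client.callTool({
    name: "rag_search", // assumed tool name; check the server's tool list
    arguments: {
      query: "How do I configure the Weaviate backend?",
      limit: 5,
    },
  });
  console.log(result);
  await client.close();
}

main().catch(console.error);
```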