An MCP server that enables semantic search through Qdrant vector database integration. It allows AI assistants to retrieve semantically similar documents across multiple collections using natural language queries, with configurable result counts and collection source tracking. The server supports both stdio and HTTP transports, includes REST API endpoints with OpenAPI documentation, and uses embedding models such as Xenova/all-MiniLM-L6-v2 to generate vector representations for similarity matching. It is particularly useful for knowledge retrieval workflows where semantic understanding matters more than exact keyword matching.
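As a rough sketch of the retrieval flow described above, the snippet below embeds a query with Transformers.js and searches a single Qdrant collection. The Qdrant URL, collection name, and helper function are illustrative assumptions, not the server's actual code.

```typescript
// Illustrative only: embed a query with Transformers.js and search one Qdrant
// collection. The URL and helper name are placeholder assumptions.
import { pipeline } from "@xenova/transformers";
import { QdrantClient } from "@qdrant/js-client-rest";

async function searchCollection(queryText: string, collectionName: string, topK = 3) {
  // Load the embedding model mentioned in the server description.
  const embed = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

  // Mean-pool and normalize to get a single sentence-level vector.
  const output = await embed(queryText, { pooling: "mean", normalize: true });
  const vector = Array.from(output.data as Float32Array);

  // Query Qdrant for the topK nearest neighbours.
  const qdrant = new QdrantClient({ url: "http://localhost:6333" }); // assumed local instance
  const hits = await qdrant.search(collectionName, { vector, limit: topK });

  // Each hit carries the stored payload (e.g. the document text) and a similarity score.
  return hits.map((hit) => ({ score: hit.score, payload: hit.payload }));
}
```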
Retrieves semantically similar documents from multiple Qdrant vector store collections based on multiple queries.

Inputs:
- collectionNames (string[]): Names of the Qdrant collections to search across.
- topK (number): Number of top similar documents to retrieve (default: 3).
- query (string[]): Array of query texts to search for.

Returns:
- results: Array of retrieved documents, each with query, collectionName, text, and score.
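A hypothetical client-side call using the MCP TypeScript SDK is sketched below to show how the inputs and outputs line up; the tool name, server command, and collection names are assumptions, only the argument and result shape mirrors the description above.

```typescript
// Hypothetical client call: tool name "qdrant_retrieve" and the server entry
// point are assumptions; only the input/output shape follows the tool description.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"], // assumed server entry point
  });
  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Search two collections with two natural-language queries, 3 hits per query.
  const result = await client.callTool({
    name: "qdrant_retrieve", // assumed tool name
    arguments: {
      collectionNames: ["docs", "runbooks"],
      topK: 3,
      query: ["how do I rotate API keys?", "deployment rollback steps"],
    },
  });

  // results: [{ query, collectionName, text, score }, ...]
  console.log(result.content);
}

main();
```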