An MCP server that enables semantic search through Qdrant vector database integration. It allows AI assistants to retrieve semantically similar documents across multiple collections using natural language queries, with configurable result counts and collection source tracking. The server supports both stdio and HTTP transports, exposes REST API endpoints with OpenAPI documentation, and uses embedding models such as Xenova/all-MiniLM-L6-v2 to generate vector representations for similarity matching. It is particularly useful for knowledge retrieval workflows where semantic understanding matters more than exact keyword matching.
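As a rough sketch of how such a retrieval path might be wired together, the snippet below embeds a query with @xenova/transformers and searches Qdrant via @qdrant/js-client-rest. The helper names, the `text` payload field, and the environment variable are illustrative assumptions, not taken from this server's source.

```typescript
// Minimal sketch: embed queries locally, then search Qdrant collections.
import { pipeline } from '@xenova/transformers';
import { QdrantClient } from '@qdrant/js-client-rest';

const qdrant = new QdrantClient({ url: process.env.QDRANT_URL ?? 'http://localhost:6333' });

// Load the embedding model once and reuse it for every query.
const embedder = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

async function embed(text: string): Promise<number[]> {
  // Mean-pool token embeddings and normalize so cosine similarity is meaningful.
  const output = await embedder(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data as Float32Array);
}

// Hypothetical helper mirroring the tool's described inputs and outputs.
async function searchCollections(queries: string[], collectionNames: string[], topK: number) {
  const results: { query: string; collectionName: string; text: unknown; score: number }[] = [];
  for (const query of queries) {
    const vector = await embed(query);
    for (const collectionName of collectionNames) {
      const hits = await qdrant.search(collectionName, { vector, limit: topK, with_payload: true });
      for (const hit of hits) {
        results.push({ query, collectionName, text: hit.payload?.text, score: hit.score });
      }
    }
  }
  return results;
}
```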
Retrieves semantically similar documents from multiple Qdrant vector store collections based on one or more queries. Inputs: collectionNames (string[]), topK (number), query (string[]). Returns: a results array containing the query, collectionName, text, and score for each match.
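A hedged sketch of invoking this tool over stdio with the TypeScript MCP SDK follows; the tool name, server launch command, and collection names are assumptions for illustration and are not documented here.

```typescript
// Hypothetical MCP client call matching the described input schema.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const transport = new StdioClientTransport({
  command: 'node',
  args: ['dist/index.js'], // however the server is started locally (assumed)
});

const client = new Client({ name: 'example-client', version: '1.0.0' });
await client.connect(transport);

const result = await client.callTool({
  name: 'qdrant_retrieve', // assumed tool name
  arguments: {
    collectionNames: ['docs', 'faq'],
    query: ['How do I rotate API keys?'],
    topK: 5,
  },
});

// Each result entry carries the originating query and collection alongside the matched text and score.
console.log(result.content);
```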