This MCP server, developed by Hannes Rudolph, enables AI assistants to augment their responses with relevant documentation context through vector-based search and retrieval. Built as a fork of qpd-v's original implementation, it uses OpenAI for embedding generation and Qdrant for vector storage. The server provides tools for adding documentation from URLs, performing semantic searches, extracting links, and managing a processing queue. By pairing AI capabilities with efficient vector search over documentation, it lets AI systems enhance their knowledge with domain-specific information in real time. It is particularly useful for building documentation-aware AI assistants, implementing semantic documentation search, and creating context-aware developer tools that need access to up-to-date technical information.
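A server like this is typically registered in an MCP client's configuration file. The sketch below shows one plausible setup; the package name, server key, and environment variable names (`OPENAI_API_KEY`, `QDRANT_URL`) are assumptions based on common MCP server conventions, not confirmed by this page.

```json
{
  "mcpServers": {
    "ragdocs": {
      "command": "npx",
      "args": ["-y", "@hannesrudolph/mcp-ragdocs"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "QDRANT_URL": "http://127.0.0.1:6333"
      }
    }
  }
}
```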
Search through stored documentation using natural language queries. Returns matching excerpts with context, ranked by relevance. Inputs: query (string), limit (number, optional)
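A search tool with these inputs would be invoked over MCP with a standard `tools/call` request. The JSON-RPC envelope below follows the MCP specification; the tool name `search_documentation` is an assumption inferred from this description.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_documentation",
    "arguments": {
      "query": "how do I configure the vector store connection?",
      "limit": 5
    }
  }
}
```

The optional `limit` caps how many ranked excerpts are returned; omitting it falls back to the server's default.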
List all documentation sources currently stored in the system, including each source's URL, title, and last update time.
Extract and analyze all URLs from a given web page. Inputs: url (string), add_to_queue (boolean, optional)
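Setting `add_to_queue` lets link extraction feed directly into the processing queue. A possible call, again using the MCP `tools/call` shape (the tool name `extract_urls` is assumed from this description):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "extract_urls",
    "arguments": {
      "url": "https://example.com/docs/getting-started",
      "add_to_queue": true
    }
  }
}
```

With `add_to_queue` set to `false` (or omitted), the extracted URLs are only returned for inspection rather than queued for indexing.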
Remove specific documentation sources from the system by their URLs. Inputs: urls (string[])
List all URLs currently waiting in the documentation processing queue. Shows pending documentation sources that will be processed when run_queue is called.
Process and index all URLs currently in the documentation queue. Each URL is processed sequentially, with error handling and retry logic for failed fetches.
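The queue-processing tool takes no inputs, so its invocation is just a bare `tools/call` with an empty arguments object (the tool name `run_queue` is assumed from this description):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "run_queue",
    "arguments": {}
  }
}
```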
Remove all pending URLs from the documentation processing queue. This operation is immediate and permanent.