MCP-RAGDocs is a server implementation that provides semantic documentation search and retrieval, using a vector database to augment LLM capabilities. Developed by hannesrudolph and forked by jumasheff, it enables AI assistants to search stored documentation, extract URLs from web pages, manage documentation sources, and process queues of URLs for indexing. The server uses Qdrant for vector storage and supports multiple embedding providers, including Ollama and OpenAI, making it particularly valuable for enriching AI responses with relevant documentation context without requiring users to switch between interfaces.
Search through stored documentation using natural language queries. Returns matching excerpts with context, ranked by relevance. Inputs: query (string), limit (number, optional)
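As a sketch of how a client might invoke this tool over MCP's JSON-RPC transport: the payload below follows the standard `tools/call` method shape, but the tool name `search_documentation` is an assumption inferred from the description above, not confirmed by this page.

```typescript
// Hypothetical sketch: building the JSON-RPC "tools/call" request a client
// would send to the documentation search tool. The tool name
// "search_documentation" is an assumption based on the description above.

interface SearchArgs {
  query: string;   // natural-language query
  limit?: number;  // optional cap on the number of excerpts returned
}

function buildSearchRequest(id: number, args: SearchArgs) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name: "search_documentation", arguments: args },
  };
}

// Example: ask for the top three matching excerpts.
const request = buildSearchRequest(1, {
  query: "how do I configure the vector store?",
  limit: 3,
});
console.log(JSON.stringify(request, null, 2));
```

The server would answer with matching excerpts ranked by vector similarity; omitting `limit` leaves the result count to the server's default.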
List all documentation sources currently stored in the system. Returns a comprehensive list of indexed documentation, including each source's URL, title, and last update time.
Extract and analyze all URLs from a given web page. This tool crawls the specified webpage, identifies all hyperlinks, and optionally adds them to the processing queue. Inputs: url (string), add_to_queue (boolean, optional)
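A corresponding request sketch for this crawler tool, using the two inputs listed above; the tool name `extract_urls` is an assumption inferred from the description, while the argument names `url` and `add_to_queue` come from the inputs list itself.

```typescript
// Hypothetical sketch: "tools/call" payload for the URL-extraction tool.
// The tool name "extract_urls" is an assumption; the argument names
// match the inputs listed above (url, add_to_queue).

interface ExtractUrlsArgs {
  url: string;            // page to crawl for hyperlinks
  add_to_queue?: boolean; // when true, queue discovered URLs for indexing
}

function buildExtractUrlsRequest(id: number, args: ExtractUrlsArgs) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name: "extract_urls", arguments: args },
  };
}

// Example: crawl a docs landing page and queue every link it contains.
const request = buildExtractUrlsRequest(2, {
  url: "https://example.com/docs",
  add_to_queue: true,
});
console.log(JSON.stringify(request));
```

Setting `add_to_queue: true` turns a one-off crawl into the first step of an indexing run: the discovered links wait in the queue until processing is triggered.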
Remove specific documentation sources from the system by their URLs. The removal is permanent and will affect future search results. Inputs: urls (string[])
List all URLs currently waiting in the documentation processing queue. Shows pending documentation sources that will be processed when run_queue is called.
Process and index all URLs currently in the documentation queue. Each URL is processed sequentially, with proper error handling and retry logic.
Remove all pending URLs from the documentation processing queue. Use this to reset the queue when you want to start fresh, remove unwanted URLs, or cancel pending processing.
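Since the three queue-management tools above take no inputs, their call payloads differ only by tool name. A minimal sketch of that lifecycle, assuming the names `list_queue` and `clear_queue` (only `run_queue` is named explicitly in the text above):

```typescript
// Hypothetical sketch: the queue tools take no arguments, so their
// "tools/call" payloads differ only by name. "run_queue" is named in the
// text above; "list_queue" and "clear_queue" are assumptions.

type QueueTool = "list_queue" | "run_queue" | "clear_queue";

function buildQueueRequest(id: number, tool: QueueTool) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name: tool, arguments: {} },
  };
}

// Typical lifecycle: inspect the pending URLs, process them sequentially,
// or discard the queue and start fresh.
const inspect = buildQueueRequest(3, "list_queue");
const run = buildQueueRequest(4, "run_queue");
const reset = buildQueueRequest(5, "clear_queue");
console.log([inspect, run, reset].map((r) => r.params.name).join(", "));
```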