Documentation scraping server that enables AI assistants to extract structured content from web-based documentation through multiple crawling strategies. Built with Python and the crawl4ai library, it provides tools for single URL crawling, multi-URL batch processing, sitemap-based crawling, and menu-driven navigation extraction, with features like rate limiting, concurrent request handling, and robots.txt compliance. The implementation is particularly valuable for users who need to ingest documentation into AI systems while respecting site access policies and maintaining clean markdown output.
Extracts content from a single documentation page and outputs it as clean Markdown. Requires a target documentation URL.
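The actual HTML-to-Markdown conversion is handled by crawl4ai, but the core idea can be sketched with the standard library alone. The converter below is a toy illustration (headings and paragraphs only); `html_to_markdown` and `SimpleMarkdownConverter` are illustrative names, not the server's API.

```python
from html.parser import HTMLParser

class SimpleMarkdownConverter(HTMLParser):
    """Toy HTML -> Markdown converter: handles h1-h3 and p only.
    A real crawler delegates this to crawl4ai's markdown generation."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self.prefix = ""

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            # Map heading level to the matching number of '#' characters.
            self.prefix = "#" * int(tag[1]) + " "
        elif tag == "p":
            self.prefix = ""

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.parts.append(self.prefix + text)
            self.prefix = ""

def html_to_markdown(html: str) -> str:
    conv = SimpleMarkdownConverter()
    conv.feed(html)
    return "\n\n".join(conv.parts)

print(html_to_markdown("<h1>Guide</h1><p>Intro text.</p>"))
# → # Guide
#
#   Intro text.
```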
Processes multiple URLs in parallel, generating individual Markdown files per page. Takes a file containing URLs or JSON output from the menu crawler as input.
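The parallel-with-rate-limiting pattern behind batch crawling can be sketched with `asyncio` and a semaphore. This is a minimal sketch, not the server's implementation: `crawl_all`, `fetch`, and `fake_fetch` are assumed names, and a stub fetcher stands in for the real crawler so the demo needs no network access.

```python
import asyncio

async def crawl_all(urls, fetch, max_concurrent=3):
    """Crawl URLs concurrently, but never more than `max_concurrent`
    at once -- the basic rate-limiting pattern for batch crawling."""
    sem = asyncio.Semaphore(max_concurrent)

    async def one(url):
        async with sem:
            return url, await fetch(url)

    # gather preserves order; dict() pairs each URL with its markdown.
    return dict(await asyncio.gather(*(one(u) for u in urls)))

# Stub fetcher standing in for a real page-to-Markdown crawl.
async def fake_fetch(url):
    await asyncio.sleep(0)
    return f"# Page at {url}"

results = asyncio.run(
    crawl_all(["https://a.example/x", "https://a.example/y"], fake_fetch)
)
print(results)
```

In a real batch run, each entry in `results` would be written out as its own Markdown file, one per page.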
Automatically discovers and crawls sitemap.xml, creating Markdown files for each page. Supports optional parameters for maximum recursion depth and URL patterns.
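Sitemap crawling starts from the `<loc>` entries in `sitemap.xml`. The sketch below shows how depth and pattern filters might be applied to those entries; the function name `sitemap_urls` and both parameter names are illustrative, not the tool's actual signature.

```python
import re
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text, pattern=None, max_depth=None):
    """Extract <loc> URLs from a sitemap, optionally filtered by a
    regex pattern and by URL path depth."""
    urls = []
    for loc in ET.fromstring(xml_text).iter(SITEMAP_NS + "loc"):
        url = loc.text.strip()
        if pattern and not re.search(pattern, url):
            continue
        if max_depth is not None:
            # Count path segments after the scheme and host.
            depth = url.split("://", 1)[-1].rstrip("/").count("/")
            if depth > max_depth:
                continue
        urls.append(url)
    return urls

sample = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://docs.example.com/guide/intro</loc></url>
  <url><loc>https://docs.example.com/api/reference/deep/page</loc></url>
</urlset>"""
print(sitemap_urls(sample, pattern="guide"))
# → ['https://docs.example.com/guide/intro']
```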
Extracts all menu links from documentation, outputting them in a structured JSON format. Can handle nested and dynamic menus with configurable selectors.
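A simplified version of menu extraction can be shown with the standard library's `html.parser`: collect every link inside a `<nav>` element and emit structured JSON. The hard-coded `<nav>` scope stands in for the tool's configurable selectors, and `extract_menu` is an assumed name for illustration.

```python
import json
from html.parser import HTMLParser

class MenuLinkParser(HTMLParser):
    """Collect title/url pairs for links inside <nav> elements --
    a stand-in for the server's configurable menu selectors."""
    def __init__(self):
        super().__init__()
        self.in_nav = 0          # nesting counter for <nav> elements
        self.links = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "nav":
            self.in_nav += 1
        elif tag == "a" and self.in_nav:
            self._href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "nav":
            self.in_nav -= 1
        elif tag == "a":
            self._href = None

    def handle_data(self, data):
        if self._href and data.strip():
            self.links.append({"title": data.strip(), "url": self._href})
            self._href = None

def extract_menu(html: str) -> str:
    parser = MenuLinkParser()
    parser.feed(html)
    return json.dumps({"links": parser.links}, indent=2)

html = ('<nav><ul><li><a href="/docs/intro">Intro</a></li>'
        '<li><a href="/docs/api">API</a></li></ul></nav>')
print(extract_menu(html))
```

The resulting JSON is the kind of structured link list that can be fed directly into the batch crawler as input.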