This MCP server for web content scanning and analysis, developed in TypeScript, provides tools for fetching, extracting, and transforming web page content. It uses Cheerio for HTML parsing and Turndown for HTML-to-Markdown conversion, and is designed to integrate with AI-assisted workflows for tasks such as web scraping, content summarization, and data extraction. It is particularly useful for researchers, content creators, and developers who need to automate web content analysis, generate structured data from websites, or incorporate web-based information into AI applications.
Fetches a web page and converts it to Markdown. Parameters: url (required): URL of the page to fetch; selector (optional): CSS selector to target specific content.
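A minimal sketch of the fetch-and-convert flow this tool describes. The server itself uses Cheerio and Turndown; the `htmlToMarkdown` function below is a simplified, regex-based stand-in that handles only a few common tags, for illustration.

```typescript
// Simplified HTML-to-Markdown conversion (illustrative stand-in for Turndown).
function htmlToMarkdown(html: string): string {
  return html
    .replace(/<h1[^>]*>(.*?)<\/h1>/gis, "# $1\n\n")
    .replace(/<h2[^>]*>(.*?)<\/h2>/gis, "## $1\n\n")
    .replace(/<(strong|b)[^>]*>(.*?)<\/\1>/gis, "**$2**")
    .replace(/<(em|i)[^>]*>(.*?)<\/\1>/gis, "*$2*")
    .replace(/<a[^>]*href="([^"]*)"[^>]*>(.*?)<\/a>/gis, "[$2]($1)")
    .replace(/<p[^>]*>(.*?)<\/p>/gis, "$1\n\n")
    .replace(/<[^>]+>/g, "")        // drop any remaining tags
    .replace(/\n{3,}/g, "\n\n")
    .trim();
}

// Hypothetical usage (requires network access):
// const html = await (await fetch("https://example.com")).text();
// console.log(htmlToMarkdown(html));
```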
Extracts all links from a web page with their text. Parameters: url (required): URL of the page to analyze; baseUrl (optional): Base URL to filter links; limit (optional, default: 100): Maximum number of links to return.
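The extraction logic can be sketched as follows. The server parses with Cheerio; this version uses a regex over anchor tags for self-containment. `baseUrl` filters to links under that prefix and `limit` caps the result count, matching the parameters above.

```typescript
interface Link { href: string; text: string }

// Illustrative link extractor: resolves relative hrefs against the page URL,
// optionally filters by a base-URL prefix, and caps the number of results.
function extractLinks(html: string, pageUrl: string, baseUrl?: string, limit = 100): Link[] {
  const links: Link[] = [];
  const re = /<a[^>]*href="([^"]*)"[^>]*>(.*?)<\/a>/gis;
  let m: RegExpExecArray | null;
  while ((m = re.exec(html)) !== null && links.length < limit) {
    const href = new URL(m[1], pageUrl).href;     // resolve relative URLs
    if (baseUrl && !href.startsWith(baseUrl)) continue;
    links.push({ href, text: m[2].replace(/<[^>]+>/g, "").trim() });
  }
  return links;
}
```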
Recursively crawls a website up to a specified depth. Parameters: url (required): Starting URL to crawl; maxDepth (optional, default: 2): Maximum crawl depth (0-5).
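A depth-limited crawl of this kind is essentially a breadth-first traversal. In this sketch the fetch step is injected as `getLinks` (standing in for "fetch the page and extract its links") so the traversal logic can be seen, and tested, without network access; this is an illustration, not the server's actual implementation.

```typescript
// Breadth-first crawl bounded by maxDepth; `getLinks` abstracts the fetch step.
async function crawl(
  start: string,
  maxDepth: number,
  getLinks: (url: string) => Promise<string[]>,
): Promise<string[]> {
  const seen = new Set<string>([start]);
  let frontier = [start];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const url of frontier) {
      for (const link of await getLinks(url)) {
        if (!seen.has(link)) { seen.add(link); next.push(link); }
      }
    }
    frontier = next;      // only newly discovered URLs advance to the next depth
  }
  return [...seen];
}
```

Tracking a `seen` set is what keeps the crawl from looping on pages that link back to each other.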
Checks for broken links on a page. Parameters: url (required): URL to check links for.
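Broken-link checking reduces to requesting each URL and flagging failures. In this illustrative sketch the HTTP call is injected (`getStatus`) so the classification logic is testable offline; in a real run you might pass something like `(u) => fetch(u, { method: "HEAD" }).then(r => r.status)`.

```typescript
// Flags URLs whose status is 4xx/5xx, or whose request throws (network error).
async function findBrokenLinks(
  urls: string[],
  getStatus: (url: string) => Promise<number>,
): Promise<string[]> {
  const broken: string[] = [];
  for (const url of urls) {
    try {
      const status = await getStatus(url);
      if (status >= 400) broken.push(url);   // client and server errors
    } catch {
      broken.push(url);                      // unreachable hosts count too
    }
  }
  return broken;
}
```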
Finds URLs matching a specific pattern. Parameters: url (required): URL to search in; pattern (required): JavaScript-compatible regex pattern to match URLs against.
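Since the pattern is JavaScript-compatible, the matching step presumably amounts to compiling it with `new RegExp` and filtering the discovered URLs, as in this sketch (an invalid pattern string would throw a SyntaxError at compile time):

```typescript
// Filter a list of URLs by a JavaScript-compatible regex pattern string.
function matchUrls(urls: string[], pattern: string): string[] {
  const re = new RegExp(pattern);
  return urls.filter((u) => re.test(u));
}
```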
Generates a simple XML sitemap by crawling. Parameters: url (required): Root URL for sitemap crawl; maxDepth (optional, default: 2): Maximum crawl depth for discovering URLs (0-5); limit (optional, default: 1000): Maximum number of URLs to include in the sitemap.
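Once the crawl has produced a list of URLs, serializing them follows the sitemaps.org protocol. A minimal sketch, assuming the crawl supplies `urls` and applying the `limit` parameter described above:

```typescript
// Build a minimal sitemaps.org-style XML document from discovered URLs.
function buildSitemap(urls: string[], limit = 1000): string {
  const entries = urls
    .slice(0, limit)                          // enforce the URL cap
    .map((u) => `  <url><loc>${u.replace(/&/g, "&amp;")}</loc></url>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${entries}\n</urlset>`;
}
```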
Start the server with Node.js to make it available to any MCP client or IDE.
node path/to/downloaded/file.mjs