This MCP implementation, developed using TypeScript, provides a robust foundation for building and deploying web scraping and automation projects. It leverages the Apify platform and Crawlee library, offering a structured environment for creating scalable web crawlers and data extraction tasks. The implementation includes configuration files for ESLint, TypeScript, and Docker, ensuring code quality and consistency across different development environments. By abstracting common web scraping challenges and providing integration with Apify's cloud infrastructure, this tool enables developers to focus on building complex data acquisition workflows. It is particularly useful for projects requiring large-scale web data extraction, automated testing of web applications, or building AI training datasets from web sources.
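As an MCP server, this implementation exposes its capabilities as tools invoked over JSON-RPC. A minimal sketch of how a client might construct a `tools/call` request, following the standard MCP request shape; the tool name `search-actors` and its argument fields are illustrative assumptions, not taken from this server's actual schema:

```typescript
// Sketch: building an MCP "tools/call" JSON-RPC request.
// Tool name and argument keys below are assumptions for illustration.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// A client would send this over the server's transport (e.g. stdio or HTTP).
const req = makeToolCall(1, "search-actors", { search: "web scraper", limit: 5 });
console.log(JSON.stringify(req));
```

The actual tool names and input schemas should be discovered at runtime via the server's tool listing rather than hard-coded.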
Search for Actors in the Apify Store.
Retrieve detailed information about a specific Actor.
Call an Actor and get its run results. Use fetch-actor-details first to get the Actor's input schema.
An Actor tool to browse the web.
Search the Apify documentation for relevant pages.
Fetch the full content of an Apify documentation page by its URL.
Get detailed information about a specific Actor run.
Get a list of an Actor's runs, filterable by status.
Retrieve the logs for a specific Actor run.
Get metadata about a specific dataset.
Retrieve items from a dataset with support for filtering and pagination.
Generate a JSON schema from dataset items.
Get metadata about a specific key-value store.
List the keys within a specific key-value store.
Get the value associated with a specific key in a key-value store.
List all available datasets for the user.
List all available key-value stores for the user.
Add an Actor as a new tool for the user to call.
Retrieve output from an Actor call that was not included in the Actor tool's output preview.
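The call tool above notes that a client should fetch an Actor's input schema before calling it. A hedged sketch of that pattern: after retrieving the schema (here a hypothetical, simplified shape with only a `required` list), the client can check its input for missing required fields before issuing the call:

```typescript
// Sketch: pre-flight check of Actor input against a fetched input schema.
// The schema shape is simplified to just a "required" list for illustration;
// real Actor input schemas are full JSON Schema documents.
interface InputSchema {
  required?: string[];
}

// Return the required keys that are absent from the proposed input.
function missingRequired(
  schema: InputSchema,
  input: Record<string, unknown>
): string[] {
  return (schema.required ?? []).filter((key) => !(key in input));
}

// Hypothetical schema, as might be returned for a crawler Actor.
const schema: InputSchema = { required: ["startUrls"] };

missingRequired(schema, {});              // returns ["startUrls"]
missingRequired(schema, { startUrls: [] }); // returns []
```

Running this check client-side gives a clearer error than letting the Actor run fail on invalid input.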