An MCP server that analyzes the cost of LLM API calls by tracking tokens used and calculating costs based on model pricing. It provides detailed cost breakdowns and usage statistics for better budget management and optimization of AI applications.
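To illustrate the kind of calculation such a server performs, here is a minimal TypeScript sketch of per-call cost tracking. The MODEL_PRICING table, the recordUsage and costBreakdown functions, and the per-million-token rates are hypothetical placeholders for illustration, not the server's actual implementation or real vendor prices.

```typescript
// Hypothetical per-million-token rates (placeholder values, not real vendor pricing).
interface ModelPricing {
  inputPerMillion: number;   // USD per 1M input (prompt) tokens
  outputPerMillion: number;  // USD per 1M output (completion) tokens
}

const MODEL_PRICING: Record<string, ModelPricing> = {
  "example-small-model": { inputPerMillion: 0.5, outputPerMillion: 1.5 },
  "example-large-model": { inputPerMillion: 5.0, outputPerMillion: 15.0 },
};

interface UsageRecord {
  model: string;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
}

const usageLog: UsageRecord[] = [];

// Compute the cost of a single call and append it to the in-memory log.
function recordUsage(model: string, inputTokens: number, outputTokens: number): UsageRecord {
  const pricing = MODEL_PRICING[model];
  if (!pricing) {
    throw new Error(`No pricing configured for model "${model}"`);
  }
  const costUsd =
    (inputTokens / 1_000_000) * pricing.inputPerMillion +
    (outputTokens / 1_000_000) * pricing.outputPerMillion;
  const record = { model, inputTokens, outputTokens, costUsd };
  usageLog.push(record);
  return record;
}

// Aggregate a simple per-model cost breakdown from the log.
function costBreakdown(): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const r of usageLog) {
    totals[r.model] = (totals[r.model] ?? 0) + r.costUsd;
  }
  return totals;
}

// Example: record two calls and print the per-model totals.
recordUsage("example-small-model", 12_000, 3_400);
recordUsage("example-large-model", 8_000, 1_200);
console.log(costBreakdown());
```

A real implementation would expose operations like these as MCP tools and keep a persistent usage log; the sketch only shows the core arithmetic of tokens multiplied by per-token rates.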
No explicit actions are listed for this server; it may rely on standard commands, with its functionality documented in the project README and setup instructions.