A server that provides text-to-image generation capabilities by integrating with Stable Diffusion WebUI (ForgeUI/AUTOMATIC1111). It exposes tools for generating images with extensive parameter control including prompts, negative prompts, sampling steps, dimensions, and more, along with tools for managing models and upscaling images. The implementation handles authentication, manages output directories, and embeds generation parameters as image metadata. Built for users who want to generate AI art through natural language requests while maintaining fine-grained control over the generation process.
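As a rough sketch of the plumbing described above, a server like this would typically read its WebUI address, credentials, and output location from configuration, attach HTTP Basic auth (the WebUI's --api-auth mode) when credentials are present, and create the output directory on demand. The environment variable names below (SD_WEBUI_URL, SD_AUTH_USER, SD_AUTH_PASS, SD_OUTPUT_DIR) are assumptions for illustration, not the server's actual settings.

```python
# Minimal sketch, not the actual implementation: configuration, auth, and
# output-directory handling for talking to a Stable Diffusion WebUI API.
import os
from pathlib import Path

import requests

# Assumed environment variable names (hypothetical).
WEBUI_URL = os.environ.get("SD_WEBUI_URL", "http://127.0.0.1:7860")
AUTH_USER = os.environ.get("SD_AUTH_USER")
AUTH_PASS = os.environ.get("SD_AUTH_PASS")
OUTPUT_DIR = Path(os.environ.get("SD_OUTPUT_DIR", "./output"))


def webui_session() -> requests.Session:
    """Build a session with optional HTTP Basic auth (WebUI --api-auth)."""
    session = requests.Session()
    if AUTH_USER and AUTH_PASS:
        session.auth = (AUTH_USER, AUTH_PASS)
    return session


def ensure_output_dir() -> Path:
    """Create the configured output directory if it does not exist yet."""
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    return OUTPUT_DIR
```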
Generate images using Stable Diffusion. Parameters: prompt (required), negative_prompt, steps (default: 4, range: 1-150), width (default: 1024, range: 512-2048), height (default: 1024, range: 512-2048), cfg_scale (default: 1, range: 1-30), sampler_name (default: 'Euler'), scheduler_name (default: 'Simple'), seed (-1 for random), batch_size (default: 1, max: 4), restore_faces, tiling, output_path.
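The parameters above mirror the WebUI's txt2img API, so the tool presumably forwards them to /sdapi/v1/txt2img and writes the returned images with the generation settings embedded as PNG metadata. The sketch below is an assumption about that mapping, not the server's code; in particular, the "scheduler" field name and the prompt values are illustrative.

```python
# Hedged sketch of a txt2img request using the documented defaults, plus
# embedding the generation parameters in the PNG "parameters" text chunk.
import base64
import io
import json

import requests
from PIL import Image, PngImagePlugin

payload = {
    "prompt": "a watercolor fox in a snowy forest",
    "negative_prompt": "blurry, low quality",
    "steps": 4,
    "width": 1024,
    "height": 1024,
    "cfg_scale": 1,
    "sampler_name": "Euler",
    "scheduler": "Simple",   # field name for the scheduler is an assumption
    "seed": -1,              # -1 asks the WebUI for a random seed
    "batch_size": 1,
    "restore_faces": False,
    "tiling": False,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
resp.raise_for_status()

for i, b64 in enumerate(resp.json()["images"]):
    image = Image.open(io.BytesIO(base64.b64decode(b64)))
    meta = PngImagePlugin.PngInfo()
    meta.add_text("parameters", json.dumps(payload))  # embed generation settings
    image.save(f"output_{i}.png", pnginfo=meta)
```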
Get list of available Stable Diffusion models. No parameters required.
Set the active Stable Diffusion model. Parameters: model_name (required).
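The two model tools above likely correspond to the WebUI's model listing and options endpoints. A minimal sketch, assuming direct calls to /sdapi/v1/sd-models and /sdapi/v1/options:

```python
# Rough sketch (assumptions, not the server's code): list checkpoints, then
# switch the active one by setting sd_model_checkpoint via the options endpoint.
import requests

BASE = "http://127.0.0.1:7860"

# Each entry carries fields such as "title" and "model_name".
models = requests.get(f"{BASE}/sdapi/v1/sd-models", timeout=60).json()
print([m["title"] for m in models])

# Activate a checkpoint by title; the WebUI loads it for subsequent requests.
requests.post(
    f"{BASE}/sdapi/v1/options",
    json={"sd_model_checkpoint": models[0]["title"]},
    timeout=300,
).raise_for_status()
```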
Get list of available upscaler models. No parameters required.
Upscale one or more images using Stable Diffusion. Parameters: images (required), resize_mode (default: from env), upscaling_resize (default: from env), upscaling_resize_w (default: from env), upscaling_resize_h (default: from env), upscaler_1 (default: from env), upscaler_2 (default: from env), output_path.
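The upscaling tools above presumably wrap the WebUI's upscaler listing and batch "extras" endpoints. The sketch below is an assumption about that mapping; the payload fields, the example upscaler name, and the file paths are illustrative only.

```python
# Hedged sketch: discover installed upscalers, then upscale one image 2x
# through the batch extras endpoint and save the result.
import base64

import requests

BASE = "http://127.0.0.1:7860"

# List the upscaler models the WebUI has available.
upscalers = requests.get(f"{BASE}/sdapi/v1/upscalers", timeout=60).json()
print([u["name"] for u in upscalers])

with open("input.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

payload = {
    "resize_mode": 0,          # 0 = scale by factor, 1 = scale to width/height
    "upscaling_resize": 2,     # 2x upscale when resize_mode is 0
    "upscaler_1": "R-ESRGAN 4x+",  # example name; use one reported above
    "imageList": [{"data": encoded, "name": "input.png"}],
}
result = requests.post(f"{BASE}/sdapi/v1/extra-batch-images", json=payload, timeout=600)
result.raise_for_status()

with open("upscaled.png", "wb") as out:
    out.write(base64.b64decode(result.json()["images"][0]))
```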