# Fal AI

Fal AI provides fast, scalable access to state-of-the-art image generation models including FLUX, Stable Diffusion, Imagen, and more.

## Overview

| Property | Details |
|---|---|
| Description | Fal AI offers optimized infrastructure for running image generation models at scale with low latency. |
| Provider Route on LiteLLM | `fal_ai/` |
| Provider Doc | Fal AI Documentation ↗ |
| Supported Operations | `/images/generations` |

## Setup

### API Key

```python
import os

# Set your Fal AI API key
os.environ["FAL_AI_API_KEY"] = "your-fal-api-key"
```

Get your API key from fal.ai.
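Because a missing key only surfaces as an authentication error at call time, it can help to check for it up front. A minimal sketch (the helper name is illustrative, not part of LiteLLM):

```python
import os

def fal_key_configured() -> bool:
    """Return True if a Fal AI key is present in the environment."""
    return bool(os.environ.get("FAL_AI_API_KEY"))

# Fail fast with a clear message instead of a provider auth error later
if not fal_key_configured():
    print("FAL_AI_API_KEY is not set; image generation calls will fail.")
```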

## Supported Models

| Model Name | Description | Documentation |
|---|---|---|
| `fal_ai/fal-ai/flux-pro/v1.1-ultra` | FLUX Pro v1.1 Ultra - High-quality image generation | Docs ↗ |
| `fal_ai/fal-ai/imagen4/preview` | Google's Imagen 4 - Highest quality model | Docs ↗ |
| `fal_ai/fal-ai/recraft/v3/text-to-image` | Recraft v3 - Multiple style options | Docs ↗ |
| `fal_ai/fal-ai/stable-diffusion-v35-medium` | Stable Diffusion v3.5 Medium | Docs ↗ |
| `fal_ai/bria/text-to-image/3.2` | Bria 3.2 - Commercial-grade generation | Docs ↗ |

## Image Generation

### Usage - LiteLLM Python SDK

**Basic Image Generation**

```python
import litellm
import os

# Set your API key
os.environ["FAL_AI_API_KEY"] = "your-fal-api-key"

# Generate an image
response = litellm.image_generation(
    model="fal_ai/fal-ai/flux-pro/v1.1-ultra",
    prompt="A serene mountain landscape at sunset with vibrant colors",
)

print(response.data[0].url)
```
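Hosted inference calls can fail transiently (rate limits, timeouts), so wrapping the call in a retry with backoff is a common pattern. A minimal sketch using a generic callable; the `with_retries` helper is illustrative and not part of LiteLLM:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Usage (assumes litellm is installed and FAL_AI_API_KEY is set):
# response = with_retries(lambda: litellm.image_generation(
#     model="fal_ai/fal-ai/flux-pro/v1.1-ultra",
#     prompt="A serene mountain landscape at sunset",
# ))
```

In production you would likely narrow the `except` clause to the specific retryable exceptions your client raises rather than retrying on everything.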

### Usage - LiteLLM Proxy Server

#### 1. Configure your config.yaml

**Fal AI Image Generation Configuration**

```yaml
model_list:
  - model_name: flux-ultra
    litellm_params:
      model: fal_ai/fal-ai/flux-pro/v1.1-ultra
      api_key: os.environ/FAL_AI_API_KEY
    model_info:
      mode: image_generation

  - model_name: imagen4
    litellm_params:
      model: fal_ai/fal-ai/imagen4/preview
      api_key: os.environ/FAL_AI_API_KEY
    model_info:
      mode: image_generation

  - model_name: stable-diffusion
    litellm_params:
      model: fal_ai/fal-ai/stable-diffusion-v35-medium
      api_key: os.environ/FAL_AI_API_KEY
    model_info:
      mode: image_generation

general_settings:
  master_key: sk-1234
```

#### 2. Start LiteLLM Proxy Server

**Start Proxy Server**

```shell
litellm --config /path/to/config.yaml

# RUNNING on http://0.0.0.0:4000
```

#### 3. Make requests

**Generate via Proxy - OpenAI SDK**

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",
    api_key="sk-1234",
)

response = client.images.generate(
    model="flux-ultra",
    prompt="A beautiful sunset over the ocean",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)
```

### Using Model-Specific Parameters

LiteLLM forwards any parameters beyond the standard OpenAI-compatible ones directly to the Fal AI API, so you can pass model-specific options alongside the standard ones in the same request.

**Pass Model-Specific Parameters**

```python
import litellm

# Any parameters beyond the standard ones are forwarded to Fal AI
response = litellm.image_generation(
    model="fal_ai/fal-ai/flux-pro/v1.1-ultra",
    prompt="A beautiful sunset",
    # Model-specific Fal AI parameters
    aspect_ratio="16:9",
    safety_tolerance="2",
    enhance_prompt=True,
    seed=42,
)
```
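To make the pass-through behavior concrete, here is a small illustrative helper that separates standard parameters from provider-specific extras. The split mirrors the Supported Parameters table on this page; it is a sketch, not LiteLLM's internal logic:

```python
# Standard OpenAI-compatible parameters (per the table on this page)
STANDARD_PARAMS = {"prompt", "model", "n", "size", "api_key"}

def split_params(**kwargs):
    """Separate standard OpenAI-style params from Fal AI extras."""
    standard = {k: v for k, v in kwargs.items() if k in STANDARD_PARAMS}
    extra = {k: v for k, v in kwargs.items() if k not in STANDARD_PARAMS}
    return standard, extra

standard, extra = split_params(
    model="fal_ai/fal-ai/flux-pro/v1.1-ultra",
    prompt="A beautiful sunset",
    aspect_ratio="16:9",
    seed=42,
)
print(extra)  # {'aspect_ratio': '16:9', 'seed': 42}
```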

For the complete list of parameters supported by each model, see the model documentation linked in the Supported Models table above.

## Supported Parameters

Standard OpenAI-compatible parameters that work across all models:

| Parameter | Type | Description | Default |
|---|---|---|---|
| `prompt` | string | Text description of the desired image | Required |
| `model` | string | Fal AI model to use | Required |
| `n` | integer | Number of images to generate (1-4) | `1` |
| `size` | string | Image dimensions (maps to model-specific format) | Model default |
| `api_key` | string | Your Fal AI API key | Environment variable |
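The `size` value follows the OpenAI `"WIDTHxHEIGHT"` convention. As a sketch of how such a string can be parsed before being mapped to a model-specific format (the helper is illustrative, not LiteLLM's actual mapping code):

```python
def parse_size(size: str) -> tuple[int, int]:
    """Parse an OpenAI-style size string like '1024x1024' into (width, height)."""
    width, height = size.lower().split("x")
    return int(width), int(height)

print(parse_size("1024x1024"))  # (1024, 1024)
```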

## Getting Started

1. Sign up at fal.ai
2. Get your API key from your account settings
3. Set the `FAL_AI_API_KEY` environment variable
4. Choose a model from the Fal AI model gallery
5. Start generating images with LiteLLM

## Additional Resources