Using an AI agent? Copy the docs link for full API context:
https://docs.viggle.ai/llms-full.txt
## Rendering modes

Three modes are available: **On-Demand**, **Preprocessed**, and **Import Templates**.

### On-Demand

Send an image + video in one request. On-demand renders cost 1 credit/second of output video. No separate character charge.
```python
import requests, time

API_KEY, BASE = "YOUR_API_KEY", "https://apis.viggle.ai"
h = {"Authorization": f"Bearer {API_KEY}"}

# Submit the render job: reference image + driving video in one request
job = requests.post(f"{BASE}/api/render", headers=h, files={
    "ref_image": open("character.png", "rb"),
    "driving_video": open("dance.mp4", "rb"),
}).json()

# Poll until the job completes
while True:
    s = requests.get(f"{BASE}/api/render/{job['job_id']}", headers=h).json()
    if s["status"] == "complete":
        break
    time.sleep(3)

# Download the finished video from the CDN
video = requests.get(s["cdn_url"])
open("output.mp4", "wb").write(video.content)
```
**On-Demand Guide**: render options, polling, and error handling
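The polling loop above never times out and assumes the job always succeeds. Below is a more defensive sketch with the status-checking call injected so the retry logic stands on its own; the `"failed"` status value is our assumption, not confirmed by this page.

```python
import time

def poll_until_complete(get_status, interval=3.0, timeout=600.0, sleep=time.sleep):
    """Call get_status() until the job completes, with a timeout.

    get_status: callable returning a dict with a "status" key, e.g.
        lambda: requests.get(f"{BASE}/api/render/{job_id}", headers=h).json()
    """
    waited = 0.0
    while True:
        s = get_status()
        if s["status"] == "complete":
            return s
        if s["status"] == "failed":  # assumed failure status; adjust to the real API value
            raise RuntimeError(f"render failed: {s}")
        if waited >= timeout:
            raise TimeoutError(f"render still {s['status']!r} after {timeout}s")
        sleep(interval)
        waited += interval
```

Injecting `get_status` and `sleep` also makes the loop trivially unit-testable without network access.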
### Preprocessed

Create a character (1 credit) and a scene (free) once, then render with their IDs for lower latency.

```python
import requests

API_KEY, BASE = "YOUR_API_KEY", "https://apis.viggle.ai"
h = {"Authorization": f"Bearer {API_KEY}"}

# One-time: preprocess the character image (1 credit)
char = requests.post(f"{BASE}/api/characters/preprocess", headers=h,
    files={"image": open("character.png", "rb")}, data={"name": "My Character"}).json()

# One-time: preprocess the driving video into a reusable scene (free)
scene = requests.post(f"{BASE}/api/scenes/preprocess", headers=h,
    files={"video": open("dance.mp4", "rb")}, data={"name": "Dance"}).json()

# Render by ID: no re-upload needed
job = requests.post(f"{BASE}/api/render", headers=h, data={
    "character_id": char["character_id"], "scene_id": scene["scene_id"],
}).json()
```
**Preprocessing Guide**: polling, download, and asset reuse
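Once preprocessed, the same character ID can be reused across any number of scenes. A minimal sketch of that reuse (the helper name is ours, not part of the API), building the form payload for each render:

```python
def render_payloads(character_id, scene_ids):
    """Build POST /api/render form data for one character across many scenes."""
    return [{"character_id": character_id, "scene_id": sid} for sid in scene_ids]

# Each dict is ready to send as `data=` with requests.post(f"{BASE}/api/render", ...)
payloads = render_payloads("char_123", ["scene_a", "scene_b"])
```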
### Import Templates

Use templates from viggle.ai instead of uploading a driving video.

```python
import requests

API_KEY, BASE = "YOUR_API_KEY", "https://apis.viggle.ai"
h = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

# Import a template by its ID to create a reusable scene
scene = requests.post(f"{BASE}/api/scenes/import", headers=h,
    json={"template_id": "TEMPLATE_ID_FROM_VIGGLE"}).json()
```
**Import Templates Guide**: Hot Picks, multi-person templates, character mapping
## Model selection

Two avatar models are available:

| Model | Description |
| --- | --- |
| `V4_Preview` | Default model |
| `V3_Preview` | Alternative avatar model with a dedicated GPU pool |
On-demand renders: pass `model` on `POST /api/render`.
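For example, a V3 on-demand render; this is a sketch assuming `model` travels as an ordinary form field alongside the uploads, as it does for scene preprocessing:

```shell
curl -X POST "https://apis.viggle.ai/api/render" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "ref_image=@character.png" -F "driving_video=@dance.mp4" \
  -F "model=V3_Preview"
```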
Preprocessed renders: pass `model` when creating the scene, not at render time. The model is baked into the scene metadata and automatically used for all renders with that scene:

```shell
# Preprocess the scene with V3
curl -X POST "https://apis.viggle.ai/api/scenes/preprocess" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "video=@dance.mp4" -F "name=V3 Dance" -F "model=V3_Preview"

# Render: the model is auto-detected from the scene
curl -X POST "https://apis.viggle.ai/api/render" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d "character_id=CHAR_ID" -d "scene_id=SCENE_ID"
```

A scene preprocessed on V3 cannot be rendered with V4 (and vice versa); the API returns an error if the models don't match.
## Quick start

1. **Pick a mode.** Use the Rendering modes tabs above: On-Demand (fastest first render), Preprocessed (~3x faster at scale), or Import Templates.
2. **Render.** `POST /api/render`, then poll `GET /api/render/{job_id}` every 3-5 s until `complete` and download the video from `cdn_url`.
## Resources

- **Discord Community**: support and discussions
- **API Reference**: full endpoint documentation
- **Pricing**: 1 credit/sec (about $0.01/sec)
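At 1 credit per second of output video and roughly $0.01 per credit, cost estimation is simple arithmetic. A quick sketch; the rate constants mirror the pricing above, so adjust them if your plan differs:

```python
CREDITS_PER_SECOND = 1    # render cost from the pricing above
USD_PER_CREDIT = 0.01     # approximate; check current pricing

def estimate_cost(output_seconds: float) -> tuple[float, float]:
    """Return (credits, approximate USD) for a render of the given length."""
    credits = output_seconds * CREDITS_PER_SECOND
    return credits, credits * USD_PER_CREDIT

credits, usd = estimate_cost(30)   # a 30-second clip: 30 credits, about $0.30
```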