    Getting Started with APIPod API

    APIPod is a model aggregation and scheduling infrastructure designed for future-ready AI-native applications. We connect global top-tier LLMs via OpenAI-, Anthropic-, and Gemini-compatible interfaces and provide Standardized API Definitions for Image, Video, Audio, and other models, delivering enterprise-grade high availability.
    In today's fragmented AI ecosystem, developers often face challenges such as inconsistent interfaces (especially for multi-modal models), unstable services, and complex billing. APIPod aims to solve these pain points, allowing you to focus on building great products rather than maintaining infrastructure.
    We aim to be transparent, practical, and developer-friendly. Please read this carefully before going to production.

    1. Why Choose APIPod?#

    We are not just an API proxy, but an intelligent, fully multi-modal AI traffic scheduling center.
    Multi-Modal Aggregation
    Beyond text, we deeply integrate multi-modal models like Nano Banana, Veo, Runway, Luma, and Suno, empowering your creative applications in one stop.
    Industrial Standard API
    LLM services are fully compatible with OpenAI & Anthropic interface standards, allowing zero-cost migration for existing projects; for multi-modal models like image and video, we designed a Unified Standardized Interface to smooth out parameter differences between vendors.
    Enterprise-Grade SLA
    Built-in multi-level Smart Fallback mechanism automatically switches to backup links seamlessly when an upstream service fluctuates.
    Intelligent Cost Control
    Based on real-time bidding and performance monitoring, our intelligent routing algorithm automatically finds the lowest cost call path for you while ensuring quality.
    Cloud Native Architecture
    Elastic architecture built on Kubernetes, leveraging container orchestration for auto-scaling and self-healing to guarantee business stability under extreme concurrency from the bottom up.
    Full-Link Observability
    Provides usage analysis, latency monitoring, and cost auditing down to the Token level, helping you deeply understand your business operation status.

    2. 3-Minute Quick Integration#

    APIPod's design philosophy is "Plug and Play". Follow these steps to add powerful AI capabilities to your application in minutes.
    Step 1: Create Account & Credentials
    Go to the APIPod Console to sign up, and generate your first API Key (sk-...) on the Key Management page.
    Step 2: Configure Your Application
    For LLM models, APIPod is compatible with both OpenAI and Anthropic SDKs. You can freely choose the official library based on your project requirements.
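    As a minimal Python sketch: the base URL below is an assumption (confirm the real endpoint in your console), and the request shape follows the standard OpenAI chat completions format that APIPod is compatible with.

    ```python
    """Sketch of calling an OpenAI-compatible chat endpoint on APIPod.

    BASE_URL is hypothetical -- confirm the real value in the APIPod console.
    """
    import json
    import urllib.request

    BASE_URL = "https://api.apipod.ai/v1"  # assumed; check your console

    def build_chat_request(api_key: str, model: str, messages: list[dict]) -> urllib.request.Request:
        """Build (but do not send) an OpenAI-style chat completions request."""
        payload = json.dumps({"model": model, "messages": messages}).encode()
        return urllib.request.Request(
            f"{BASE_URL}/chat/completions",
            data=payload,
            headers={
                "Authorization": f"Bearer {api_key}",  # your sk-... key
                "Content-Type": "application/json",
            },
            method="POST",
        )

    def chat(api_key: str, model: str, messages: list[dict]) -> dict:
        """Send the request and return the parsed JSON response."""
        with urllib.request.urlopen(build_chat_request(api_key, model, messages)) as resp:
            return json.load(resp)
    ```

    With the official OpenAI or Anthropic SDK, you would instead point the client's `base_url` at APIPod and pass your key as `api_key` when constructing the client.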
    Step 3: Explore More Models
    After a successful call, visit the Model Plaza to browse the hundreds of models we support:
    Text (LLM): GPT-4o, Claude 4.6 Sonnet, Gemini 3 Pro Preview...
    Image: Nano Banana, Seedream...
    Video: Veo 3.1...
    Audio/Music: Suno (Coming Soon)

    3. Available Models & Playground#

    You can find the latest supported models on our Market page:
    👉 https://www.apipod.ai/models
    We continuously update and onboard new models as soon as they are stable.
    Each model page links to its Playground, where you can test and experiment directly in our UI before calling the API.
    The Playground is the best place to understand model behavior, parameters, and output formats.

    4. Pricing#

    The complete and up-to-date pricing list is available here:
    👉 https://www.apipod.ai/pricing
    Our prices are typically lower than the official APIs; for some models, the discount reaches up to 90%.
    Pricing may change as upstream providers adjust their costs, so always refer to the pricing page for the latest numbers.

    5. Creating and Securing Your API Key#

    Create and manage your API keys here:
    👉 https://www.apipod.ai/console/api-keys
    Important security notes:
    Never expose your API key in frontend code (browser, mobile apps, public repositories).
    Treat your API key as a secret.
    To help protect your usage, we provide:
    Rate limits per key (hourly, daily, and total usage caps)
    IP whitelist support, allowing only approved server IPs to access the API
    These features help prevent accidental overuse and unauthorized requests.
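    One common way to keep the key out of source code (a general pattern, not APIPod-specific) is to read it from an environment variable on the server and fail fast if it is missing:

    ```python
    import os

    def load_api_key(env_var: str = "APIPOD_API_KEY") -> str:
        """Read the API key from the environment; never hardcode it in client code."""
        key = os.environ.get(env_var)
        if not key:
            raise RuntimeError(f"{env_var} is not set; export it on the server")
        if not key.startswith("sk-"):
            raise RuntimeError(f"{env_var} does not look like an API key (expected sk-... prefix)")
        return key
    ```

    The environment-variable name here is just a convention; any secrets manager your deployment already uses works equally well.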

    6. Required Request Headers#

    Every API request must include the correct authentication headers.
    If these headers are missing or incorrect, you may receive:
    {"code":401,"msg":"You do not have access permissions"}
    Always double-check your headers when debugging authentication issues.
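    Since the error body shown above is plain JSON, a small helper can turn it into a clear exception during debugging. This is a sketch; the field names follow the sample 401 response:

    ```python
    import json

    class APIPodAuthError(Exception):
        """Raised when the API rejects a request for missing/invalid credentials."""

    def check_response(body: str) -> dict:
        """Parse a response body and raise if it is the 401 permission error."""
        data = json.loads(body)
        if data.get("code") == 401:
            raise APIPodAuthError(data.get("msg", "permission denied"))
        return data
    ```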

    7. Logs & Task Details#

    You can inspect all your historical tasks here:
    👉 https://www.apipod.ai/console/logs
    For each task, you can view:
    Creation time
    Model used
    Input parameters
    Task status
    Credit consumption
    Final results or error details
    If you ever suspect incorrect credit usage, this page is the source of truth for verification.

    8. Data Retention Policy#

    Please note our retention rules:
    Generated media files: stored for 14 days, then automatically deleted
    Log records (text / metadata): stored for 2 months, then automatically deleted
    If you need long-term access, make sure to download and store results on your side in time.
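    Since media files expire after 14 days, it is worth archiving results as soon as a task finishes. A minimal stdlib sketch (the result URL comes from the task detail response):

    ```python
    import pathlib
    import urllib.request

    def archive_result(url: str, dest: str) -> pathlib.Path:
        """Download a generated file to local storage before it expires upstream."""
        path = pathlib.Path(dest)
        path.parent.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(url) as resp:
            path.write_bytes(resp.read())
        return path
    ```

    In production you would likely write to object storage (S3, GCS, etc.) instead of local disk, but the timing concern is the same: download within the retention window.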

    9. Asynchronous Task Model#

    All generation tasks on APIPod are asynchronous.
    A successful request returns:
    HTTP 200
    A task_id
    Status Verification
    A 200 OK response only means the task was successfully created. It does not mean the task is completed.
    To get the final result, you must either:
    Provide a callback (webhook) URL in the request, or
    Actively poll the "query record info" API using the task_id
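    The polling path can be sketched as follows. The status values and the fetch function are placeholders (the exact endpoint and response fields are documented under "Query Image Task"), so the query call is injected rather than hardcoded:

    ```python
    import time
    from typing import Callable

    def poll_task(task_id: str,
                  fetch_status: Callable[[str], dict],
                  interval: float = 2.0,
                  timeout: float = 300.0) -> dict:
        """Poll until the task leaves the in-progress states or we time out.

        fetch_status should call the query API for task_id and return its
        parsed JSON; the status names below are assumptions for illustration.
        """
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            detail = fetch_status(task_id)
            if detail.get("status") not in ("pending", "processing"):
                return detail
            time.sleep(interval)
        raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
    ```

    For high-volume workloads, prefer the callback/webhook option: it avoids wasted polling requests entirely.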

    10. Rate Limits & Concurrency#

    By default, we apply the following limits:
    Up to 20 new generation requests per 10 seconds
    This typically allows 100+ concurrent running tasks
    Limits are applied per account
    If you exceed the limit:
    Requests will be rejected with HTTP 429
    Rejected requests will not enter the queue
    For most users, this is more than sufficient.
    If you consistently hit 429 errors, you may contact support to request a higher limit — approvals are handled cautiously.
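    Because rejected requests do not enter a queue, the client should back off and retry on 429 itself. A generic exponential-backoff sketch, where the submit callable stands in for your actual request and signals a 429 by raising:

    ```python
    import random
    import time
    from typing import Callable

    class RateLimited(Exception):
        """Raised by the submit callable when the API answers HTTP 429."""

    def submit_with_backoff(submit: Callable[[], dict],
                            max_retries: int = 5,
                            base_delay: float = 1.0) -> dict:
        """Retry a submission with exponential backoff plus jitter on 429s."""
        for attempt in range(max_retries):
            try:
                return submit()
            except RateLimited:
                if attempt == max_retries - 1:
                    raise
                # wait base_delay * 1, 2, 4, ... plus proportional jitter
                time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
        raise RuntimeError("unreachable")
    ```

    Jitter spreads retries out so that many clients hitting the limit at once do not all retry in lockstep.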

    11. Developer Support#

    The recommended support channels are available directly from the dashboard (bottom-left menu):
    Get help on Discord
    Get help on Telegram
    What you get:
    Private, 1-on-1 channels
    Your data and conversations remain confidential
    Faster and more technical responses
    Support hours:
    21:00 UTC – 17:00 UTC (next day)
    You may also email us at [email protected], but this is not the preferred or fastest option.

    12. Stability Expectations#

    We provide access to top-tier, highly competitive APIs at very aggressive pricing.
    That said:
    We are not perfect
    Our overall stability may be slightly lower than official providers
    This is a conscious trade-off
    In practice, APIPod is stable enough to support production workloads and long-term business growth, but we believe in setting realistic expectations upfront.

    13. About the Team#

    APIPod is built by a small startup team.
    We move fast
    We care deeply about developer experience
    We are constantly improving
    At the same time, we acknowledge that:
    Not everything is perfect
    We can't satisfy every use case immediately
    Your feedback helps us improve — and we genuinely appreciate it.
    Modified at 2026-04-05 01:52:35