
Seedance 2.0 Image-to-Video Guide

Image-to-video in Seedance 2.0 starts from one or more reference images plus a text prompt. Based on public documentation, you can use up to 9 images in a single request. The model uses the image(s) for appearance and composition and the text for action, camera, and style. This is the main workflow for product demos, character-driven clips, and “character lock” consistency.

Source basis and reading boundary

These guides are written as third-party reference summaries, not official product documentation or support content.

Sources used

Re-checked against the current ByteDance Seedance 2.0 project page, the Seed Models page, Dreamina help/resources, and BytePlus / ModelArk docs on March 24, 2026.

Boundary

Use these pages to understand public claims, common workflows, and terminology. Do not read them as official support, authorization, or product-owner statements.

Timeliness

Access routes, input limits, queue behavior, pricing, and API availability can change by surface. Treat Dreamina, BytePlus / ModelArk, and partner routes as separate products until current docs confirm otherwise.

Source basis

This page summarizes publicly available materials. Specs, pricing, and access may change, so verify with primary sources before making decisions.

Upload and describe

Upload your reference image(s), subject to platform limits (e.g. up to 9 images, often up to 30 MB each). Then write a prompt that describes motion, camera, and style: the model anchors appearance from the image and drives action from the text. If your interface supports the @ tag system, use it (e.g. “@Image1 as character reference”) to make clear which image defines the character, which defines the scene, and so on.
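
If you are scripting against an API surface rather than a web UI, the same upload-and-describe pattern applies. The sketch below is a non-authoritative illustration: the endpoint URL, JSON field names, and model identifier are assumptions, not a documented Seedance 2.0 API; take the real request schema from the BytePlus / ModelArk documentation.

```python
# Minimal sketch of an image-to-video request with reference images.
# ASSUMPTIONS: the endpoint URL, field names, and "seedance-2.0" model id
# are illustrative placeholders, not a documented Seedance 2.0 API;
# check the BytePlus / ModelArk docs for the real request schema.
import base64
from pathlib import Path

import requests

API_URL = "https://example.com/v1/image-to-video"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"


def encode_image(path: str) -> str:
    """Read an image file and return it as a base64 string."""
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")


payload = {
    "model": "seedance-2.0",  # hypothetical model identifier
    "prompt": (
        "@Image1 as character reference, walking through a rainy street "
        "at night, slow tracking shot, cinematic lighting"
    ),
    # Public docs allow up to 9 reference images per request.
    "reference_images": [encode_image("character.png")],
    "duration_seconds": 5,
    "resolution": "1080p",
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json())  # typically a task id or video URL, depending on the surface
```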

Multi-shot from one reference

To keep the same character or product across shots, reuse the same reference image in each generation and in the prompt (e.g. “same character as reference, keep outfit and expression consistent”). Public reports suggest this “character lock” behavior is one of Seedance 2.0’s strengths for ads and short narratives.
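
Expressed as a script, the multi-shot pattern is just a loop that pairs one fixed reference image and one fixed identity clause with a different action per shot. The `generate_clip` function below is a hypothetical placeholder for whatever call your platform exposes; only the reuse pattern is the point.

```python
# Sketch: reuse one reference image and one identity clause for every shot.
# `generate_clip` is a hypothetical placeholder, not a real SDK function.
REFERENCE_IMAGE = "character.png"
IDENTITY = "same character as reference, keep outfit and expression consistent"

SHOTS = [
    "medium shot, she opens the cafe door, handheld camera",
    "close-up, she smiles at the barista, shallow depth of field",
    "wide shot, she sits by the window, slow push-in",
]


def generate_clip(prompt: str, reference_image: str) -> str:
    """Placeholder: wire this to your platform's image-to-video call."""
    raise NotImplementedError


for i, shot in enumerate(SHOTS, start=1):
    prompt = f"@Image1 as character reference. {IDENTITY}. {shot}"
    print(f"shot {i}: {prompt}")
    # clip_url = generate_clip(prompt, REFERENCE_IMAGE)
```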

Character consistency tips

Third-party guides suggest: use 1–2 reference images (more can increase drift); ensure the character occupies 60–80% of the frame; keep lighting and angle consistent across references; use immutable anchors in the prompt (face, hair, clothing) and repeat them to reduce feature erosion. For 4–6 second clips, one intentional change per clip (e.g. one camera move) often yields more stable output.
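
A simple way to apply the “immutable anchors” advice is to keep the anchor phrase in one place and prepend it to every clip prompt, so the wording cannot drift between shots. The anchor text and helper below are only an example of that convention, not an official prompt format.

```python
# Sketch: repeat the same identity anchors verbatim in every clip prompt.
ANCHORS = "young woman, short black hair, round glasses, red wool coat"


def clip_prompt(action: str, camera_move: str) -> str:
    """Build a prompt with fixed anchors and exactly one camera move per clip."""
    return (
        f"@Image1 as character reference. Same character as reference: {ANCHORS}. "
        f"{action}. {camera_move}."
    )


print(clip_prompt("she waves at the camera", "slow dolly in"))
print(clip_prompt("she turns and walks away", "static wide shot"))
```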

Examples & sources

Product shot to rotation

Per public docs and third-party tutorials, upload a product reference image and describe rotation, zoom, or a simple scene. High-resolution, well-lit single-subject images tend to work better.

@Image1 as product reference, product slowly rotating, clean background, slow dolly in, 2K.
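
Using the same hypothetical request shape as the sketch earlier on this page, a product-rotation call only swaps the reference image and the prompt text. Everything here other than the prompt wording is an assumption.

```python
# Sketch: product-rotation variant of the hypothetical payload shown earlier.
payload = {
    "model": "seedance-2.0",  # hypothetical model identifier
    "prompt": (
        "@Image1 as product reference, product slowly rotating, "
        "clean background, slow dolly in, 2K"
    ),
    # Path or base64 string, depending on how your platform accepts images.
    "reference_images": ["product.png"],
    "duration_seconds": 5,
}
print(payload["prompt"])
```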

Frequently asked questions

What image format and size work best?

Check your platform’s or the official Seedance 2.0 documentation for current limits (e.g. file size, aspect ratio). In general, high-resolution, well-lit images with a clear subject produce better results. Avoid heavily watermarked or low-quality sources.
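
If you want to catch obviously weak references before uploading, a quick local check of resolution and file size is enough. The thresholds below (1024 px on the short side, 30 MB) are illustrative only and should be replaced with whatever limits your platform documents.

```python
# Sketch: pre-flight check of a reference image before upload.
# The 1024 px and 30 MB thresholds are illustrative, not official limits.
import os

from PIL import Image


def check_reference(path: str) -> list[str]:
    """Return a list of warnings about a reference image."""
    warnings = []
    size_mb = os.path.getsize(path) / (1024 * 1024)
    if size_mb > 30:
        warnings.append(f"file is {size_mb:.1f} MB; many surfaces cap uploads around 30 MB")
    with Image.open(path) as img:
        width, height = img.size
    if min(width, height) < 1024:
        warnings.append(f"resolution {width}x{height} is low; prefer higher-resolution sources")
    return warnings


print(check_reference("character.png"))
```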

How do I keep character consistent across shots?

Reuse the same reference image in each generation and reference it in the prompt (e.g. 'same character as reference'). Third-party guides suggest using 1–2 references, keeping the subject at 60–80% of the frame, and repeating identity anchors in the prompt. See our best practices guide for more.


Why does my character drift between shots?

Common causes: too many reference images, inconsistent lighting, or missing 'same character' in the prompt. Use one clear reference per character and refer to it in every shot. See our character consistency tips in the image-to-video guide.


How many reference images should I use?

Public docs allow up to 9, but third-party guides often recommend 1–2 for character consistency to reduce drift. Use more only when you need multiple angles or scenes.

What causes image-to-video generation to fail?

According to public reports, common causes include vague prompts, poor reference image quality or inconsistent lighting, or missing motion or camera descriptions. Contradictory instructions (e.g. simultaneous close-up and wide shot) can also cause failures. See our best practices guide for more.
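
A lightweight prompt lint can flag the two causes that are easiest to detect mechanically: missing motion or camera language, and contradictory shot types in the same prompt. The keyword lists below are illustrative, not an official validation rule.

```python
# Sketch: flag prompts with no motion/camera language or contradictory shot types.
# Keyword lists are illustrative, not an official validation rule.
MOTION_WORDS = ("walk", "rotate", "zoom", "dolly", "pan", "track", "turn", "move")
SHOT_TYPES = ("close-up", "wide shot", "medium shot")


def lint_prompt(prompt: str) -> list[str]:
    """Return a list of likely problems with an image-to-video prompt."""
    text = prompt.lower()
    issues = []
    if not any(word in text for word in MOTION_WORDS):
        issues.append("no motion or camera description found")
    named_shots = [shot for shot in SHOT_TYPES if shot in text]
    if len(named_shots) > 1:
        issues.append(f"contradictory shot types in one prompt: {named_shots}")
    return issues


print(lint_prompt("a cat on a table, close-up and wide shot"))
```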


Why does the model ignore my uploaded references?

The most common cause is forgetting to mention the uploaded files with @ tags in your prompt. Upload your image, then write '@Image1 as the character' or similar in the prompt text. Without an explicit @ tag, the model may not use your reference.
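
One way to avoid this mistake in a scripted workflow is to check, before sending the request, that every uploaded reference index actually appears as an @ tag in the prompt. The check below assumes the `@Image1`, `@Image2`, ... tagging convention described above.

```python
# Sketch: verify every uploaded reference is mentioned with an @ tag.
# Assumes the "@Image1", "@Image2", ... tagging convention described above.
import re


def untagged_references(prompt: str, num_references: int) -> list[int]:
    """Return the 1-based indices of references never mentioned in the prompt."""
    tagged = {int(n) for n in re.findall(r"@Image(\d+)", prompt)}
    return [i for i in range(1, num_references + 1) if i not in tagged]


prompt = "@Image1 as the character, @Image2 as the scene, slow pan left"
print(untagged_references(prompt, num_references=3))  # -> [3]
```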

How many times should I generate before giving up?

AI video generation is stochastic — each run may produce different results. Public guides recommend generating at least 3–5 times per prompt before adjusting. Keep the best takes, then refine your prompt based on what works.
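
In a scripted workflow, that advice is just a small loop: run the same prompt a handful of times, keep every output, and compare afterwards. The `generate_clip` call is again a hypothetical placeholder for your platform's actual request.

```python
# Sketch: generate several takes of the same prompt and keep them all.
# `generate_clip` is a hypothetical placeholder, not a real SDK function.
def generate_clip(prompt: str) -> str:
    """Placeholder: wire this to your platform's image-to-video call."""
    raise NotImplementedError


prompt = "@Image1 as character reference, she laughs, handheld close-up"
takes = []
for attempt in range(5):  # public guides suggest 3 to 5 runs per prompt
    try:
        takes.append(generate_clip(prompt))
    except NotImplementedError:
        break  # remove once generate_clip is implemented
print(f"kept {len(takes)} takes")
```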
