Seedance 2.0 Tutorial — How to Use It Step by Step

This tutorial covers the main ways to use Seedance 2.0: text-to-video (no reference) and image-to-video (with reference images). Steps are based on public tutorials and the official launch material; your platform’s UI may differ.

Refresh cadence: Every few days

Source basis and reading boundary

These guides are written as third-party reference summaries, not official product documentation or support content.

Sources used

Re-checked against the current ByteDance Seedance 2.0 project page, the Seed Models page, Dreamina help/resources, and BytePlus / ModelArk docs on March 24, 2026.

Boundary

Use these pages to understand public claims, common workflows, and terminology. Do not read them as official support, authorization, or product-owner statements.

Timeliness

Access routes, input limits, queue behavior, pricing, and API availability can change by surface. Treat Dreamina, BytePlus / ModelArk, and partner routes as separate products until current docs confirm otherwise.

Prompt templates

Prompt template cluster

Use the dedicated prompt-template cluster for reusable templates, daily Input/Output updates, and future media evidence.

Coming soon (no assets yet)

Step 1: Choose your mode

Text-to-video: enter only a text prompt. Good for quick drafts and when you don’t have reference assets.

Image-to-video: upload one or more reference images (up to 9 in one request, per public docs) and add a text prompt. Use this for character or product consistency and when you want to “animate” a still.
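If you are scripting generations rather than clicking through a UI, the difference between the two modes shows up as the presence or absence of a reference-image list in the request. The sketch below is illustrative only: the field names (`model`, `prompt`, `reference_images`) and the 9-image cap are assumptions based on public descriptions, not a confirmed Seedance 2.0 API schema.

```python
# Hypothetical request payloads for the two modes. Field names and the
# 9-image cap are assumptions from public docs, not a confirmed schema.

text_to_video = {
    "model": "seedance-2.0",  # assumed model identifier
    "prompt": "An orange cat napping on a windowsill, slow dolly in, 2K.",
}

image_to_video = {
    "model": "seedance-2.0",
    "prompt": "Same character as reference, walking through rain, tracking shot.",
    "reference_images": [  # public docs mention up to 9 per request
        "https://example.com/character-front.png",
        "https://example.com/character-side.png",
    ],
}

assert len(image_to_video["reference_images"]) <= 9
```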

Step 2: Write your prompt

Many public guides suggest a structure like: subject + action + environment + camera movement + light/mood + style + (optional) sound. Be specific: avoid vague words like “beautiful” or “cool,” and name the motion and camera work explicitly (e.g., “slow dolly in,” “tracking shot”). For multi-shot work or a consistent character, reference your uploads in the prompt (e.g., “same character as reference”). See our prompt writing guide for formulas and examples.
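To make the formula concrete, here is a tiny helper that assembles a prompt from the seven parts. The function and its parameter names are purely illustrative; only the part order comes from the public guides.

```python
def build_prompt(subject, action, environment, camera, mood, style, sound=None):
    """Join the seven prompt parts (sound is optional) into one string."""
    parts = [subject, action, environment, camera, mood, style, sound]
    return ", ".join(p.strip() for p in parts if p)

print(build_prompt(
    subject="an orange cat",
    action="napping on a windowsill",
    environment="sunlight streaming in",
    camera="slow dolly in",
    mood="warm mood",
    style="2K cinematic",
))
# -> an orange cat, napping on a windowsill, sunlight streaming in,
#    slow dolly in, warm mood, 2K cinematic
```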

Step 3: Set duration and resolution

Seedance 2.0 typically allows 4–15 seconds and multiple resolutions up to 2K (2048×1080). Third-party reports often mention 2K cinema-style output taking around 45–60 seconds to generate; simple text-to-video may complete in around 30 seconds. Choose duration and resolution in your platform’s settings before generating. Shorter clips and lower resolution generally finish faster.
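If you script generations, a small guard against the publicly reported limits can catch bad settings before you spend a render. The 4–15 s and 2048×1080 bounds below come from third-party reports and may differ on your platform.

```python
# Publicly reported limits (third-party reports; verify for your platform).
MIN_SECONDS, MAX_SECONDS = 4, 15
MAX_WIDTH, MAX_HEIGHT = 2048, 1080

def validate_settings(duration_s: int, width: int, height: int) -> None:
    """Raise if settings exceed the reported Seedance 2.0 limits."""
    if not MIN_SECONDS <= duration_s <= MAX_SECONDS:
        raise ValueError(f"duration must be {MIN_SECONDS}-{MAX_SECONDS}s, got {duration_s}")
    if width > MAX_WIDTH or height > MAX_HEIGHT:
        raise ValueError(f"{width}x{height} exceeds the reported 2K cap")

validate_settings(duration_s=8, width=1920, height=1080)  # passes silently
```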

Step 4: Generate and refine

Submit the job and wait for the result. You can then use video extension to continue the clip or use editing features to modify parts of the video. For a multi-shot story, generate the next shot with the same character reference and consistent style.
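Hosted surfaces typically run generation asynchronously: you submit a job, then poll for completion. The sketch below assumes a hypothetical REST endpoint and response fields (`task_id`, `status`, `video_url`); none of these are confirmed Seedance 2.0 API names, so map them onto whatever your platform documents.

```python
import time

import requests  # third-party: pip install requests

API_BASE = "https://api.example.com/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def generate(payload: dict, poll_every: float = 5.0, timeout: float = 300.0) -> str:
    """Submit a generation job, poll until it finishes, return the video URL.

    Endpoint paths and field names are assumptions for illustration,
    not a documented Seedance 2.0 schema.
    """
    resp = requests.post(f"{API_BASE}/generations", json=payload, headers=HEADERS)
    resp.raise_for_status()
    task_id = resp.json()["task_id"]

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = requests.get(f"{API_BASE}/generations/{task_id}", headers=HEADERS).json()
        if status["status"] == "succeeded":
            return status["video_url"]
        if status["status"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(poll_every)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```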

Input/Output daily update workflow (coming soon)

For this prompt-template rollout, treat every run as Input/Output evidence: Input = tested prompt, references, and settings; Output = observed quality notes, image/video links, and pass/fail rubric. We will attach daily materials to this section once assets are available.
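One lightweight way to keep that evidence consistent is a fixed record per run, serialized to JSON for the daily package. The fields below simply mirror the Input/Output items listed above; the structure itself is an editorial convention, not part of any Seedance tooling.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class RunRecord:
    """One Input/Output evidence entry; fields mirror the list above."""
    template_id: str
    prompt: str
    references: list = field(default_factory=list)   # reference image URLs/paths
    settings: dict = field(default_factory=dict)     # duration, resolution, etc.
    passed: bool = False                             # pass/fail rubric result
    notes: str = ""                                  # stability/style observations
    evidence_links: list = field(default_factory=list)

record = RunRecord(
    template_id="cat-windowsill-001",
    prompt="An orange cat napping on a windowsill, slow dolly in, 2K.",
    settings={"duration_s": 8, "resolution": "1920x1080"},
    passed=True,
    notes="Stable subject; mild flicker in the background.",
)
print(json.dumps(asdict(record), indent=2))
```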

Examples & sources

Text-to-video starter example

Per public tutorials, beginners can try a simple text-to-video: subject + action + environment + camera + style. Results depend on the platform.

An orange cat napping on a windowsill, sunlight streaming in, slow dolly in, warm mood, 2K.
Source: ByteDance Seedance 2.0 project page

Daily template run (coming soon)

Input: selected prompt template + fixed quality rubric. Output: screenshot/video proof + notes on stability and style fit. Placeholder entry until the first daily asset package is uploaded.

[Input] template-id + prompt + references + settings -> [Output] pass/fail notes + evidence links (coming soon).

Frequently asked questions

How long does it take to generate a video?

It depends on length, resolution, and server load. Third-party reports suggest simple text-to-video can complete in around 30 seconds, while 2K cinema-style clips often take about 45–60 seconds; shorter or lower-resolution clips generally finish sooner.

Can I use my own images?

Yes. Image-to-video and multi-shot workflows support your own reference images (subject to platform limits, e.g. up to 9 images). Ensure you have rights to use them and comply with the platform’s content policy.

What is video extension?

Video extension continues an existing clip beyond its original length. The model uses the last frames as reference. See our capabilities page for more on this feature.

Video Extension
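If your surface has no one-click extension, a common manual fallback is to grab a frame near the end of the clip and feed it back as the reference image for a new image-to-video request. The ffmpeg flags below are standard; the follow-up request shape depends on your platform.

```python
import subprocess

# Extract a frame from ~1 second before the end of the clip.
# -sseof seeks relative to end-of-file; -update 1 writes a single image.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-sseof", "-1",      # seek to 1s before the end
        "-i", "clip.mp4",
        "-update", "1",      # overwrite one output image
        "-q:v", "2",         # high JPEG quality
        "last_frame.jpg",
    ],
    check=True,
)
# last_frame.jpg can now be uploaded as the reference for the next shot.
```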

Where can I learn prompt formulas?

See our prompt writing guide for the seven-part structure, @ reference tags, and examples. Our best practices guide covers common pitfalls.

Seedance 2.0 Prompt Writing Tips — How to Write Better Video Prompts

What is the difference between Jimeng and Dreamina?

即梦 (Jimeng) is the China-mainland host surface accessed via Douyin login; Dreamina is the international surface accessed via TikTok, Google, or email. Both run Seedance 2.0, but UI, pricing, content-moderation policies, and feature rollouts can differ. Always confirm which surface the tutorial you are following was written for.

Reviewer: Seedance2 Editorial Team
Content basis: Third-party compilation from public sources

This content is compiled from publicly available materials and does not represent official product documentation.