Golden calligraphy particle
Creative Template & Effect Replication
Reference @video1's particle effect; the @image1 font gradually appears at the center of the frame.
Use reference images or videos to replicate creative templates and complex visual effects accurately.

How it works: provide a reference image or video that captures the style, rhythm, or visual treatment you want. The model reads motion patterns, color palette, composition, and visual texture from the reference, then generates new content in that style with your subject and scene. Use @image1 or @video1 tags to specify which asset serves as the style source.

When to use this: scaling a proven ad template across products or markets; adapting a viral visual style for your brand; recreating a film-look treatment (vintage grain, cyberpunk neon, hand-painted animation) without manual post-production; personalizing creatives at scale for interactive campaigns where each user receives a styled variant.

Tips and practical notes: you don't need professional VFX terminology; plain descriptions like "match @video1's color mood and transition rhythm" work well. If you want only the color palette from one reference and the motion from another, split them: @image1 for color, @video1 for movement. The model handles blending. For branded templates, keeping the same reference across generations ensures that campaign visual identity stays consistent.
Seedance 2.0 isn’t limited to generating fresh footage; it can also adapt from references. Creative transitions, ad spots, film clips, and complex edits can all be guided by a reference image or video. The model reads motion, camera language, and visual structure, then recreates a similar result. You don’t need jargon; just describe what you want to reference, for example “reference @video1’s rhythm and camera moves, @image1’s character look,” and the model can generate a corresponding version.
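The @ tag convention described above can be sketched as a small helper that pairs a prompt with the assets its tags point to. This is a minimal illustration only: `build_reference_prompt` and the request payload shape are hypothetical, not Seedance's actual API; only the @image1/@video1 tag naming follows the convention in this guide.

```python
# Hypothetical sketch of the @ tag referencing convention.
# The function name and payload keys are illustrative assumptions;
# only the "@image1"/"@video1" tag style comes from the guide above.

def build_reference_prompt(instruction: str, refs: dict[str, str]) -> dict:
    """Pair a prompt that uses @ tags with the reference assets it names."""
    # Catch the common mistake of uploading an asset the prompt never uses:
    # every key in `refs` should appear as an @tag in the instruction text.
    unused = [tag for tag in refs if f"@{tag}" not in instruction]
    if unused:
        raise ValueError(f"reference assets never mentioned in prompt: {unused}")
    return {"prompt": instruction, "references": refs}

# Splitting roles across references, as suggested in the tips:
# one asset supplies color, another supplies movement.
request = build_reference_prompt(
    "Use @image1 only for the color palette; "
    "match @video1's motion rhythm and camera moves.",
    {"image1": "palette_frame.png", "video1": "reference_clip.mp4"},
)
print(request["prompt"])
```

The validation step is the useful part of the sketch: keeping tags and uploaded assets in sync avoids generations that silently ignore a reference.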
Use reference images or videos to replicate creative styles, ad templates, and visual effects in Seedance 2.0 — no jargon needed. Includes workflow examples and a 10M-participant case study.
Related guides
Guide
Seedance 2.0 Tutorial — How to Use Text-to-Video & Image-to-Video (Step by Step)
Step-by-step Seedance 2.0 tutorial for beginners: text-to-video, image-to-video, prompt structure, settings, and your first generation on Dreamina. Updated April 2026.
Guide
Seedance 2.0 Omni-Reference & Multimodal Input — Images, Video & Audio References Explained
Seedance 2.0 Omni-Reference multimodal input: up to 9 images, 3 videos, 3 audio + text. @ tag system for referencing assets. Native audio-video joint generation.
Guide
Seedance 2.0 Use Cases — Real Examples for Ads, Film, Education & More
Seedance 2.0 use cases: e-commerce ads, TVC, product demos, film previz, MV, education, real estate, and short narrative. Based on official blog and third-party case studies.
Guide
Seedance 2.0 Shot Design Workflow — Cinema-Grade Video Prompts
Master the 5-step shot design workflow for Seedance 2.0: from requirement analysis through visual diagnosis, six-element assembly, validation, to professional delivery. Includes 28+ director presets, three-layer lighting, and multi-segment storyboarding.
Guide
AI Product Video Workflows with Seedance-Class Models — Ecommerce Angles (2026)
Maps ecommerce prompts (product hero shots, packaging reveals, studio rotation) to multimodal reference habits and QC — editorial best practices only.
Guide
Short-Form Social Video with Seedance-Style Models — Reels, Shorts, TikTok-Class Pacing (2026)
Vertical aspect ratios, hook-first prompting, and audio loudness considerations for algorithmic feeds — third-party workflow notes.
Related capabilities

Character & Style Consistency
Consistent characters and visual style across shots.
Same character across shots; keep outfit and expression consistent.

Precise Camera & Motion Replication
Replicate complex camera moves and character actions.
Replicate camera path and motion rhythm from reference video; dolly, track, orbit.

Story & Plot Completion
AI-driven creativity and narrative completion.
Extend from opening shot and description; generate follow-up shots to complete the story.