Dual character fight with orbital camera (multi-video)
Precise Camera & Motion Replication
Reference @video1 character actions, @video2 orbital camera movement, generate fight between character1 and character2, fight in starry night, white dust rises, spectacular combat, tense atmosphere.
Use reference videos to reproduce camera language, action rhythm, and motion patterns with close alignment to the source.

How it works: upload a reference video and tag it as @video1 in your prompt. The model analyzes the camera trajectory, including speed changes, focal-length shifts, and framing transitions, then generates new footage that follows the same path with your chosen subject and scene. You describe the scene; the reference video supplies the camera work.

When to use this:
- Recreating a signature camera move (dolly zoom, Hitchcock zoom, tracking shot) without expensive equipment
- Matching the visual rhythm of an existing ad or film scene
- Producing consistent camera language across a campaign
- Replicating event-coverage-style handheld movement for a more documentary feel

Tips and practical notes: shorter reference clips (3–8 seconds) with a single clear camera move tend to replicate more accurately than longer clips with multiple cut points. If your reference contains both camera movement and subject action, the model prioritizes the camera path, so add explicit subject-action instructions in your text prompt. Combining camera-motion replication with character consistency lets you keep the same actor across scenes while varying the cinematography.
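The @ tag convention described above (and the reference limits of up to 9 images, 3 videos, and 3 audio clips) can be wrapped in a small helper that assembles a tagged prompt string. This is a minimal sketch only: the `build_prompt` function and its parameters are hypothetical, not part of any official Seedance or Dreamina SDK; only the @video1/@image1 tag format and the per-type limits come from the guide text.

```python
def build_prompt(scene: str, videos=(), images=(), audios=()):
    """Assemble a Seedance-style prompt from @ reference tags plus a scene description.

    videos/images/audios are short descriptions of what each reference supplies,
    e.g. videos=("character actions", "orbital camera movement").
    Hypothetical helper; the tag format and limits follow the guide text.
    """
    # Reference limits stated in the guide: up to 9 images, 3 videos, 3 audio clips.
    limits = {"video": 3, "image": 9, "audio": 3}
    refs = {"video": list(videos), "image": list(images), "audio": list(audios)}
    for kind, items in refs.items():
        if len(items) > limits[kind]:
            raise ValueError(f"at most {limits[kind]} {kind} references are supported")

    # Build clauses like "@video1 character actions, @video2 orbital camera movement".
    clauses = []
    for kind, items in refs.items():
        for i, role in enumerate(items, start=1):
            clauses.append(f"@{kind}{i} {role}")
    return ", ".join(clauses + [scene])


# Reproduces the dual-character fight prompt from the example above.
prompt = build_prompt(
    "generate fight between character1 and character2, fight in starry night, "
    "white dust rises, spectacular combat, tense atmosphere",
    videos=("character actions", "orbital camera movement"),
)
print(prompt)
```

Keeping each reference's role as a short clause next to its tag mirrors the example prompt's structure, so the model can associate each @video with its intended contribution (action source vs. camera source).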
Getting the model to copy blocking, camera moves, or complex action from a film used to mean writing long prompt descriptions, or it simply didn't work. Now you just upload a reference video.
Upload a reference video and Seedance 2.0 replicates its camera path: dolly, tracking, orbit, Hitchcock zoom, and more. See how with workflow tips and a marketing film case study.
Related guides
Seedance 2.0 Tutorial — How to Use Text-to-Video & Image-to-Video (Step by Step)
Step-by-step Seedance 2.0 tutorial for beginners: text-to-video, image-to-video, prompt structure, settings, and your first generation on Dreamina. Updated March 2026.
Seedance 2.0 Prompt Writing Tips — How to Write Better Video Prompts
Write better Seedance 2.0 prompts: subject + action + camera + style formulas, @ reference tags, and practical before/after tips for text-to-video and image-to-video workflows.
Seedance 2.0 Omni-Reference & Multimodal Input — Images, Video & Audio References Explained
Seedance 2.0 Omni-Reference multimodal input: up to 9 images, 3 videos, 3 audio + text. @ tag system for referencing assets. Native audio-video joint generation.
Seedance 2.0 Shot Design Workflow — Cinema-Grade Video Prompts
Master the 5-step shot design workflow for Seedance 2.0: from requirement analysis through visual diagnosis, six-element assembly, validation, to professional delivery. Includes 28+ director presets, three-layer lighting, and multi-segment storyboarding.
Related capabilities

Character & Style Consistency
Consistent characters and visual style across shots.
Same character across shots; keep outfit and expression consistent.

Creative Template & Effect Replication
Replicate creative templates and complex visual effects.
Replicate cyberpunk, vintage film, or other styles from reference image/video.

Video Editing
Character replacement, trimming, and additions.
Inpainting, character swap, background extend, add/remove elements; no reshoot.