Image & video reference
Reference @image1, @image2, @video1 for emotion and expression performance.
Better emotional performance and expression.
AI characters often look flat or fake; the emotion doesn't land. Seedance 2.0 improves micro-expressions and body language so characters can show joy, sadness, surprise, or anger more naturally: eyes, mouth, and gesture all read better. If you want AI characters with real "acting," 2.0 gets you there.
These pages are written as third-party reference summaries rather than official product documentation.
Capability descriptions summarize public Seedance 2.0 launch materials, public project pages, and other publicly accessible explanatory write-ups.
This site does not represent Seedance, official product support, or any authorized partnership unless a page explicitly states otherwise with a documented basis.
Platform access, supported features, pricing, UI, and availability can change. Use official or primary sources for current information.

Emotional performance is improved for more expressive and nuanced character delivery.

How it works: the model generates characters whose facial micro-expressions (eye movement, mouth shape, brow position) and body language (gesture, posture, movement intensity) respond to the emotional context described in your prompt. Instead of flat, neutral faces, characters can express joy, sadness, surprise, anger, or subtler states like hesitation or curiosity.

When to use this: virtual-human and VTuber content where emotional connection drives audience engagement; brand storytelling where character emotion carries the narrative; educational content where an instructor character needs to appear warm and approachable; social-media shorts where expressive characters boost watch time and shares.

Tips and practical notes: be specific about the emotion and its intensity in your prompt; "gentle smile with a slight head tilt" produces more nuanced results than just "happy." For dialogue scenes, pair emotion expression with native audio to get synchronized vocal tone and facial expression. The model handles transitions between emotions within a single shot (e.g., "starts surprised then breaks into laughter"), which is useful for short narrative content. Combine with character consistency to ensure the same character maintains its identity while expressing different emotions across shots.
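To make the specificity tip concrete, here is a minimal Python sketch of one way to assemble an emotion-focused prompt from separate parts before pasting it into the generator. The helper and its field names are our own illustration, not part of any Seedance API.

```python
# Illustrative prompt builder for emotion-focused shots.
# Plain string assembly only; this is not a Seedance API.

def build_emotion_prompt(subject, emotion, physical_cues=(), transition=None):
    """Compose a prompt naming the emotion, its physical expression,
    and optionally an in-shot emotional transition."""
    parts = [subject, emotion, *physical_cues]
    if transition:
        parts.append(transition)
    return ", ".join(parts)

# Vague: leaves the expression details to the model.
print(build_emotion_prompt("a young instructor at a whiteboard", "happy"))

# Specific: names intensity and body language, per the tip above.
print(build_emotion_prompt(
    "a young instructor at a whiteboard",
    "gentle smile",
    physical_cues=["slight head tilt", "relaxed shoulders", "warm eye contact"],
    transition="starts hesitant, then warms into an easy laugh",
))
```

The point is only that emotion, intensity, and physical cues are written as explicit, separable phrases rather than a single adjective.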
Traditional virtual avatars have rigid expressions and a monotonous emotional range, making it difficult to build an emotional connection with viewers; interaction and retention rates stay low.
Used emotion-expression controls so AI virtual humans could show richer expressions and body language during live content.
Public case recaps cite livestream interaction rising 180%, average viewing time tripling, follower count exceeding 1 million, and endorsement revenue growing 5x.
Reading note: Micro-expression and body-language controls made the performances easier to read.
Illustrative cases on this site are compiled from public campaign recaps and secondary reporting available at the time of writing.
Metrics reflect the reported campaign period and should not be treated as current performance benchmarks.
Brand names and figures are cited for explanatory use only, not as endorsements, guarantees, or independently audited results.

Expressions, emotions, body language, transformation effects.
Range hood ad, @image1 as first frame, woman cooks elegantly, no smoke. Pan right to @image2 man sweating, red-faced, heavy smoke. Pan left to @image1 range hood on counter, reference @image3, hood sucking smoke.
@image1 as first frame, camera rotates and pushes in, character looks up suddenly, face reference @image2, roars loudly with comedic energy, expression reference @image3. Then body transforms into bear, reference @image4.
Reference images

Reference image 1: Roar & transform to bear

Reference image 2: Roar & transform to bear

Reference image 3: Roar & transform to bear

Reference image 4: Roar & transform to bear
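The @ tags in the prompts above bind numbered reference assets to roles in the shot. As a rough illustration (our own sketch, with hypothetical file names, not a Seedance upload API), the bear-transformation prompt and its four references could be kept together like this:

```python
# Illustrative sketch: pairing ordered reference assets with a prompt.
# File names are hypothetical; the actual upload/generation step is
# platform-specific and omitted here.

references = {
    "@image1": "first_frame_character.png",  # first frame / base character
    "@image2": "face_reference.png",         # face identity reference
    "@image3": "roar_expression.png",        # comedic roar expression
    "@image4": "bear_target.png",            # transformation target
}

prompt = ("@image1 as first frame, camera rotates and pushes in, "
          "character looks up suddenly, face reference @image2, "
          "roars loudly with comedic energy, expression reference @image3. "
          "Then body transforms into bear, reference @image4.")

# Sanity check: every uploaded reference is actually used in the prompt.
unused = [tag for tag in references if tag not in prompt]
assert not unused, f"prompt never mentions: {unused}"
```

Keeping the tag-to-asset mapping explicit makes it easy to verify that each reference image plays the role the prompt assigns it.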
Seedance 2.0 improves micro-expressions and body language so characters can show joy, sadness, surprise, or anger more naturally. Eyes, mouth, and gesture all read better for more believable 'acting.'
Yes. It can be useful for virtual idols, VTubers, and other AI characters that need richer expressions and body language in live or recorded content.
Yes. The model handles emotional transitions within a single generation — for example, 'starts surprised then breaks into laughter.' This is useful for short narrative content where emotional shifts drive the story.
More specific is better. 'Gentle smile with a slight head tilt' produces more nuanced results than just 'happy.' Describe both the emotion and physical expression for the best results.
Related guides
These guides add workflow, prompt, and use-case context around this capability so the page connects into the broader Seedance topic cluster.
Guide
Write better Seedance 2.0 prompts: subject + action + camera + style formulas, @ reference tags, and practical before/after tips for text-to-video and image-to-video workflows.
Open guide
Guide
Seedance 2.0 use cases: e-commerce ads, TVC, product demos, film previz, MV, education, real estate, and short narrative. Based on official blog and third-party case studies.
Open guide
Guide
Best practices for Seedance 2.0: prompt formulas, reference assets, camera and motion wording, and quality checks. Based on public guides.
Open guide
Guide
Field notes from eight consecutive generations of a Japanese anime school-uniform-to-Greek-goddess transformation: matching prompts to clips, what changes between runs, and what fast re-generation is useful for.
Open guide
Guide
Master the six-dimension prompt formula for stable AI video: subject, action, scene boundaries, camera, lighting, and timing. Includes full examples.
Open guide