Video generation
Available now

Wan 2.7

by Alibaba

Alibaba's flagship Wan video model. Thinking-Mode prompt reasoning, cinematic 1080p output, and reference-driven consistency, now on Morphic.

Text-to-video · Image-to-video · Reference-to-video · Video editing

Key features

What makes Wan 2.7 stand out from other AI models

Technical specifications

Key specs and capabilities at a glance

Resolution: Up to 1080p (480p, 720p, and 1080p tiers)
Architecture: 27B MoE (mixture-of-experts video DiT)
Modes: T2V, I2V, Ref, Edit (four core generation paths)
Strength: Reasoning (Thinking-Mode prompt planning)

Use cases

How creators and businesses use Wan 2.7 on Morphic

Cinematic short-form video

Direct multi-shot scenes with consistent characters, camera language, and lighting for ad creative, brand films, narrative shorts, and stylized social content.

Character-consistent campaigns

Lock a subject from a reference image or clip, then generate multiple variations and shots without losing identity across the campaign.

Product and brand video

Use Wan 2.7's in-frame text and color control for product reveals, title cards, and ad creative that needs typography baked into the shot.

Iterative editing

Generate a base clip, then use instruction-based editing to refine wardrobe, lighting, or background without re-rolling the full generation.

Prompt examples

Open any of these to tweak and generate

Cinematic shot

Slow dolly forward through a fog-bound forest at dawn, golden rays cutting between trees, a lone figure ahead, cinematic, 1080p

Reference-led

Generate a video of this character walking through a rain-soaked Tokyo alley, neon reflections, mid shot, cinematic

Product reveal

Studio-lit rotating reveal of a sleek minimalist watch on a marble slab, brand title appearing in clean typography, 5 seconds


FAQs

What is Wan 2.7?
Wan 2.7 is Alibaba's flagship Tongyi Wanxiang video model. It introduces Thinking Mode, a prompt-reasoning pass that runs before generation, on top of a 27B-parameter mixture-of-experts architecture, with cinematic 1080p output.
What is Thinking Mode?
Thinking Mode is a planning step before generation. Wan 2.7 reasons through prompt elements, composition, and motion before producing frames, which reduces artifacts and keeps complex briefs coherent.
How is Wan 2.7 different from Wan 2.6?
Wan 2.7 adds Thinking Mode, scales up the architecture, improves reference-driven consistency, and renders in-frame text and color more reliably. Use 2.6 for fast generations and 2.7 when the prompt is complex or character consistency matters.
Does Wan 2.7 support reference-driven generation?
Yes. Pass in image or video references to lock subject identity across shots, which is useful for multi-shot campaigns and serialized content where the same character needs to appear consistently.
How do I use Wan 2.7 on Morphic?
Open Copilot, describe the shot or attach a reference, and pick Wan 2.7. For complex multi-element scenes, brand work with in-frame text, or character-consistent sequences, Wan 2.7 is the right pick over faster models that do less planning.
How does Wan 2.7 compare with Veo 3.1, Seedance 2.0, and Kling 3.0?
Wan 2.7 leads on prompt reasoning and in-frame text/color control. Veo 3.1 leads on photorealism and 4K. Seedance 2.0 leads on native audio-visual generation and human motion. Kling 3.0 leads on multi-shot director-mode control. Pick by job.

Try Wan 2.7 on Morphic

Sign up for Morphic to start creating with Wan 2.7. No downloads, no setup: just describe what you want and generate.