Video generation
Now live

Wan 2.7

By Alibaba

Alibaba's flagship Wan video model. Thinking-Mode prompt reasoning, cinematic 1080p, and reference-driven consistency, on Morphic.

Text-to-video · Image-to-video · Reference-to-video · Video editing

Core features

What sets Wan 2.7 apart from other AI models

Technical specifications

Key specs and capabilities at a glance

Up to 1080p

Resolution

480p, 720p, 1080p tiers

27B MoE

Architecture

Mixture-of-experts video DiT

T2V, I2V, Ref, Edit

Modes

Four core generation paths

Reasoning

Strength

Thinking-Mode prompt planning

Use cases

How creators and businesses use Wan 2.7 on Morphic

Cinematic short-form video

Direct multi-shot scenes with consistent characters, camera language, and lighting, for ad creative, brand films, narrative shorts, and stylized social content.

Character-consistent campaigns

Lock a subject from a reference image or clip, then generate multiple variations and shots without losing identity across the campaign.

Product and brand video

Use Wan 2.7's in-frame text and color control for product reveals, title cards, and ad creative that needs typography baked into the shot.

Iterative editing

Generate a base clip, then use instruction-based editing to refine wardrobe, lighting, or background without re-rolling the full generation.

Example prompts

Open any prompt to edit and generate

Cinematic shot

Slow dolly forward through a fog-bound forest at dawn, golden rays cutting between trees, a lone figure ahead, cinematic, 1080p

Reference-led

Generate a video of this character walking through a rain-soaked Tokyo alley, neon reflections, mid shot, cinematic

Product reveal

Studio-lit rotating reveal of a sleek minimalist watch on a marble slab, brand title appearing in clean typography, 5 seconds


FAQs

What is Wan 2.7?
Wan 2.7 is Alibaba's flagship Tongyi Wanxiang video model. It introduces Thinking Mode (prompt reasoning before generation) on top of a 27B-parameter mixture-of-experts architecture, with cinematic 1080p output.
What is Thinking Mode?
Thinking Mode is a planning step before generation. Wan 2.7 reasons through prompt elements, composition, and motion before producing frames, which reduces artifacts and keeps complex briefs coherent.
How is Wan 2.7 different from Wan 2.6?
2.7 adds Thinking Mode, scales the architecture, improves reference-driven consistency, and renders in-frame text and color more reliably. Use 2.6 for fast generations; use 2.7 when the prompt is complex or character consistency matters.
Does Wan 2.7 support reference-driven generation?
Yes. Pass in image or video references to lock subject identity across shots, useful for multi-shot campaigns and serialized content where the same character needs to appear consistently.
How do I use Wan 2.7 on Morphic?
Open Copilot, describe the shot or attach a reference, and pick Wan 2.7. For complex multi-element scenes, brand work with text in-frame, or character-consistent sequences, Wan 2.7 is the right pick over faster but less considered models.
How does Wan 2.7 compare with Veo 3.1, Seedance 2.0, and Kling 3.0?
Wan 2.7 leads on prompt reasoning and in-frame text/color control. Veo 3.1 leads on photorealism and 4K. Seedance 2.0 leads on native audio-visual generation and human motion. Kling 3.0 leads on multi-shot director-mode control. Pick by job.

Try Wan 2.7 on Morphic

Sign up for Morphic and start creating with Wan 2.7. No downloads, no setup: just describe your idea and generate.