Video generation
Available

Wan 2.7

By Alibaba

Alibaba's flagship Wan video model. Thinking-Mode prompt reasoning, cinematic 1080p, and reference-driven consistency, on Morphic.

Text-to-video · Image-to-video · Reference-to-video · Video editing

Key features

What sets Wan 2.7 apart from other AI models

Technical specifications

Key specs and features at a glance

Resolution: Up to 1080p (480p, 720p, 1080p tiers)

Architecture: 27B MoE (mixture-of-experts video DiT)

Modes: T2V, I2V, Ref, Edit (four core generation paths)

Strength: Reasoning (Thinking-Mode prompt planning)

Use cases

How creators and brands use Wan 2.7 on Morphic

Cinematic short-form video

Direct multi-shot scenes with consistent characters, camera language, and lighting for ad creative, brand films, narrative shorts, and stylized social content.

Character-consistent campaigns

Lock a subject from a reference image or clip, then generate multiple variations and shots without losing identity across the campaign.

Product and brand video

Use Wan 2.7's in-frame text and color control for product reveals, title cards, and ad creative that needs typography baked into the shot.

Iterative editing

Generate a base clip, then use instruction-based editing to refine wardrobe, lighting, or background without re-rolling the full generation.

Example prompts

Open a prompt, edit it, and generate

Cinematic shot

Slow dolly forward through a fog-bound forest at dawn, golden rays cutting between trees, a lone figure ahead, cinematic, 1080p

Reference-led

Generate a video of this character walking through a rain-soaked Tokyo alley, neon reflections, mid shot, cinematic

Product reveal

Studio-lit rotating reveal of a sleek minimalist watch on a marble slab, brand title appearing in clean typography, 5 seconds


FAQs

What is Wan 2.7?
Wan 2.7 is Alibaba's flagship Tongyi Wanxiang video model. It introduces Thinking Mode, a prompt-reasoning step that runs before generation, on top of a 27B-parameter mixture-of-experts architecture with cinematic 1080p output.
What is Thinking Mode?
Thinking Mode is a planning step before generation. Wan 2.7 reasons through prompt elements, composition, and motion before producing frames, which reduces artifacts and keeps complex briefs coherent.
How is Wan 2.7 different from Wan 2.6?
2.7 adds Thinking Mode, scales up the architecture, improves reference-driven consistency, and renders in-frame text and color more reliably. Use 2.6 for fast generations, and 2.7 when the prompt is complex or character consistency matters.
Does Wan 2.7 support reference-driven generation?
Yes. Pass in image or video references to lock subject identity across shots, useful for multi-shot campaigns and serialized content where the same character needs to appear consistently.
How do I use Wan 2.7 on Morphic?
Open Copilot, describe the shot or attach a reference, and pick Wan 2.7. For complex multi-element scenes, brand work with text in-frame, or character-consistent sequences, Wan 2.7 is the right pick over faster but less considered models.
How does Wan 2.7 compare with Veo 3.1, Seedance 2.0, and Kling 3.0?
Wan 2.7 leads on prompt reasoning and in-frame text/color control. Veo 3.1 leads on photorealism and 4K. Seedance 2.0 leads on native audio-visual generation and human motion. Kling 3.0 leads on multi-shot director-mode control. Pick by job.

Try Wan 2.7 on Morphic

Sign up for Morphic and start creating with Wan 2.7. No downloads, no setup. Just describe what you want to make and generate.