Video generation
Available

Wan 2.7

By Alibaba

Alibaba's flagship Wan video model. Thinking-Mode prompt reasoning, cinematic 1080p, and reference-driven consistency, on Morphic.

Text-to-video · Image-to-video · Reference-to-video · Video editing

Key features

What sets Wan 2.7 apart from other AI models

Technical specifications

Key specs and capabilities at a glance

Up to 1080p

Resolution

480p, 720p, 1080p tiers

27B MoE

Architecture

Mixture-of-experts video DiT

T2V, I2V, Ref, Edit

Modes

Four core generation paths

Reasoning

Strength

Thinking-Mode prompt planning

Use cases

How creators and businesses use Wan 2.7 on Morphic

Cinematic short-form video

Direct multi-shot scenes with consistent characters, camera language, and lighting for ad creative, brand films, narrative shorts, and stylized social content.

Character-consistent campaigns

Lock a subject from a reference image or clip, then generate multiple variations and shots without losing identity across the campaign.

Product and brand video

Use Wan 2.7's in-frame text and color control for product reveals, title cards, and ad creative that needs typography baked into the shot.

Iterative editing

Generate a base clip, then use instruction-based editing to refine wardrobe, lighting, or background without re-rolling the full generation.

Prompt examples

Open a prompt to edit and generate

Cinematic shot

Slow dolly forward through a fog-bound forest at dawn, golden rays cutting between trees, a lone figure ahead, cinematic, 1080p

Reference-led

Generate a video of this character walking through a rain-soaked Tokyo alley, neon reflections, mid shot, cinematic

Product reveal

Studio-lit rotating reveal of a sleek minimalist watch on a marble slab, brand title appearing in clean typography, 5 seconds


FAQs

What is Wan 2.7?
Wan 2.7 is Alibaba's flagship Tongyi Wanxiang video model. It introduces Thinking Mode, prompt reasoning before generation, on top of a 27B-parameter mixture-of-experts architecture, with cinematic 1080p output.
What is Thinking Mode?
Thinking Mode is a planning step before generation. Wan 2.7 reasons through prompt elements, composition, and motion before producing frames, which reduces artifacts and keeps complex briefs coherent.
How is Wan 2.7 different from Wan 2.6?
2.7 adds Thinking Mode, scales the architecture, improves reference-driven consistency, and renders in-frame text and color more reliably. Use 2.6 for fast generations; use 2.7 when the prompt is complex or character consistency matters.
Does Wan 2.7 support reference-driven generation?
Yes. Pass in image or video references to lock subject identity across shots, useful for multi-shot campaigns and serialized content where the same character needs to appear consistently.
How do I use Wan 2.7 on Morphic?
Open Copilot, describe the shot or attach a reference, and pick Wan 2.7. For complex multi-element scenes, brand work with text in-frame, or character-consistent sequences, Wan 2.7 is the right pick over faster but less considered models.
How does Wan 2.7 compare with Veo 3.1, Seedance 2.0, and Kling 3.0?
Wan 2.7 leads on prompt reasoning and in-frame text/color control. Veo 3.1 leads on photorealism and 4K. Seedance 2.0 leads on native audio-visual generation and human motion. Kling 3.0 leads on multi-shot director-mode control. Pick by job.

Try Wan 2.7 on Morphic

Sign up for Morphic and start creating with Wan 2.7. No downloads, no setup: describe what you want and generate.