Video generation
Available now

Wan 2.7

by Alibaba

Alibaba's flagship Wan video model: Thinking-Mode prompt reasoning, cinematic 1080p output, and reference-driven consistency, now on Morphic.

Text-to-video · Image-to-video · Reference-to-video · Video editing


Key features

What sets Wan 2.7 apart from other AI models

Technical specifications

Key specs and capabilities at a glance

Up to 1080p

Resolution

480p, 720p, 1080p tiers

27B MoE

Architecture

Mixture-of-experts video DiT

T2V, I2V, Ref, Edit

Modes

Four core generation paths

Reasoning

Strength

Thinking-Mode prompt planning

Use cases

How creators and businesses use Wan 2.7 on Morphic

Cinematic short-form video

Direct multi-shot scenes with consistent characters, camera language, and lighting for ad creative, brand films, narrative shorts, and stylized social content.

Character-consistent campaigns

Lock a subject from a reference image or clip, then generate multiple variations and shots without losing identity across the campaign.

Product and brand video

Use Wan 2.7's in-frame text and color control for product reveals, title cards, and ad creative that needs typography baked into the shot.

Iterative editing

Generate a base clip, then use instruction-based editing to refine wardrobe, lighting, or background without re-rolling the full generation.

Example prompts

Open any of these prompts to edit and generate

Cinematic shot

Slow dolly forward through a fog-bound forest at dawn, golden rays cutting between trees, a lone figure ahead, cinematic, 1080p

Reference-led

Generate a video of this character walking through a rain-soaked Tokyo alley, neon reflections, mid shot, cinematic

Product reveal

Studio-lit rotating reveal of a sleek minimalist watch on a marble slab, brand title appearing in clean typography, 5 seconds


FAQs

What is Wan 2.7?
Wan 2.7 is Alibaba's flagship Tongyi Wanxiang video model. It introduces Thinking Mode (prompt reasoning before generation) on top of a 27B-parameter mixture-of-experts architecture, with cinematic 1080p output.
What is Thinking Mode?
Thinking Mode is a planning step before generation. Wan 2.7 reasons through prompt elements, composition, and motion before producing frames, which reduces artifacts and keeps complex briefs coherent.
How is Wan 2.7 different from Wan 2.6?
Wan 2.7 adds Thinking Mode, scales the architecture, improves reference-driven consistency, and renders in-frame text and color more reliably. Use 2.6 for fast generations, and 2.7 when the prompt is complex or character consistency matters.
Does Wan 2.7 support reference-driven generation?
Yes. Pass in image or video references to lock subject identity across shots, useful for multi-shot campaigns and serialized content where the same character needs to appear consistently.
How do I use Wan 2.7 on Morphic?
Open Copilot, describe the shot or attach a reference, and pick Wan 2.7. For complex multi-element scenes, brand work with text in-frame, or character-consistent sequences, Wan 2.7 is the right pick over faster but less considered models.
How does Wan 2.7 compare with Veo 3.1, Seedance 2.0, and Kling 3.0?
Wan 2.7 leads on prompt reasoning and in-frame text/color control. Veo 3.1 leads on photorealism and 4K. Seedance 2.0 leads on native audio-visual generation and human motion. Kling 3.0 leads on multi-shot director-mode control. Pick by job.

Try Wan 2.7 on Morphic

Sign up for Morphic to start creating with Wan 2.7. No downloads, no setup: just describe what you want and generate.