Video generation
Now live

Veed Fabric 1.0

By Veed

One still, one audio file, one talking video. Veed Fabric animates any character to a voice track, on Morphic.

Image-to-talking-video · Audio-driven animation · Lip-sync


Core capabilities

How Veed Fabric 1.0 differs from other AI models

Technical specs

Key specs and capabilities at a glance

Inputs: 1 image + 1 audio (JPG, PNG, WebP, GIF + MP3, WAV, M4A)

Resolution: 480p or 720p (two output tiers)

Aspect ratios: 5 ratios (16:9, 4:3, 1:1, 3:4, 9:16)

Max length: up to 5 min per generation
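The spec limits above can be checked before submitting a job. This is a minimal pre-flight sketch based only on the published specs; the function and constant names are illustrative, not part of any Morphic or Veed API.

```python
# Hypothetical pre-flight check for Veed Fabric inputs, derived from the
# listed specs: supported formats, two resolutions, five aspect ratios,
# and the 5-minute per-generation cap. Illustrative only.
from pathlib import Path

IMAGE_FORMATS = {".jpg", ".jpeg", ".png", ".webp", ".gif"}
AUDIO_FORMATS = {".mp3", ".wav", ".m4a"}
RESOLUTIONS = {"480p", "720p"}
ASPECT_RATIOS = {"16:9", "4:3", "1:1", "3:4", "9:16"}
MAX_SECONDS = 5 * 60  # up to 5 minutes per generation

def validate_inputs(image_path: str, audio_path: str,
                    resolution: str = "720p",
                    aspect_ratio: str = "16:9",
                    duration_s: float = 30.0) -> list[str]:
    """Return a list of spec violations; an empty list means the job is valid."""
    errors = []
    if Path(image_path).suffix.lower() not in IMAGE_FORMATS:
        errors.append(f"unsupported image format: {image_path}")
    if Path(audio_path).suffix.lower() not in AUDIO_FORMATS:
        errors.append(f"unsupported audio format: {audio_path}")
    if resolution not in RESOLUTIONS:
        errors.append(f"resolution must be one of {sorted(RESOLUTIONS)}")
    if aspect_ratio not in ASPECT_RATIOS:
        errors.append(f"aspect ratio must be one of {sorted(ASPECT_RATIOS)}")
    if duration_s > MAX_SECONDS:
        errors.append("audio exceeds the 5-minute per-generation cap")
    return errors
```

For example, `validate_inputs("mascot.png", "voiceover.mp3")` returns an empty list, while a 6-minute track or a `.bmp` still would each produce one violation.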

Use cases

How creators and businesses use Veed Fabric 1.0 on Morphic

Talking-avatar social content

Generate talking-head clips for Instagram, TikTok, LinkedIn, and Shorts from a single character image and a voice track; no studio shoot required.

Ads and explainers

Use a brand character or mascot still as the speaker for ad creative, product explainers, and tutorial content with consistent visual identity across runs.

Multilingual content localization

Generate the same talking video in multiple languages by swapping the audio track; lip-sync follows the new language automatically.

Stylized character speech

Animate speaking illustrations, anime characters, clay-style mascots, or 3D renders, not just realistic portraits; useful for branded and entertainment content.

Example prompts

Open any prompt to edit and generate

Talking-head ad

Animate this character portrait speaking the attached audio track, neutral background, mid shot, 720p, 30 seconds

Stylized mascot

Make this illustrated mascot speak the attached voiceover, expressive lip and head motion, full body in frame, 60 seconds

Localized explainer

Generate a talking video of this character delivering the attached Spanish voiceover, accurate lip-sync, soft natural body motion, 90 seconds


FAQs

What is Veed Fabric?
Veed Fabric 1.0 is Veed's talking-video model. It takes a single still image and an audio file, and generates a video of the subject speaking the audio with synced lip, head, body, and hand motion.
How is Veed Fabric different from a normal lip-sync model?
Most lip-sync models start from existing video and re-sync the mouth to new audio. Fabric starts from a single still image and generates the entire talking video from scratch, useful when you don't have source footage of the character at all.
What styles does Veed Fabric support?
Realistic photos, illustrations, 3D renders, anime, clay-style mascots, and other stylized art. The same model handles a portrait or a cartoon character.
What languages does Veed Fabric support?
Any language. Fabric is driven by the input audio: swap the voice track and the lip-sync follows the new language, which makes it useful for localizing content from one source image.
How long can a Veed Fabric clip be?
Up to 5 minutes per generation, well beyond the typical 8–20 second ceiling for talking-avatar models. That is long enough to cover full explainers, ads, and short tutorials in one run.
How do I use Veed Fabric on Morphic?
Open Copilot, attach a character image and a voice track, and pick Veed Fabric. For talking-head social content, ads, or multilingual explainers where you only have a still and an audio file, Fabric is the right choice.

Try Veed Fabric 1.0 on Morphic

Sign up for Morphic and start creating with Veed Fabric 1.0. No downloads, no setup: just describe your idea and generate.