Video generation
Available now

Veed Fabric 1.0

by Veed

One still, one audio file, one talking video. Veed Fabric animates any character to a voice track on Morphic.

Image-to-talking-video · Audio-driven animation · Lip-sync



Technical specifications

Key specs and capabilities at a glance

Inputs: 1 image + 1 audio (JPG, PNG, WebP, GIF images; MP3, WAV, M4A audio)

Resolution: 480p or 720p (two output tiers)

Aspect ratios: 16:9, 4:3, 1:1, 3:4, 9:16 (5 ratios)

Max length: up to 5 minutes per generation
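The spec sheet above can be encoded as a pre-flight check before uploading. This is an illustrative sketch only: the function and constant names are hypothetical and not part of any Veed or Morphic API; only the format lists, resolutions, ratios, and 5-minute cap come from the specs above.

```python
# Hypothetical pre-flight validator mirroring the Fabric 1.0 spec sheet.
# None of these names come from Veed or Morphic; the constraints do.

IMAGE_FORMATS = {"jpg", "png", "webp", "gif"}
AUDIO_FORMATS = {"mp3", "wav", "m4a"}
RESOLUTIONS = {"480p", "720p"}
ASPECT_RATIOS = {"16:9", "4:3", "1:1", "3:4", "9:16"}
MAX_LENGTH_SECONDS = 5 * 60  # up to 5 minutes per generation


def validate_inputs(image_name: str, audio_name: str,
                    resolution: str, aspect_ratio: str,
                    audio_seconds: float) -> list[str]:
    """Return a list of problems; an empty list means the pair passes."""
    problems = []
    if image_name.rsplit(".", 1)[-1].lower() not in IMAGE_FORMATS:
        problems.append(f"unsupported image format: {image_name}")
    if audio_name.rsplit(".", 1)[-1].lower() not in AUDIO_FORMATS:
        problems.append(f"unsupported audio format: {audio_name}")
    if resolution not in RESOLUTIONS:
        problems.append(f"resolution must be one of {sorted(RESOLUTIONS)}")
    if aspect_ratio not in ASPECT_RATIOS:
        problems.append(f"aspect ratio must be one of {sorted(ASPECT_RATIOS)}")
    if audio_seconds > MAX_LENGTH_SECONDS:
        problems.append("audio exceeds the 5-minute per-generation cap")
    return problems
```

For example, a 30-second PNG + MP3 pair at 720p and 9:16 passes all checks, while a TIFF image or a 6-minute voice track is flagged.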

Use cases

How creators and businesses use Veed Fabric 1.0 on Morphic

Talking-avatar social content

Generate talking-head clips for Instagram, TikTok, LinkedIn, and Shorts from a single character image and a voice track, with no studio shoot required.

Ads and explainers

Use a brand character or mascot still as the speaker for ad creative, product explainers, and tutorial content with consistent visual identity across runs.

Multilingual content localization

Generate the same talking video in multiple languages by swapping the audio track; lip-sync follows the new language automatically.

Stylized character speech

Animate illustrations, anime characters, clay-style mascots, and 3D renders speaking, not just realistic portraits; useful for branded and entertainment content.

Prompt examples

Open any of these to tweak and generate

Talking-head ad

Animate this character portrait speaking the attached audio track, neutral background, mid shot, 720p, 30 seconds

Stylized mascot

Make this illustrated mascot speak the attached voiceover, expressive lip and head motion, full body in frame, 60 seconds

Localized explainer

Generate a talking video of this character delivering the attached Spanish voiceover, accurate lip-sync, soft natural body motion, 90 seconds


FAQs

What is Veed Fabric?
Veed Fabric 1.0 is Veed's talking-video model. It takes a single still image and an audio file, and generates a video of the subject speaking the audio with synced lip, head, body, and hand motion.
How is Veed Fabric different from a normal lip-sync model?
Most lip-sync models start from existing video and re-sync the mouth to new audio. Fabric starts from a single still image and generates the entire talking video from scratch, useful when you don't have source footage of the character at all.
What styles does Veed Fabric support?
Realistic photos, illustrations, 3D renders, anime, clay-style mascots, and other stylized art. The same model handles a portrait or a cartoon character.
What languages does Veed Fabric support?
Any language. Fabric is driven by the input audio: swap the voice track and the lip-sync follows the new language, which makes it useful for localizing content from one source image.
How long can a Veed Fabric clip be?
Up to 5 minutes per generation, well beyond the typical 8–20 second ceiling for talking-avatar models. That is long enough to cover full explainers, ads, and short tutorials in one run.
How do I use Veed Fabric on Morphic?
Open Copilot, attach a character image and a voice track, and pick Veed Fabric. For talking-head social content, ads, or multilingual explainers where you only have a still and an audio file, Fabric is the right pick.

Try Veed Fabric 1.0 on Morphic

Sign up for Morphic to start creating with Veed Fabric 1.0. No downloads, no setup: just describe what you want and generate.