Sampling in AI image and video generation is the algorithmic process by which a trained model produces an output: starting from random noise, the model iteratively refines the result, drawing values from its learned probability distributions at each denoising step. Different sampling methods, often called samplers or schedulers, take different mathematical approaches to these denoising steps, producing different trade-offs between generation speed, output quality, and the character of the results.
Common sampling methods include DDIM (Denoising Diffusion Implicit Models), which makes the denoising process deterministic and so enables faster generation in fewer steps; Euler and Euler Ancestral, which balance speed and quality; the DPM++ variants, popular for their efficiency at low step counts; and DDPM, the original stochastic approach, which produces high-quality results but requires many steps. The number of sampling steps is a key parameter: more steps generally produce more refined, coherent outputs at the cost of longer generation time, while fewer steps are faster but can produce rougher results. The relationship between sampler choice and step count is model-specific, and different combinations produce noticeably different output characteristics even with identical prompts and seeds.
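The step-count trade-off can be illustrated with a minimal sketch of a deterministic Euler-style sampler. This is not any platform's implementation: instead of a trained network, it uses a toy one-dimensional Gaussian data distribution whose score (the direction toward higher data likelihood) is known in closed form, and the function names, linear noise schedule, and parameter values are all illustrative assumptions.

```python
import random

def score(x, sigma, data_mean=0.0, data_std=1.0):
    # Analytic score of the noised marginal N(data_mean, data_std^2 + sigma^2).
    # In a real sampler, a trained denoising network estimates this quantity.
    return (data_mean - x) / (data_std ** 2 + sigma ** 2)

def euler_sample(num_steps, sigma_max=10.0, seed=0):
    """Draw one sample by Euler-integrating the probability-flow ODE
    from high noise (sigma_max) down to zero noise."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, sigma_max)  # start from pure noise
    # Illustrative linear noise schedule from sigma_max down to 0.
    sigmas = [sigma_max * (1 - i / num_steps) for i in range(num_steps + 1)]
    for i in range(num_steps):
        s, s_next = sigmas[i], sigmas[i + 1]
        d = -s * score(x, s)       # ODE derivative dx/dsigma at the current noise level
        x = x + d * (s_next - s)   # one deterministic Euler step toward lower noise
    return x

# With many steps the samples closely match the target N(0, 1); with very few
# steps the coarse Euler updates distort the distribution, the 1-D analogue of
# the rough outputs seen at low step counts in image generation.
samples_hi = [euler_sample(num_steps=200, seed=i) for i in range(1500)]
samples_lo = [euler_sample(num_steps=5, seed=i) for i in range(1500)]
```

Swapping the Euler update for a different integration rule, or adding fresh noise after each step as ancestral samplers do, changes the character of the result in the same way that choosing a different sampler does on a generation platform.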
For most creators using AI generation platforms, sampling parameters are either abstracted away behind quality presets or exposed as advanced settings for those who want fine-grained control. Understanding that the sampler and step count influence output quality and character helps creators troubleshoot inconsistent results and make informed choices when platforms expose these controls.