A groundbreaking 13B-parameter AI model by Lightricks, revolutionizing video creation with unprecedented speed and quality. 30x faster than comparable models, powered by advanced multiscale rendering technology.
LTXV-13B represents a significant evolution from its predecessor, the LTX Video model, with a notable increase in parameters from 2 billion to 13 billion. Released in early May 2025, this model was developed by Lightricks in response to advancements by competitors like OpenAI and Meta.
The model builds upon a DiT-based (Diffusion Transformer) architecture, introducing multiscale rendering and improved motion quality. This evolution enables real-time video generation at high resolutions while maintaining exceptional quality.
LTXV-13B is an advanced AI video generation model developed by Lightricks, featuring 13 billion parameters. It represents a significant upgrade from its predecessor, delivering high-quality video generation at markedly higher speed and efficiency.
Key features include multiscale rendering technology, improved prompt adherence, real-time generation at 1216×704 resolution (30 FPS), and support for various video generation modes including text-to-video and image-to-video transformations.
The model runs efficiently on consumer hardware such as NVIDIA RTX 4090 or 5090 GPUs. The full version requires 8GB+ VRAM, while a quantized version (ltxv-13b-fp8) is available for systems with less VRAM.
LTXV-13B generates videos 30 times faster than comparable models, thanks to its multiscale rendering technology and kernel optimization. It achieves real-time performance while maintaining high quality.
The model supports text-to-video, image-to-video, keyframe-based animation, video extension, and video-to-video transformations. It can also combine these modes for complex video generation tasks.
Yes. LTXV-13B is released under the LTXV Open Weights License, and the model weights and accompanying tools are publicly available, allowing community development and customization.
The ecosystem includes LTX-Video-Trainer for fine-tuning, ComfyUI integration with example workflows, and support for creating custom LoRAs. All tools are available on GitHub.
Multiscale rendering first drafts the video at lower resolution to capture coarse motion and composition, then refines the fine details in a higher-resolution pass. This approach improves both the speed and the quality of the generated videos.
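The coarse-then-refine idea can be sketched in a few lines of NumPy. Everything here is illustrative: `coarse_pass` and `refine_pass` are hypothetical stand-ins for the actual low- and high-resolution diffusion passes, which are not exposed like this in the released code.

```python
import numpy as np

def coarse_pass(frames: int, h: int, w: int, seed: int = 0) -> np.ndarray:
    # Stand-in for the low-resolution pass that establishes global
    # motion and composition cheaply (hypothetical placeholder).
    rng = np.random.default_rng(seed)
    return rng.standard_normal((frames, h, w, 3)).astype(np.float32)

def upsample(video: np.ndarray, scale: int) -> np.ndarray:
    # Nearest-neighbour upsampling of the coarse draft so the
    # refinement pass can work at the target resolution.
    return video.repeat(scale, axis=1).repeat(scale, axis=2)

def refine_pass(video: np.ndarray, strength: float = 0.3, seed: int = 1) -> np.ndarray:
    # Stand-in for the high-resolution pass: start from the upsampled
    # draft and only inject fine detail, rather than generating the
    # full-resolution video from scratch (hypothetical placeholder).
    rng = np.random.default_rng(seed)
    detail = rng.standard_normal(video.shape).astype(np.float32)
    return (1.0 - strength) * video + strength * detail

# Draft at one-quarter resolution, then refine at the full 1216x704 target.
draft = coarse_pass(frames=8, h=176, w=304)
final = refine_pass(upsample(draft, scale=4))
print(final.shape)  # (8, 704, 1216, 3)
```

The speedup comes from doing most of the denoising work on the small draft, where each step is roughly sixteen times cheaper than at full resolution in this sketch.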
Version 0.9.7 includes improved prompt adherence, enhanced motion quality, better fine details, and support for stochastic inference in the distilled model.
The model is available on Hugging Face and GitHub. Comprehensive documentation, example workflows, and community resources are available through these platforms.
Join the future of video generation with LTXV 13B. Available on Hugging Face and GitHub.