LTX-2
LTX-2 is an open-source artificial intelligence video foundation model released by Lightricks in October 2025. It generates video from user prompts and was preceded by LTX Video, released in 2024 as the company's first text-to-video model.
LTX-2 is part of the LTX family of video generation models, which, alongside LTX Studio, form the core technology of the LTX ecosystem.
History
Origins: LTX Video (2024–2025)
In November 2024 Lightricks publicly released its first text-to-video model, LTX Video, a 2-billion-parameter model available as open source. In May 2025 Lightricks launched LTXV-13b, a 13-billion-parameter version. Two months later, the model surpassed the 60-second barrier for generated video length.
Release of LTX-2 (2025)
In October 2025 Lightricks announced its latest model under the new name LTX-2. The model was described as capable of generating synchronized audio and video at native 4K resolution and up to 50 frames per second from a variety of conditions and prompts, including text-to-video and image-to-video. Google highlighted that LTX-2 was trained on its infrastructure, calling it "the first open source AI video generation model, powered by Google Cloud".
Upon its release, Artificial Analysis ranked it among the top three models for image-to-video generation, behind Kling 3.5 by Kling AI and Veo 3.1 by Google. Its text-to-video mode was ranked seventh.
In addition to its open-source release, Lightricks offers API access to LTX-2, allowing developers to generate videos from text and image prompts through a hosted service without running the model locally.
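For illustration, a request to such a hosted service could be assembled as below. The endpoint URL, field names, and authentication scheme are assumptions for the sketch; Lightricks' actual API may differ.

```python
import json
import urllib.request

# Hypothetical endpoint -- not the real Lightricks API URL.
API_URL = "https://api.example.com/v1/ltx-2/generate"

def build_generation_request(prompt: str, resolution: str = "3840x2160",
                             fps: int = 50, duration_s: int = 10) -> dict:
    """Assemble a JSON payload for a hypothetical hosted LTX-2 endpoint."""
    return {
        "model": "ltx-2",
        "prompt": prompt,
        "resolution": resolution,        # LTX-2 supports native 4K output
        "fps": fps,                      # up to 50 frames per second
        "duration_seconds": duration_s,
        "audio": True,                   # synchronized audio-video generation
    }

payload = build_generation_request("A lighthouse at dawn, waves crashing")

# Submitting the request would require a valid API key (not executed here):
# req = urllib.request.Request(
#     API_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Authorization": "Bearer <API_KEY>",
#              "Content-Type": "application/json"},
# )
# response = urllib.request.urlopen(req)
```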
Technical features
Advancements over LTX Video
LTX-2 builds upon the LTX Video architecture with several major improvements:
- Unified audio-video generation producing synchronized dialogue, ambience, and motion
- Native 4K rendering
- 50-fps output for cinematic motion
- Three operational modes
- More efficient diffusion pipelines enabling high fidelity on consumer GPUs
Core capabilities
- Text-to-video generation
- Image-to-video generation
- Multimodal audiovisual synthesis
- High-resolution spatial and temporal coherence
- Configurable quality/performance settings
- Open-source distribution of weights and datasets
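The configurable quality/performance settings above can be pictured as a small configuration object. The mode names, defaults, and parameter names below are illustrative assumptions, not the model's actual option names.

```python
from dataclasses import dataclass

@dataclass
class GenerationConfig:
    """Illustrative quality/performance trade-off settings (names are hypothetical)."""
    mode: str = "balanced"         # e.g. "fast", "balanced", "high_quality"
    width: int = 3840              # native 4K
    height: int = 2160
    fps: int = 50                  # up to 50 fps output
    num_inference_steps: int = 30  # fewer diffusion steps -> faster, lower fidelity

    def frame_count(self, duration_s: float) -> int:
        """Total frames for a clip of the given length."""
        return int(self.fps * duration_s)

# A lower-cost preset trades resolution and diffusion steps for speed,
# mirroring the model's configurable quality/performance settings.
fast = GenerationConfig(mode="fast", width=1280, height=720,
                        num_inference_steps=12)
print(fast.frame_count(6))  # 300 frames at 50 fps
```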
Reception
IEA Green said that the model “could rewrite the AI filmmaking game,” emphasizing that its 50-fps rendering and unified audio-video generation made it suitable for professional studios and independent creators alike.
AI News characterized LTX-2 as a “major step forward in the democratization of cinematic-quality video generation,” praising its consumer-grade hardware efficiency and multi-tier generation modes, while also noting ongoing challenges in long-form temporal stability.
FinancialContent reported strong interest among creative agencies, attributing the attention to Lightricks’ decision to release model weights and datasets, which reviewers said enabled “a level of transparency not typically seen in commercial AI video models.”
Some early reviewers also pointed out quality limitations. The Ray3 technical review noted occasional inconsistencies in lip-sync and motion tracking during long scenes, though it stated these were “in line with the challenges faced by all current AI video diffusion models” and expected to improve with continued iteration.