Open-source AI video from Lightricks offers 4K, sound, and faster rendering
Lightricks is upping the ante for rapid video creation and iteration with its latest artificial intelligence model. The company claims its newly released LTX-2 foundation model can generate new content faster than playback speed, plus it raises the bar in resolution and quality.
The open-source LTX-2 can generate a stylised, high-definition, six-second video in just five seconds with no compromise in quality, enabling creators to produce professional content far faster than before.
It’s an impressive achievement, but speed isn’t the only thing that sets LTX-2 apart. It combines native audio and video synthesis with open-source transparency, and if users are willing to wait just a few seconds longer, they can enhance their outputs to 4K resolution at up to 48 frames per second, the company says. Even better, creators can run the software on consumer-grade GPUs, dramatically reducing their compute costs.
Diffusion models come of age
LTX-2 is what’s known as a diffusion model. During training, such models learn by having “noise” incrementally added to video data; at generation time the process runs in reverse, starting from pure noise and progressively removing it until the output resembles the video assets the model has been trained on.
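To make the reverse process concrete, here is a deliberately toy sketch of an iterative denoising loop. It is not Lightricks’ implementation: a real diffusion model uses a trained neural network to predict the noise at each step, whereas this stand-in simply nudges a random sample toward a fixed “target” frame so the loop’s shape is visible.

```python
import numpy as np

def generate(target, num_steps=50, seed=0):
    """Toy reverse-diffusion loop: start from pure noise and denoise it
    step by step. A real model would predict each correction with a
    neural network instead of peeking at a known target frame."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)  # step 0: pure Gaussian noise
    for t in range(num_steps):
        # Move a fraction of the way toward the (here, known) clean frame.
        x = x + (target - x) / (num_steps - t)
    return x

clean_frame = np.ones((4, 4))
output = generate(clean_frame)
```

The point of the sketch is the loop structure: many small denoising steps that gradually turn noise into content. Speeding that loop up, as LTX-2 does, is what makes near-instant previews possible.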
With LTX-2, Lightricks has accelerated the diffusion process, so creators can iterate on their ideas by outputting live previews almost instantaneously. The model is also capable of generating accompanying audio at the same time – be it a soundtrack, dialogue or ambient sound effects – dramatically accelerating creative workflows.
That’s a big deal: previously, creators would have had to generate any audio separately from the video, then spend time stitching the two together and ensuring perfect synchronisation. Google’s Veo models have been celebrated for their powerful integration of synced sound generation, so these new capabilities in LTX serve to reinforce the idea that Lightricks’ tech is on par with the bleeding edge.
When it comes to access options, Lightricks still offers creators plenty of flexibility with LTX-2. The company’s flagship LTX Studio platform is aimed at professionals, who, in some cases, are willing to sacrifice some speed to create videos at the highest quality. With the ensuing slightly slower rates of processing, they’ll be able to output videos in native 4K resolution at up to 48fps, creating at the same standard expected from cinematic productions, Lightricks claims.
The platform offers a wide range of creative controls, affecting the model’s customisable parameters. More details on these will be announced soon, but they should include pose and depth controls, video-to-video generation, and rendering alternatives – keep an eye out for a release date later this autumn.
Lightricks co-founder and Chief Executive Zeev Farbman believes that LTX-2’s enhanced capabilities illustrate the extent to which diffusion models are finally coming of age. He described it in a statement as “the most complete and comprehensive creative AI engine we’ve ever built, combining synchronised audio and video, 4K fidelity, flexible workflows, and radical efficiency.”
“This isn’t vaporware or a research demo,” he said. “It’s a real breakthrough in video generation.”
A major milestone
With LTX-2, Lightricks is demonstrating it’s at the cutting edge of AI video generation, with the platform coming on the back of a number of industry firsts in previous LTXV models.
In July, the company’s family of LTXV models, including LTXV-2B and LTXV-13B, became the first to support long-form video generation, following an update that extended output to up to 60 seconds. With this, AI video production became “truly directed”: users could start with an initial prompt and add further prompts in real time as the video was being streamed live.
LTXV-13B already had a reputation for being one of the most powerful video creation models around, even before that one-minute update. Launched in May, it was the first platform in the industry to support multi-scale rendering, which let users progressively enhance their videos by prompting the model to add more colour and detail, step by step, in the same way that professional animators “layer” additional details on top of their work in traditional production processes.
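The coarse-to-fine idea behind multi-scale rendering can be illustrated with a minimal sketch. This is a conceptual toy, not Lightricks’ pipeline: it starts from a low-resolution frame, upsamples it, and “layers” an extra detail pass on top at each scale.

```python
import numpy as np

def upsample_nearest(frame, factor=2):
    """Nearest-neighbour upsampling of a 2D frame."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def multi_scale_render(base, detail_passes):
    """Start from a coarse frame and add detail at each finer scale,
    mimicking how animators layer detail onto a rough composition."""
    frame = base
    for detail in detail_passes:
        frame = upsample_nearest(frame)  # move to the next resolution
        frame = frame + detail           # layer this scale's detail on top
    return frame

coarse = np.zeros((2, 2))
details = [np.ones((4, 4)), np.ones((8, 8))]
final = multi_scale_render(coarse, details)  # an 8x8 refined frame
```

Each pass only has to add detail appropriate to its scale, which is why progressive refinement can be both faster and more controllable than generating everything at full resolution in one shot.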
The 13B model was trained on licensed data from Getty and Shutterstock. The company’s partnerships with these content behemoths are important, not only for the quality of the training data, but also for ethical reasons; models’ outputs are far less problematic in terms of copyright, an issue that plagues many other AI models’ creations.
Lightricks has also released a distilled version of LTXV-13B that simplifies and speeds up the diffusion process, meaning content can be generated in as few as four to eight steps. The distilled version also supports LoRAs, meaning it can be fine-tuned by users to create content that’s more attuned to the aesthetic style of a project.
Innovative billing models
Like those earlier models, LTX-2 will be released under an open-source licence, making it a viable alternative to Alibaba’s Wan2 series of models. Lightricks has stressed that it’s truly open-source, as opposed to just “open access,” which means that its pre-trained weights, datasets, and all tooling will be available on GitHub, alongside the model itself.
LTX-2 is available to users in LTX Studio and through its API as of now, with the open-source version due to be released in November.
For those who prefer to use the paid version via API, Lightricks offers flexible pricing, with costs starting at just $0.04 per second for a version that generates HD videos in just five seconds. The Pro version balances speed with performance, and here, prices start at $0.07 per second. The Ultra version costs $0.12 per second for video generation in 4K resolution at 48 fps, plus full-fidelity audio. Prices also vary according to resolution, with users able to choose between 720p, 1080p, 2K and 4K.
Lightricks claims that thanks to the efficiency of the model’s processing, its pricing makes LTX-2 up to 50% cheaper than competing models, making extended projects more economically viable, yet with faster iteration and higher quality than previous generations. Alternatively, users will be able to use the model by downloading the open-source version and running it on consumer-grade GPUs after it lands on GitHub next month.
Image source: Unsplash
The post Open-source AI video from Lightricks offers 4K, sound, and faster rendering appeared first on AI News.
Original Creator: TechForge
Original Link: https://www.artificialintelligence-news.com/news/open-source-ai-video-from-lightricks-offers-4k-sound-and-faster-rendering/
Originally Posted: Fri, 24 Oct 2025 07:00:00 +0000