Instructions to use Lightricks/LTX-2.3 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Lightricks/LTX-2.3 with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2.3", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Notebooks
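The comment in the snippet above suggests switching to `"mps"` on Apple devices. A minimal device-selection helper is sketched below; it is hypothetical and written in plain Python so it can be read without installing torch, but the two boolean arguments are meant to mirror `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the device string to use for the pipeline.

    Prefers CUDA, falls back to Apple's Metal backend ("mps"),
    and finally to CPU if no accelerator is present.
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Apple Silicon GPUs
    return "cpu"


# Example: on a machine with no CUDA but with Apple Silicon
print(pick_device(False, True))  # -> mps
```

With torch installed, the result could be passed as the `device_map` argument (or to `pipe.to(...)`) instead of hard-coding `"cuda"`.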
- Google Colab
- Kaggle
LTX-2.3 temporal upscaler seems broken
Compared to the temporal upscaler in LTX-2.0, the LTX-2.3 temporal upscaler seems broken. The 2.0 upscaler works flawlessly, but the 2.3 temporal upscaler softens, desaturates, and distorts detail when used with i2v. Fidelity to the source image is essentially lost.
Lightricks, is there any way to fix the temporal upscaler for 2.3?
Are you using the 1.1 version?
> Are you using the 1.1 version?
I am not aware of a 1.1 version of the LTX-2.3 temporal upscaler, only a 1.1 version of the spatial upscaler. Yes, I am using v1.1 of the spatial upscaler.
Oh right, the temporal one. They fixed something related to the spatial upscaler, so maybe they will do the same for the temporal one. I have always used the spatial upscaler; are there any differences between the two?