How to use multimodalart/tarsila-captioned with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("multimodalart/tarsila-captioned")

prompt = "A person in a bustling cafe in the style of tarsila do amaral"
image = pipe(prompt).images[0]
```
tarsila-captioned
Model trained with AI Toolkit by Ostris

Example prompts:
- A person in a bustling cafe in the style of tarsila do amaral
- A mecha robot in a favela in the style of tarsila do amaral
Trigger words
You should use `in the style of tarsila do amaral` to trigger the image generation.
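As a minimal illustration of how the trigger phrase is used (the `build_prompt` helper below is hypothetical, not part of this repository), append it to whatever subject you want to render:

```python
# Hypothetical helper: append the trigger phrase to a subject prompt.
TRIGGER = "in the style of tarsila do amaral"

def build_prompt(subject: str) -> str:
    return f"{subject} {TRIGGER}"

print(build_prompt("A mecha robot in a favela"))
# -> A mecha robot in a favela in the style of tarsila do amaral
```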
Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
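If you prefer to fetch the weights programmatically, here is a minimal sketch using `huggingface_hub`; the exact `.safetensors` filename is an assumption, so use the name shown in the Files & versions tab:

```python
# Sketch: download the LoRA weights with huggingface_hub.
# The filename is assumed; check the Files & versions tab for the real one.
from huggingface_hub import hf_hub_download

lora_path = hf_hub_download(
    repo_id="multimodalart/tarsila-captioned",
    filename="tarsila-captioned.safetensors",  # assumed filename
)
print(lora_path)  # local path to the downloaded weights
```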
Use it with the 🧨 diffusers library
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')
# weight_name must match the .safetensors file listed in the Files & versions tab
pipeline.load_lora_weights('multimodalart/tarsila-captioned', weight_name='tarsila-captioned.safetensors')
image = pipeline('A person in a bustling cafe in the style of tarsila do amaral').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, check the documentation on loading LoRAs in Diffusers.
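As a rough sketch of what weighting and fusing can look like, assuming the current Diffusers `set_adapters` and `fuse_lora` APIs (the adapter name `'tarsila'` below is an arbitrary label, not defined by this repository):

```python
# Sketch: run this LoRA at reduced strength, or fuse it into the base weights.
# Assumes diffusers' set_adapters / fuse_lora APIs; 'tarsila' is an arbitrary label.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')
pipeline.load_lora_weights('multimodalart/tarsila-captioned', adapter_name='tarsila')

# Apply the LoRA at 80% strength without modifying the base weights.
pipeline.set_adapters(['tarsila'], adapter_weights=[0.8])
image = pipeline('A person in a bustling cafe in the style of tarsila do amaral').images[0]

# Alternatively, fuse the LoRA into the base weights at a fixed scale
# (slightly faster inference, but it can no longer be toggled per call).
# pipeline.fuse_lora(lora_scale=0.8)
```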
Downloads last month: 12

Model tree for multimodalart/tarsila-captioned
- Base model: black-forest-labs/FLUX.1-dev