AbductiveMLLM: Boosting Visual Abductive Reasoning Within MLLMs
Paper: arXiv:2601.02771
Use the code below to get started with the model.
from transformers import Qwen2VLForConditionalGeneration
from peft import PeftModel

# Load the Qwen2-VL-7B-Instruct base model
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "path/to/your/Qwen2-VL-7B-Instruct",
    torch_dtype="auto",
    attn_implementation="flash_attention_2",
)

# Merge the LoRA adapter from Training Stage-1 into the base weights
lora_model_dir = "qwen2vl_7b_var_lora1"
model = PeftModel.from_pretrained(model, model_id=lora_model_dir)
model = model.merge_and_unload()

# Merge the LoRA adapter from Training Stage-2 on top of the Stage-1 merge
lora2_dir = "qwen2vl_7b_var_select3_lora2"
model = PeftModel.from_pretrained(model, model_id=lora2_dir)
model = model.merge_and_unload()

# The fully merged model is now ready for inference; prepare inputs with
# the matching AutoProcessor before calling model.generate(...)
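After both adapters are merged, inference follows the standard Qwen2-VL chat flow. Below is a minimal sketch, assuming the usual Qwen2-VL chat-message layout and the `AutoProcessor` matching the base checkpoint; `build_messages` and `run_inference` are illustrative helper names, not part of the released code, and the image path and question are placeholders:

```python
def build_messages(image_path, question):
    # Standard Qwen2-VL chat-message layout: a single user turn that
    # mixes one image entry with one text entry.
    return [{
        "role": "user",
        "content": [
            {"type": "image", "image": image_path},
            {"type": "text", "text": question},
        ],
    }]

def run_inference(model, processor, messages, image, max_new_tokens=256):
    # Render the chat template into a prompt string, then batch the
    # prompt and image together into model-ready tensors.
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = processor(text=[text], images=[image], return_tensors="pt")
    inputs = inputs.to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens so only the newly generated answer is decoded
    trimmed = output_ids[:, inputs["input_ids"].shape[1]:]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]
```

For visual abductive reasoning, the question would typically ask for the most plausible explanation of an observed scene, e.g. `build_messages("demo.jpg", "What most likely happened before this scene?")`, with the resulting message list passed to `run_inference` alongside the loaded image.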
@article{chang2026abductivemllm,
  title={AbductiveMLLM: Boosting Visual Abductive Reasoning Within MLLMs},
  author={Chang, Boyu and Wang, Qi and Guo, Xi and Nan, Zhixiong and Yao, Yazhou and Zhou, Tianfei},
  journal={arXiv preprint arXiv:2601.02771},
  year={2026}
}