MLLM-Safety-Study

This is the collection for the CVPR 2025 paper: "Do We Really Need Curated Malicious Data for Safety Alignment in Multi-Modal Large Language Models?"

- palpit/MLLM-Safety-Study (dataset, updated Apr 18)
- palpit/LLaVA-v1.5-7b-2000-llava-med-lora (updated Apr 27)
- palpit/LLaVA-v1.5-13b-2000-llava-med-lora (updated Apr 27)