# Training Qwen2.5-3B-Instruct for Evaluation Agent with CoT Reasoning

This repository contains scripts and configurations for training the Qwen2.5-3B-Instruct model on evaluation-agent data in Chain-of-Thought (CoT) reasoning format.
## Overview
The training pipeline processes evaluation results from:
- VBench: Video quality evaluation results
- T2I-CompBench: Text-to-image composition evaluation results
- Open Domain: Open-ended query evaluation results
All results are in CoT (Chain-of-Thought) reasoning format from proprietary models.
## Dataset Preparation

### 1. Data Cleaning and Conversion

Run the data cleaning script to convert raw evaluation results into LLaMA-Factory format:

```bash
python clean_and_convert_data.py
```
This script:
- Processes JSON files from the `ea-data/agent/` subdirectories
- Converts CoT-style evaluation results into instruction-response pairs
- Outputs to `LLaMA-Factory/data/evaluation_agent_cot_dataset.json`
- Updates `LLaMA-Factory/data/dataset_info.json` with dataset metadata
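For reference, each converted example follows the Alpaca schema noted under Dataset Statistics below. The field contents here are illustrative placeholders, not actual entries from the dataset:

```json
[
  {
    "instruction": "Evaluate the subject consistency of the generated video given the frame-level observations.",
    "input": "",
    "output": "Step 1: Review the per-frame observations... Step 2: Weigh consistency across scene changes... Final assessment: the video shows high subject consistency."
  }
]
```

The entry the script registers in `dataset_info.json` is expected to look roughly like the following (the dataset key is assumed from the file name; check the file after running the script):

```json
{
  "evaluation_agent_cot_dataset": {
    "file_name": "evaluation_agent_cot_dataset.json"
  }
}
```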
### Dataset Statistics
- Total training examples: ~860 (from initial processing)
- Format: Alpaca-style (instruction, input, output)
## Training Configurations

### 1. LoRA Fine-tuning (Recommended)

Configuration: `train_qwen2.5_eval_agent.yaml`
Key parameters:
- Model: Qwen/Qwen2.5-3B-Instruct
- Method: LoRA (rank=16, alpha=32)
- Batch size: 2 per device × 4 gradient accumulation steps (effective batch size of 8 per device)
- Learning rate: 5e-5 with cosine scheduler
- Epochs: 3
- Memory requirement: ~16GB VRAM
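As a reference point, a LLaMA-Factory LoRA SFT config with these parameters would look roughly like the sketch below. The key names follow LLaMA-Factory's YAML schema, but values such as `lora_target` and `cutoff_len` are assumptions here; check them against the actual `train_qwen2.5_eval_agent.yaml`:

```yaml
# Minimal sketch of the LoRA SFT config (not the exact file in this repo)
model_name_or_path: Qwen/Qwen2.5-3B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 16
lora_alpha: 32
lora_target: all            # assumption: apply LoRA to all linear layers

dataset: evaluation_agent_cot_dataset
template: qwen
cutoff_len: 2048            # assumption: size to the longest CoT responses

output_dir: saves/qwen2.5-3b/lora/eval_agent_cot
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 5.0e-5
lr_scheduler_type: cosine
num_train_epochs: 3.0
bf16: true
logging_steps: 10
save_steps: 200
plot_loss: true
```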
### 2. Full Fine-tuning

Configuration: `train_qwen2.5_eval_agent_full.yaml`
Key parameters:
- Model: Qwen/Qwen2.5-3B-Instruct
- Method: Full fine-tuning with DeepSpeed
- Gradient checkpointing enabled
- Memory requirement: ~32GB+ VRAM
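Compared with the LoRA setup, the full fine-tuning config mainly swaps the tuning method and adds a DeepSpeed config; the values below are illustrative, and the DeepSpeed path is a placeholder for whichever ZeRO config the repo actually ships:

```yaml
# Illustrative differences from the LoRA configuration
finetuning_type: full
deepspeed: ds_z3_config.json   # placeholder path; point to the repo's actual DeepSpeed config
learning_rate: 2.0e-5          # lower learning rate for full fine-tuning, per the Tips section
```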
## Training Execution

### Quick Start

```bash
# Make script executable
chmod +x train_qwen2.5_eval_agent.sh

# Run training
./train_qwen2.5_eval_agent.sh
```
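If you need to inspect or adapt the script, its core is expected to be a thin wrapper around the manual command shown in the next subsection; this is a hypothetical sketch, not the script's verbatim contents:

```bash
#!/bin/bash
# Hypothetical sketch of train_qwen2.5_eval_agent.sh
set -e
cd LLaMA-Factory
llamafactory-cli train ../train_qwen2.5_eval_agent.yaml
```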
### Manual Training

```bash
cd LLaMA-Factory
llamafactory-cli train ../train_qwen2.5_eval_agent.yaml
```
### Distributed Training

For multi-GPU training (run from inside `LLaMA-Factory/`):

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 \
torchrun --nproc_per_node 4 \
  --master_port 29500 \
  src/train.py ../train_qwen2.5_eval_agent.yaml
```
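Recent versions of LLaMA-Factory can also spawn `torchrun` themselves via an environment variable; treat this as version-dependent and verify it against your installation before relying on it:

```bash
# Alternative: let llamafactory-cli launch torchrun itself (newer LLaMA-Factory versions)
FORCE_TORCHRUN=1 CUDA_VISIBLE_DEVICES=0,1,2,3 \
llamafactory-cli train ../train_qwen2.5_eval_agent.yaml
```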
## Inference

After training, run inference with:

```bash
llamafactory-cli chat ../inference_qwen2.5_eval_agent.yaml
```

Or use the API:

```bash
llamafactory-cli api ../inference_qwen2.5_eval_agent.yaml
```
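The API command serves an OpenAI-compatible endpoint, typically on port 8000. A request might look like the following; the port, model name, and prompt are placeholders to adapt to your setup:

```bash
# Illustrative request against the locally served OpenAI-compatible API
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen2.5-3b-eval-agent",
        "messages": [
          {"role": "user", "content": "Evaluate the following generation result: ..."}
        ]
      }'
```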
## Model Merging

To merge the LoRA weights into the base model:

```bash
llamafactory-cli export \
  --model_name_or_path Qwen/Qwen2.5-3B-Instruct \
  --adapter_name_or_path saves/qwen2.5-3b/lora/eval_agent_cot \
  --template qwen \
  --finetuning_type lora \
  --export_dir models/qwen2.5-3b-eval-agent-merged \
  --export_size 4 \
  --export_legacy_format false
```
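The merged directory can then be loaded like any standard Hugging Face model. This is a minimal sketch assuming `transformers` (plus `accelerate` for `device_map="auto"`) is installed; the prompt is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "models/qwen2.5-3b-eval-agent-merged"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype="auto", device_map="auto")

# Placeholder evaluation prompt; real prompts should mirror the training instructions
messages = [{"role": "user", "content": "Evaluate the following generation result: ..."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```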
## Monitoring Training

### TensorBoard

```bash
tensorboard --logdir saves/qwen2.5-3b/lora/eval_agent_cot
```
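TensorBoard event files are only written if the training config reports to TensorBoard; if the log directory is empty, check for a setting along these lines in the YAML (an assumption, not necessarily present in the shipped config):

```yaml
report_to: tensorboard
logging_dir: saves/qwen2.5-3b/lora/eval_agent_cot/runs   # hypothetical path
```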
### Loss Plots
Training loss plots are automatically saved to the output directory.
## Evaluation
The model will be evaluated on:
- CoT reasoning quality
- Evaluation accuracy
- Response coherence
- Format consistency
## Directory Structure

```text
evaluation_agent_dev/
├── ea-data/agent/                          # Raw evaluation data
│   ├── vbench_results/
│   ├── t2i_results/
│   └── open_results/
├── LLaMA-Factory/                          # Training framework
│   └── data/
│       ├── evaluation_agent_cot_dataset.json   # Processed dataset
│       └── dataset_info.json
├── clean_and_convert_data.py               # Data processing script
├── train_qwen2.5_eval_agent.yaml           # LoRA training config
├── train_qwen2.5_eval_agent_full.yaml      # Full training config
├── inference_qwen2.5_eval_agent.yaml       # Inference config
└── train_qwen2.5_eval_agent.sh             # Training script
```
## Requirements
- Python 3.9+
- PyTorch 2.0+
- CUDA 11.6+
- LLaMA-Factory (installed)
- 16GB+ VRAM for LoRA, 32GB+ for full fine-tuning
## Tips
- Memory Management: Use gradient checkpointing and DeepSpeed for larger batch sizes
- Learning Rate: Start with 5e-5 for LoRA, 2e-5 for full fine-tuning
- Data Quality: Review generated dataset for quality before training
- Checkpointing: Save checkpoints frequently (every 200 steps)
- Mixed Precision: Use bf16 for faster training and lower memory usage
## Troubleshooting

- OOM Errors: Reduce the batch size or enable gradient checkpointing
- Slow Training: Enable Flash Attention 2 if available (see the config snippet below)
- Poor Results: Increase training epochs or adjust the learning rate
- Data Issues: Check JSON parsing in the data cleaning script
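For the Flash Attention tip, LLaMA-Factory exposes the attention backend as a YAML option; the exact value can vary by version, so verify it against your installation before adding it to the training config:

```yaml
flash_attn: fa2   # requires the flash-attn package; use 'auto' to let LLaMA-Factory choose a backend
```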
## Next Steps
- Expand dataset with more evaluation examples
- Implement custom evaluation metrics
- Fine-tune on specific evaluation dimensions
- Deploy model for production use
## License
Follow the licenses of:
- Qwen2.5 model
- LLaMA-Factory framework
- Original evaluation datasets