See axolotl config
axolotl version: 0.10.0
# ---------------- Core model path ----------------
base_model: /nfs/data/johnsonk/mllm_models/RAFT/Oct2/lr1e-5_epoch4_warm0.03_training_6k_GPU_0_1_2_3_20251001_172635/trained_model
# output_dir: /work/nvme/bckr/wli18/RAFT/exp_1
dataset_prepared_path: last_run_prepared
# ---------------- Datasets -----------------------
datasets:
  # training set (an example record is sketched after the config)
  - path: /home/johnsonk/RAFT/1exp/exp_1.0.2_questionize_high_school_physics_861_img/result/image_captions_2_sft_data_format_shuffled.jsonl
    type: chat_template
    split: train
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
val_set_size: 0.05
# ---------------- Training hyperparameters -------
sequence_len: 2048
micro_batch_size: 1
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 1e-5
optimizer: adamw_torch
lr_scheduler: cosine
warmup_ratio: 0.03
weight_decay: 0.01
# max_steps: 100 # <--- add this line to force a 100-step run
# ---------------- Precision / memory -------------
bf16: auto
tf32: true
gradient_checkpointing: true
# gradient_checkpointing_kwargs:
# use_reentrant: false
# --------------- DeepSpeed --------------------
deepspeed: /home/johnsonk/RAFT/configs/deepspeed_stage3.json
# ----------- Disable all LoRA / quantization -----
# (the config is set up for full-parameter training)
# load_in_4bit: true
# adapter: qlora
# lora_r: 16
# lora_alpha: 32
# lora_target_modules:
# - q_proj
# - k_proj
# - v_proj
# - o_proj
# - down_proj
# - up_proj
# lora_mlp_kernel: true
# lora_qkv_kernel: true
# lora_o_kernel: true
# ---------------- Miscellaneous -------------------
strict: false
chat_template: qwen_25
save_steps: 100000 # step interval between checkpoint saves
# [key] maximum number of recent checkpoints to keep; older checkpoints are deleted automatically
save_total_limit: 0
evals_per_epoch: 1
# saves_per_epoch: 100
logging_steps: 1
flash_attention: true
### WandB ###
report_to: wandb
wandb_project: test_axolotl
wandb_entity: johnson0213-ucla
wandb_name: Qwen2.5-Coder-7B-TikZ-SFT-860-v1-3epoch-grad_acc8-lr1e-5
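For reference, the chat_template loader above reads one JSON object per line, each carrying a messages list whose items have role and content keys (matching field_messages and message_property_mappings). Below is a minimal sketch of producing such a record; only the field names and the file name come from the config, and the question/answer text is invented.

```python
# Hypothetical example record for the chat_template JSONL format used above.
# Field names ("messages", "role", "content") and the file name come from the
# config; the actual text is purely illustrative.
import json

record = {
    "messages": [
        {"role": "user",
         "content": "Describe the forces acting on a block resting on a 30-degree incline."},
        {"role": "assistant",
         "content": "Gravity acts straight down, the normal force acts perpendicular to the incline, and friction (if present) acts along the surface."},
    ]
}

with open("image_captions_2_sft_data_format_shuffled.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```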
nfs/data/johnsonk/mllm_models/RAFT/Nov15sftPhase/Nov15_sftPhase_lr1e-5_ep3_20251115_051615/trained_model
This model was fine-tuned from the base model listed in the config above on the /home/johnsonk/RAFT/1exp/exp_1.0.2_questionize_high_school_physics_861_img/result/image_captions_2_sft_data_format_shuffled.jsonl dataset. It achieves the following results on the evaluation set:
- Loss: 0.4049
Model description
More information needed
Intended uses & limitations
More information needed
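As a rough usage sketch (not part of the original card), the checkpoint can be loaded with the standard transformers API. The local path is the output path listed in this card (leading slash assumed), and the prompt is illustrative.

```python
# Sketch of loading the fine-tuned checkpoint with transformers and the Qwen2.5
# chat template. The model path is the output path from this card (leading slash
# assumed); the prompt is purely illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/nfs/data/johnsonk/mllm_models/RAFT/Nov15sftPhase/Nov15_sftPhase_lr1e-5_ep3_20251115_051615/trained_model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user",
     "content": "Write TikZ code for a free-body diagram of a block on a frictionless incline."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```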
Training and evaluation data
The model was trained on the JSONL dataset listed in the config above, with 5% of the examples held out as a validation set (val_set_size: 0.05).
Training procedure
Training hyperparameters
The following hyperparameters were used during training (the derived totals are unpacked in the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 67
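The derived values above follow from the config; a small sketch of the arithmetic (all numbers are copied from this card, nothing is newly measured):

```python
# How the derived totals above follow from the per-device settings.
micro_batch_size = 1               # train_batch_size per device
gradient_accumulation_steps = 8
num_devices = 4
eval_batch_size = 1                # per device

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = eval_batch_size * num_devices
warmup_steps = round(0.03 * 67)    # warmup_ratio * training_steps

print(total_train_batch_size, total_eval_batch_size, warmup_steps)  # 32 4 2
```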
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| No log | 0 | 0 | 1.0554 |
| 0.4585 | 1.0 | 23 | 0.4498 |
| 0.4266 | 2.0 | 46 | 0.4049 |
Framework versions
- Transformers 4.52.3
- Pytorch 2.5.1
- Datasets 3.6.0
- Tokenizers 0.21.2