I'm running into an issue while fine-tuning Llama 2 on Google Colab with a custom dataset. Training halts at exactly 51,000 examples, even though my dataset contains 61,609. Strangely, when I tested the same code with even larger datasets, it ran without any problems. I followed a YouTube tutorial to fine-tune Llama 2; the tutorial and the original Colab are linked below:
Tutorial link: YouTube Tutorial
Original Colab: Google Colab
Dataset link: My Custom Dataset
Code:
!pip install -q -U trl transformers accelerate git+https://github.com/huggingface/peft.git
!pip install -q datasets bitsandbytes einops wandb
from datasets import load_dataset
from transformers import AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model
from trl import SFTTrainer
# Load dataset
dataset_name = 'harpyerr/merged-pf'
dataset = load_dataset(dataset_name, split="train")
# Define model_name, lora_alpha, lora_dropout, lora_r, and other configurations
model_name = "your_pretrained_model_name" # Replace with the name of your pretrained model
lora_alpha = 16
lora_dropout = 0.1
lora_r = 64
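# NOTE: `model` is passed to SFTTrainer below but is never defined in this snippet.
# The bitsandbytes install above suggests the tutorial loads the base model in 4-bit;
# a sketch of that step (the exact quantization settings here are an assumption):
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    trust_remote_code=True,
)
model.config.use_cache = False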
# Initialize tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
# Define LoraConfig
peft_config = LoraConfig(
    lora_alpha=lora_alpha,
    lora_dropout=lora_dropout,
    r=lora_r,
    bias="none",
    task_type="CAUSAL_LM",
)
# Define training arguments
output_dir = "./results"
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
optim = "paged_adamw_32bit"
save_steps = 100
logging_steps = 10
learning_rate = 2e-4
max_grad_norm = 0.3
max_steps = 100
warmup_ratio = 0.03
lr_scheduler_type = "constant"
training_arguments = TrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=per_device_train_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    optim=optim,
    save_steps=save_steps,
    logging_steps=logging_steps,
    learning_rate=learning_rate,
    fp16=True,
    max_grad_norm=max_grad_norm,
    max_steps=max_steps,
    warmup_ratio=warmup_ratio,
    group_by_length=True,
    lr_scheduler_type=lr_scheduler_type,
)
# Initialize SFTTrainer
max_seq_length = 512
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    args=training_arguments,
)
# Convert all normalization layers to float32
import torch
for name, module in trainer.model.named_modules():
    if "norm" in name:
        module = module.to(torch.float32)
# Start training
trainer.train()
To check whether the issue was specific to my custom dataset, I tried several other, larger datasets, and training completed without any halting. From that I conclude the problem is not the code or the trainer, but something about the characteristics of my custom dataset itself.
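To narrow this down, here is a minimal sketch for scanning the dataset for rows that could break tokenization (missing, empty, or non-string values in the "text" column, which is the dataset_text_field used above); the checks are only a guess at what might be wrong:

from datasets import load_dataset

dataset = load_dataset("harpyerr/merged-pf", split="train")

# Collect indices of rows whose "text" field is missing, empty, or not a string
bad_rows = [
    i for i, example in enumerate(dataset)
    if not isinstance(example.get("text"), str) or not example["text"].strip()
]

print(f"{len(bad_rows)} suspicious rows out of {len(dataset)}")
print(bad_rows[:20])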