# Qwen3-0.6B_essobi-lima-2026-01-28-v1
This model was fine-tuned from `unsloth/Qwen3-0.6B`.
## Training Details
- Base Model: unsloth/Qwen3-0.6B
- Training Method: LoRA (PEFT)
- Epochs: 3
- Learning Rate: 1e-5
- LoRA Rank: 128
- LoRA Alpha: 128
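The hyperparameters above map directly onto a PEFT `LoraConfig`. The sketch below is an assumption-laden reconstruction, not the exact training configuration: the card does not state which modules were adapted, so `target_modules` here is a guess based on the projection layers commonly targeted in Qwen-family models.

```python
from peft import LoraConfig

# Rank and alpha are taken from the card above.
# target_modules is an ASSUMPTION (typical attention/MLP projections
# for Qwen-family models); the card does not specify it.
lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```

With `r == lora_alpha`, the effective LoRA scaling factor (`lora_alpha / r`) is 1.0, a common choice that leaves adapter updates unscaled.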
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("essobi/Qwen3-0.6B_essobi-lima-2026-01-28-v1")
tokenizer = AutoTokenizer.from_pretrained("essobi/Qwen3-0.6B_essobi-lima-2026-01-28-v1")
```