How to use kyujinpy/PlatYi-34B-LoRA with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="kyujinpy/PlatYi-34B-LoRA")
```

```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kyujinpy/PlatYi-34B-LoRA")
model = AutoModelForCausalLM.from_pretrained("kyujinpy/PlatYi-34B-LoRA")
```

How to use kyujinpy/PlatYi-34B-LoRA with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "kyujinpy/PlatYi-34B-LoRA"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "kyujinpy/PlatYi-34B-LoRA",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
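Because the server speaks the OpenAI-compatible completions API, the same call can be made from Python. A minimal sketch: the helper name `build_completion_request` is illustrative, the standard-library `urllib` is used only to avoid extra dependencies, and any HTTP or OpenAI client would work equally well. The actual request is left commented out since it requires a running server.

```python
import json
import urllib.request  # used by the commented-out send below

def build_completion_request(base_url, model, prompt, max_tokens=512, temperature=0.5):
    """Build the URL and JSON payload for an OpenAI-compatible /v1/completions call."""
    url = f"{base_url}/v1/completions"
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return url, payload

url, payload = build_completion_request(
    "http://localhost:8000", "kyujinpy/PlatYi-34B-LoRA", "Once upon a time,"
)

# With the server running, send it like this:
# req = urllib.request.Request(
#     url,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```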
How to use kyujinpy/PlatYi-34B-LoRA with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "kyujinpy/PlatYi-34B-LoRA" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "kyujinpy/PlatYi-34B-LoRA",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Alternatively, run the SGLang server in Docker:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "kyujinpy/PlatYi-34B-LoRA" \
    --host 0.0.0.0 \
    --port 30000
```
How to use kyujinpy/PlatYi-34B-LoRA with Docker Model Runner:
```shell
docker model run hf.co/kyujinpy/PlatYi-34B-LoRA
```
**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**

PlatYi-34B-LoRA is an auto-regressive language model based on the Yi-34B transformer architecture.
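"Auto-regressive" means the model predicts each token from the tokens before it and feeds its own output back in as context. A toy sketch of that decoding loop — the tiny next-token table here is made up purely for illustration; the real model scores the full vocabulary with a 34B-parameter transformer conditioned on the entire preceding context:

```python
# Toy next-token table: maps the previous token to its most likely successor.
# Purely illustrative stand-in for a transformer's next-token distribution.
NEXT_TOKEN = {
    "Once": "upon",
    "upon": "a",
    "a": "time,",
}

def greedy_decode(prompt_tokens, steps):
    """Extend the sequence one token at a time, always taking the top choice."""
    tokens = list(prompt_tokens)
    for _ in range(steps):
        nxt = NEXT_TOKEN.get(tokens[-1])
        if nxt is None:  # no known continuation: stop early
            break
        tokens.append(nxt)  # the new token becomes part of the next step's context
    return tokens

print(greedy_decode(["Once"], 3))  # → ['Once', 'upon', 'a', 'time,']
```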
**Blog Link**

Blog: [Coming soon...]
Github: [Coming soon...]

**Base Model**

01-ai/Yi-34B

**Training Dataset**

garage-bAInd/Open-Platypus
**Notice**

While training, I used LoRA. The lora_r value is 16.
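With lora_r = 16, each adapted weight matrix W of shape (d_out, d_in) stays frozen and only two low-rank factors, B (d_out × r) and A (r × d_in), are trained, so the trainable parameters per matrix drop from d_out·d_in to r·(d_out + d_in). A quick sanity check of that ratio — 7168 is Yi-34B's published hidden size, and treating a single square projection is a simplification for illustration:

```python
def lora_trainable_params(d_out, d_in, r):
    # B: (d_out x r) and A: (r x d_in) are the only matrices updated in training
    return r * (d_out + d_in)

d = 7168  # Yi-34B hidden size (from its published config)
r = 16    # lora_r used for this model

full = d * d                          # params in one full square projection
lora = lora_trainable_params(d, d, r)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.4%}")
```

For a single 7168×7168 projection this is roughly 0.45% of the full matrix's parameters, which is why LoRA fine-tuning of a 34B model is tractable.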
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|---|
| PlatYi-34B-Q | 69.86 | 66.89 | 85.14 | 77.66 | 53.03 | 82.48 | 53.98 |
| PlatYi-34B-LoRA | 68.10 | 67.15 | 85.37 | 78.46 | 53.32 | 83.66 | 40.64 |
| 01-ai/Yi-34B | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 |
### KO-Platypus
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/PlatYi-34B-LoRA"
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 68.10 |
| AI2 Reasoning Challenge (25-Shot) | 67.15 |
| HellaSwag (10-Shot) | 85.37 |
| MMLU (5-Shot) | 78.46 |
| TruthfulQA (0-shot) | 53.32 |
| Winogrande (5-shot) | 83.66 |
| GSM8k (5-shot) | 40.64 |
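The Avg. figure is the unweighted mean of the six benchmark scores above; a few lines reproduce the 68.10 value from the table:

```python
# Per-benchmark scores for PlatYi-34B-LoRA, copied from the table above
scores = {
    "ARC (25-shot)": 67.15,
    "HellaSwag (10-shot)": 85.37,
    "MMLU (5-shot)": 78.46,
    "TruthfulQA (0-shot)": 53.32,
    "Winogrande (5-shot)": 83.66,
    "GSM8K (5-shot)": 40.64,
}

# Unweighted mean, rounded to two decimals as on the leaderboard
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # → 68.1
```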