How to use ashercn97/manatee-7b with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="ashercn97/manatee-7b")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("ashercn97/manatee-7b")
model = AutoModelForCausalLM.from_pretrained("ashercn97/manatee-7b")

How to use ashercn97/manatee-7b with vLLM:
# Install vLLM from pip:
pip install vllm
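Besides the server workflow below, vLLM can also run offline batch inference directly from Python. This is a sketch only: loading a 7B model needs a GPU with enough memory, so the import and model load happen inside a helper function (`run_manatee` is an illustrative name, not part of vLLM):

```python
def run_manatee(prompts, max_tokens=512, temperature=0.5):
    # Imported lazily: requires `pip install vllm` and a suitable GPU.
    from vllm import LLM, SamplingParams

    # Downloads and loads the model weights on first call.
    llm = LLM(model="ashercn97/manatee-7b")
    params = SamplingParams(temperature=temperature, max_tokens=max_tokens)
    # generate() returns one RequestOutput per prompt.
    return [out.outputs[0].text for out in llm.generate(prompts, params)]
```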
# Start the vLLM server:
vllm serve "ashercn97/manatee-7b"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ashercn97/manatee-7b",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
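The same request can be made from Python with only the standard library. The payload and endpoint mirror the curl example above; the parameter values are illustrative, and `build_payload`/`complete` are hypothetical helper names:

```python
import json
import urllib.request

def build_payload(prompt, max_tokens=512, temperature=0.5):
    # Same JSON body as the curl request above.
    return {
        "model": "ashercn97/manatee-7b",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt, url="http://localhost:8000/v1/completions"):
    # Requires the vLLM server from the step above to be running.
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```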
How to use ashercn97/manatee-7b with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "ashercn97/manatee-7b" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ashercn97/manatee-7b",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "ashercn97/manatee-7b" \
--host 0.0.0.0 \
--port 30000
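Since the server speaks the OpenAI-compatible API, it can also be queried from Python with the `openai` client package (`pip install openai`) instead of curl. A sketch, assuming the server above is listening on localhost:30000; `ask_sglang` is an illustrative helper name:

```python
def ask_sglang(prompt, max_tokens=512, temperature=0.5):
    # Imported lazily so the helper is import-safe without the package.
    from openai import OpenAI

    # No real key is needed for a local server; any placeholder works.
    client = OpenAI(base_url="http://localhost:30000/v1", api_key="none")
    resp = client.completions.create(
        model="ashercn97/manatee-7b",
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=temperature,
    )
    return resp.choices[0].text
```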
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ashercn97/manatee-7b",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

How to use ashercn97/manatee-7b with Docker Model Runner:
docker model run hf.co/ashercn97/manatee-7b
This model is based on llama-2-7b and fine-tuned on two Orca datasets; training took around 6 hours on a single L40 GPU.
Manatee is one of my first projects, so I hope you enjoy using it! You can run it through the Transformers library, or, if you have limited memory, use the GPTQ version on my profile.
In the future, I plan to fine-tune higher-parameter models or build an improved version of Manatee-7b.