Instructions for using Motif-Technologies/Motif-2-12.7B-Reasoning with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Motif-Technologies/Motif-2-12.7B-Reasoning with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Motif-Technologies/Motif-2-12.7B-Reasoning", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Motif-Technologies/Motif-2-12.7B-Reasoning", trust_remote_code=True, dtype="auto")
```
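The snippets above load the model but stop short of generating text. A minimal end-to-end sketch, assuming the repository ships a standard chat template (everything here is stock `transformers` API):

```python
# End-to-end generation sketch (assumes the repo provides a chat template).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Motif-Technologies/Motif-2-12.7B-Reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

- Notebooks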
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Motif-Technologies/Motif-2-12.7B-Reasoning with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Motif-Technologies/Motif-2-12.7B-Reasoning"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Motif-Technologies/Motif-2-12.7B-Reasoning",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

A Python client sketch for these OpenAI-compatible endpoints follows the Docker Model Runner section below.
- SGLang
How to use Motif-Technologies/Motif-2-12.7B-Reasoning with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Motif-Technologies/Motif-2-12.7B-Reasoning" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Motif-Technologies/Motif-2-12.7B-Reasoning",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images:
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Motif-Technologies/Motif-2-12.7B-Reasoning" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Motif-Technologies/Motif-2-12.7B-Reasoning",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Docker Model Runner
How to use Motif-Technologies/Motif-2-12.7B-Reasoning with Docker Model Runner:
```shell
docker model run hf.co/Motif-Technologies/Motif-2-12.7B-Reasoning
```
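The vLLM and SGLang servers above expose the same OpenAI-compatible chat-completions endpoint, so a single HTTP client works against either. A minimal sketch using the `requests` package (an assumption; any OpenAI-compatible client also works); point it at whichever server you started (port 8000 for vLLM, port 30000 for SGLang):

```python
# Call a local OpenAI-compatible server (vLLM or SGLang) over plain HTTP.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # use port 30000 for SGLang
    json={
        "model": "Motif-Technologies/Motif-2-12.7B-Reasoning",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```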
Last update: 10 Dec. 2025
Introduction
This is a reasoning-enhanced version of Motif-2-12.7B-Instruct. Detailed information will be released later.
Evaluation
| Benchmark | Evaluation setting | Motif-2-12.7B-Instruct | Motif-2-12.7B-Reasoning |
|---|---|---|---|
| MMLU | 0-shot | 86.11 | 84.07 |
| MMLU-Redux | - | 90.02 | 88.89 |
| BBH | 0-shot | 85.78 | 78.34 |
| GPQA-Diamond | 0-shot, CoT | 63.6 | 70 |
| GSM8K | 0-shot, CoT | 96.13 | 95.53 |
| MATH | 0-shot | 97 | 95.07 |
| MBPP | 3-shot | 91 | 88.9 |
| LiveBench 2024-11-25 | - | 33.8 | 49.9 |
| IFEval | strict prompt | 75.78 | 79.11 |
| IFEval | 0-shot | 76.52 | 81.89 |
| MATH-500 | - | 96.8 | 99.3 |
| AIME24 | - | 72.3 | 88.3 |
| AIME25 | - | 63.6 | 80 |
| ZebraLogic | - | 69.5 | 77 |
| BFCL v3 | - | 55.34 | 60.2 |
| LiveCodeBench v5 (2024.10 - 2025.2) | - | 50.03 | 65 |
| LiveCodeBench v5 | 0-shot, CoT | 61.66 | 60.1 |
| HumanEval | 0-shot | 93.2 | 93.2 |
| Average | - | 75.45 | 79.71 |
How to use in vLLM
The PR adding support for the Motif model to the official vLLM package is currently under review. In the meantime, use the setup below to run our model with vLLM. Our model supports a sequence length of up to 64K tokens.
```shell
# Run the vLLM API server:
VLLM_ATTENTION_BACKEND=DIFFERENTIAL_FLASH_ATTN \
vllm serve Motif-Technologies/Motif-2-12.7B-Reasoning \
    --trust-remote-code \
    --max-model-len 65536 \
    --tensor-parallel-size 8

# Send a request with curl:
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital city of South Korea?"}
        ],
        "temperature": 0.6
    }'
```
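The same request can also be sent from Python. A minimal streaming sketch, assuming the `openai` client package (v1+) is installed; streaming is optional, but convenient for a reasoning model whose long thinking phases then arrive incrementally:

```python
# Stream a chat completion from the local vLLM server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # no key needed locally
stream = client.chat.completions.create(
    model="Motif-Technologies/Motif-2-12.7B-Reasoning",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital city of South Korea?"},
    ],
    temperature=0.6,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
```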
How to use advanced vLLM options
For maximum performance, we highly recommend using the options below:
- `--compilation_config '{"full_cuda_graph": true}'`: activates full CUDA graph capture.
- `--rope-scaling '{"rope_type":"yarn","factor":2.0,"original_max_position_embeddings":65536}'`: applies YaRN scaling to support a 128K context length.
- `--enable-auto-tool-choice --tool-call-parser hermes`: enables tool calling.
- `--logits-processors logit_:WrappedPerReqLogitsProcessor`: enables a ratio-based thinking budget and repetition-based auto-stop. The model is guided to think for `(model_max_len - input_prompt_len) * VLLM_THINK_BUDGET_RATIO` tokens, using the rest of the context window to generate the response (a worked example follows this list).
- `--reasoning-parser deepseek_r1`: parses reasoning outputs.
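To make the thinking budget concrete, a small worked example (the prompt length is hypothetical; the ratio matches the launch command below):

```python
# Hypothetical thinking-budget arithmetic for the logits processor above.
model_max_len = 131072    # --max-model-len
input_prompt_len = 2000   # example prompt length (assumption)
ratio = 0.95              # VLLM_THINK_BUDGET_RATIO

think_budget = int((model_max_len - input_prompt_len) * ratio)
response_budget = (model_max_len - input_prompt_len) - think_budget
print(think_budget, response_budget)  # 122618 6454
```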
First download the bundled logits processors from the Hub and make them importable:

```shell
# Download the logit_processors directory from the model repo:
pip install -U "huggingface_hub[cli]"
hf download Motif-Technologies/Motif-2-12.7B-Reasoning \
    --include "logit_processors/*" \
    --local-dir ./

# Make the processors importable by vLLM:
export PYTHONPATH="$PWD/logit_processors"
```
Then launch the server with the recommended options:

```shell
VLLM_ATTENTION_BACKEND=DIFFERENTIAL_FLASH_ATTN \
VLLM_THINK_BUDGET_RATIO=0.95 \
vllm serve Motif-Technologies/Motif-2-12.7B-Reasoning \
    --trust-remote-code \
    --compilation_config '{"full_cuda_graph": true}' \
    --rope-scaling '{"rope_type":"yarn","factor":2.0,"original_max_position_embeddings":65536}' \
    --max-model-len 131072 \
    --tensor-parallel-size 8 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes \
    --logits-processors logit_:WrappedPerReqLogitsProcessor \
    --reasoning-parser deepseek_r1
```
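With `--reasoning-parser deepseek_r1` enabled, the server separates the model's thinking from its final answer in the response payload. A minimal sketch of reading both parts, assuming the `openai` client and the server configuration above; `reasoning_content` is vLLM's extension to the standard chat-completions schema:

```python
# Read the parsed reasoning trace and the final answer separately.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Motif-Technologies/Motif-2-12.7B-Reasoning",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    temperature=0.6,
)
message = response.choices[0].message
# vLLM attaches the reasoning trace as an extra field on the message.
print("reasoning:", getattr(message, "reasoning_content", None))
print("answer:", message.content)
```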
Model tree for Motif-Technologies/Motif-2-12.7B-Reasoning
Base model: Motif-Technologies/Motif-2-12.7B-Base