Instructions to use yhavinga/GLM-4.7-REAP-40p-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use yhavinga/GLM-4.7-REAP-40p-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="yhavinga/GLM-4.7-REAP-40p-GGUF",
    filename="GLM-4.7-REAP-40p-IQ3_S.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
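create_chat_completion returns an OpenAI-style response dict; a minimal sketch of capturing and printing the assistant's reply, reusing the llm object created above (the prompt is illustrative):

# Capture the response and print the assistant's message
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
print(response["choices"][0]["message"]["content"])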
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use yhavinga/GLM-4.7-REAP-40p-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S

# Run inference directly in the terminal:
llama-cli -hf yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S

# Run inference directly in the terminal:
llama-cli -hf yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S

# Run inference directly in the terminal:
./llama-cli -hf yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S

# Run inference directly in the terminal:
./build/bin/llama-cli -hf yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
Use Docker
docker model run hf.co/yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
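Whichever install path you choose, llama-server exposes an OpenAI-compatible API (on port 8080 by default). A minimal sketch of querying it with curl; the prompt is illustrative:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'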
- LM Studio
- Jan
- Ollama
How to use yhavinga/GLM-4.7-REAP-40p-GGUF with Ollama:
ollama run hf.co/yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
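Once pulled, the model is also reachable through Ollama's local REST API (default port 11434). A minimal sketch with curl, assuming a default Ollama install:

curl http://localhost:11434/api/chat \
  -d '{
    "model": "hf.co/yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "stream": false
  }'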
- Unsloth Studio
How to use yhavinga/GLM-4.7-REAP-40p-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for yhavinga/GLM-4.7-REAP-40p-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for yhavinga/GLM-4.7-REAP-40p-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for yhavinga/GLM-4.7-REAP-40p-GGUF to start chatting
- Pi
How to use yhavinga/GLM-4.7-REAP-40p-GGUF with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use yhavinga/GLM-4.7-REAP-40p-GGUF with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
Run Hermes
hermes
- Docker Model Runner
How to use yhavinga/GLM-4.7-REAP-40p-GGUF with Docker Model Runner:
docker model run hf.co/yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
- Lemonade
How to use yhavinga/GLM-4.7-REAP-40p-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull yhavinga/GLM-4.7-REAP-40p-GGUF:IQ3_S
Run and chat with the model
lemonade run user.GLM-4.7-REAP-40p-GGUF-IQ3_S
List all available models
lemonade list
GLM-4.7-REAP-40p GGUF
Quantized GGUF versions of 0xSero/GLM-4.7-REAP-40, a REAP-pruned Mixture-of-Experts model.
Model Details
- Base Model: zai-org/GLM-4.7 (355B total parameters)
- Pruning: REAP (Router-weighted Expert Activation Pruning), with 40% of experts pruned, leaving 96 routed experts
- Architecture: Mixture-of-Experts (MoE) with QK-RMS normalization
- Quantization: GGUF format for llama.cpp inference
Usage with llama.cpp
# Install llama.cpp (requires b7618 or later)
# Download from https://github.com/ggml-org/llama.cpp/releases
# Run inference
llama-cli -m GLM-4.7-REAP-40p-Q5_K_M.gguf \
-p "Your prompt here" \
-n 512 \
-c 4096 \
-t 8
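The same local file can also back an OpenAI-compatible server instead of a one-shot CLI run. A minimal sketch; the port and context size are illustrative:

# Serve the local GGUF file over an OpenAI-compatible API
llama-server -m GLM-4.7-REAP-40p-Q5_K_M.gguf \
  -c 4096 \
  --port 8080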
Python API
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.7-REAP-40p-Q5_K_M.gguf",
    n_ctx=32768,
    n_gpu_layers=-1,  # offload all layers to GPU
    verbose=True
)

output = llm(
    "Q: What is the capital of France? A:",
    max_tokens=128,
    stop=["Q:", "\n"],
    echo=True
)
print(output['choices'][0]['text'])
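The same Llama object also supports the chat API, which applies the chat template embedded in the GGUF metadata. A minimal streaming sketch, assuming the files ship with GLM's chat template:

# Stream tokens as they are generated (OpenAI-style chunks)
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)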
Important Notes
NextN Layer Fix
The original REAP model config has num_nextn_predict_layers=1 but the actual weights don't contain a NextN layer (it was pruned away). These GGUF files have been converted with the corrected config (num_nextn_predict_layers=0), so they will load without errors.
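For reference, the correction amounts to this one field in the config used for conversion (excerpt only; all other fields unchanged):

{
  "num_nextn_predict_layers": 0
}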
Performance
- CPU-only: ~0.8 tokens/sec (Q3_K_S on a modern CPU)
- GPU offload: Recommended for usable inference speeds
- VRAM requirements: Model size + ~2GB overhead for context
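As a rough sketch of the VRAM rule of thumb above (the 2 GB overhead figure is the estimate from the bullet, not a measured constant):

import os

def estimate_vram_gb(gguf_path: str, overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate: model file size plus context overhead."""
    size_gb = os.path.getsize(gguf_path) / 1024**3
    return size_gb + overhead_gb

print(f"{estimate_vram_gb('GLM-4.7-REAP-40p-Q5_K_M.gguf'):.1f} GB needed")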
Original Model
For the original FP16 HuggingFace format, see 0xSero/GLM-4.7-REAP-40.
License
This model inherits the license from the base model. See GLM-4 License for details.
Quantization Details
All quantizations use --output-tensor-type Q8_0 for the output layer to preserve generation quality. Quantization was performed using llama.cpp b7618.
- Conversion: HuggingFace → GGUF FP16 → Quantized GGUF
- Tool: llama.cpp b7618
- Platform: Google Cloud TPU v6e (spot instance)
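Put together, the quantization step described above likely looked something like this; file names are illustrative, and both tools ship with llama.cpp:

# Convert HF weights to FP16 GGUF (script ships with llama.cpp)
python convert_hf_to_gguf.py ./GLM-4.7-REAP-40 --outfile GLM-4.7-REAP-40p-F16.gguf

# Quantize, keeping the output layer at Q8_0
llama-quantize --output-tensor-type q8_0 \
  GLM-4.7-REAP-40p-F16.gguf GLM-4.7-REAP-40p-Q5_K_M.gguf Q5_K_M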