Instructions for using GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF",
    filename="Gemma3-Code-Reasoning-4B.IQ4_XS.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M
```
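Once llama-server is running via any of the install paths above, it exposes an OpenAI-compatible HTTP API (by default on port 8080). A minimal Python sketch of querying it, assuming the `requests` package is installed and the server was started with one of the `llama-server -hf` commands shown earlier:

```python
# Minimal sketch: query a local llama-server instance through its
# OpenAI-compatible chat endpoint. Assumes the server is listening on the
# default port 8080.
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user",
             "content": "Write a Python function that checks whether a string is a palindrome."}
        ],
        "temperature": 1.0,
        "top_p": 0.95,
        "max_tokens": 1024,
    },
    timeout=600,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```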
- LM Studio
- Jan
- vLLM
How to use GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
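Besides curl, any OpenAI-compatible client can talk to the vLLM server started above. A minimal sketch with the official `openai` Python package; the API key is a placeholder, since the local server does not require one by default:

```python
# Minimal sketch: call the local vLLM server through the OpenAI Python client.
# Assumes `vllm serve "GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF"` is running
# on the default port 8000; the API key is a dummy value.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF",
    messages=[
        {"role": "user",
         "content": "Explain the two-pointer technique and give a short example."}
    ],
    temperature=1.0,
    top_p=0.95,
)
print(completion.choices[0].message.content)
```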
- Ollama
How to use GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF with Ollama:
```sh
ollama run hf.co/GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M
```
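After the model has been pulled with the command above, it can also be called programmatically. A minimal sketch with the `ollama` Python package, assuming the Ollama daemon is running locally and the model name matches the tag used in the pull:

```python
# Minimal sketch: chat with the model through the ollama Python package.
# Assumes the model was pulled with `ollama run hf.co/...:Q4_K_M` above.
import ollama

response = ollama.chat(
    model="hf.co/GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M",
    messages=[
        {"role": "user",
         "content": "Given an array of integers, return the length of the longest increasing subsequence."}
    ],
)
print(response["message"]["content"])
```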
- Unsloth Studio
How to use GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF to start chatting.
- Docker Model Runner
How to use GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF with Docker Model Runner:
```sh
docker model run hf.co/GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M
```
- Lemonade
How to use GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.Gemma3-Code-Reasoning-4B-GGUF-Q4_K_M
```
List all available models
```sh
lemonade list
```
Gemma3-Code-Reasoning-4B-GGUF
This repository contains GGUF quantized versions of the GetSoloTech/Gemma3-Code-Reasoning-4B model, optimized for local inference, with multiple quantization levels to balance quality and resource usage.
🎯 Model Overview
This is a LoRA-finetuned version of gemma-3-4b-it specifically optimized for competitive programming and code reasoning tasks. The model has been trained on the high-quality Code-Reasoning dataset to enhance its capabilities in solving complex programming problems with detailed reasoning.
🚀 Key Features
- Enhanced Code Reasoning: Specifically trained on competitive programming problems
- Thinking Capabilities: Inherits the advanced reasoning capabilities from the base model
- High-Quality Solutions: Trained on solutions with ≥85% test case pass rates
- Structured Output: Optimized for generating well-reasoned programming solutions
- Efficient Training: Uses LoRA adapters for efficient parameter updates
- Multiple Quantization Levels: Available in various GGUF formats for different hardware capabilities
📁 Available GGUF Models
| Model File | Size | Quantization | Use Case |
|---|---|---|---|
| Gemma3-Code-Reasoning-4B.f16.gguf | 7.77 GB | FP16 | Highest quality, requires the most VRAM |
| Gemma3-Code-Reasoning-4B.Q8_0.gguf | 4.13 GB | Q8_0 | High quality, good balance |
| Gemma3-Code-Reasoning-4B.Q6_K.gguf | 3.19 GB | Q6_K | Good quality, moderate VRAM usage |
| Gemma3-Code-Reasoning-4B.Q5_K_M.gguf | 2.83 GB | Q5_K_M | Balanced quality and size |
| Gemma3-Code-Reasoning-4B.Q4_K_M.gguf | 2.49 GB | Q4_K_M | Good compression, reasonable quality |
| Gemma3-Code-Reasoning-4B.Q3_K_M.gguf | 2.1 GB | Q3_K_M | Smaller size, moderate quality |
| Gemma3-Code-Reasoning-4B.Q2_K.gguf | 1.73 GB | Q2_K | Smallest size, basic quality |
| Gemma3-Code-Reasoning-4B.IQ4_XS.gguf | 2.28 GB | IQ4_XS | Importance-matrix 4-bit quant, good quality for its size |
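To fetch a single quantization from the table rather than the whole repository, the `huggingface_hub` library can download one file at a time. A minimal sketch, using the Q4_K_M file as an example:

```python
# Minimal sketch: download one quantization file from the repo with
# huggingface_hub. Pick the filename from the table above that matches
# your hardware.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF",
    filename="Gemma3-Code-Reasoning-4B.Q4_K_M.gguf",
)
print(f"Model downloaded to: {model_path}")
```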
🔧 Usage
Using with llama.cpp
```sh
# Download a GGUF model file
wget https://huggingface.co/GetSoloTech/Gemma3-Code-Reasoning-4B-GGUF/resolve/main/Gemma3-Code-Reasoning-4B.Q4_K_M.gguf

# Run inference with llama.cpp
llama-cli -m Gemma3-Code-Reasoning-4B.Q4_K_M.gguf -n 4096 --repeat-penalty 1.1 \
  -p "You are an expert competitive programmer. Solve this problem: [YOUR_PROBLEM_HERE]"
```
Using with Python (llama-cpp-python)
```python
from llama_cpp import Llama

# Load the model
llm = Llama(
    model_path="./Gemma3-Code-Reasoning-4B.Q4_K_M.gguf",
    n_ctx=4096,
    n_threads=4,
)

# Prepare the prompt
prompt = """You are an expert competitive programmer. Read the problem and produce a correct, efficient solution. Include reasoning if helpful.

Problem: [YOUR_PROGRAMMING_PROBLEM_HERE]

Solution:"""

# Generate the response
output = llm(
    prompt,
    max_tokens=4096,
    temperature=1.0,
    top_p=0.95,
    top_k=64,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```
🎛️ Recommended Settings
- Temperature: 1.0
- Top-p: 0.95
- Top-k: 64
- Max New Tokens: 4096 (adjust based on problem complexity)
- Repeat Penalty: 1.1
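These settings can also be passed to llama-cpp-python's chat interface. A minimal sketch, assuming the Q4_K_M file has already been downloaded to the current directory:

```python
# Minimal sketch: apply the recommended sampling settings through
# llama-cpp-python's chat API. Assumes the Q4_K_M GGUF file is local.
from llama_cpp import Llama

llm = Llama(model_path="./Gemma3-Code-Reasoning-4B.Q4_K_M.gguf", n_ctx=4096)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an expert competitive programmer."},
        {"role": "user", "content": "Count the number of inversions in an array in O(n log n) time."},
    ],
    max_tokens=4096,
    temperature=1.0,
    top_p=0.95,
    top_k=64,
    repeat_penalty=1.1,
)
print(result["choices"][0]["message"]["content"])
```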
💻 Hardware Requirements
| Quantization | Minimum VRAM | Recommended VRAM | CPU RAM |
|---|---|---|---|
| FP16 | 8 GB | 12 GB | 16 GB |
| Q8_0 | 5 GB | 8 GB | 12 GB |
| Q6_K | 4 GB | 6 GB | 10 GB |
| Q5_K_M | 3 GB | 5 GB | 8 GB |
| Q4_K_M | 3 GB | 4 GB | 6 GB |
| Q3_K_M | 2 GB | 3 GB | 4 GB |
| Q2_K | 2 GB | 2 GB | 3 GB |
| IQ4_XS | 3 GB | 4 GB | 6 GB |
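When VRAM is tight, llama-cpp-python can offload only part of the model to the GPU and keep the remaining layers in CPU RAM. A minimal sketch using the `n_gpu_layers` parameter; the layer count shown is illustrative, not tuned:

```python
# Minimal sketch: partial GPU offload with llama-cpp-python when the chosen
# quantization does not fit entirely in VRAM. n_gpu_layers=-1 offloads all
# layers; a smaller value keeps some layers in CPU RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./Gemma3-Code-Reasoning-4B.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=20,  # illustrative value; lower it if you run out of VRAM
)
```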
📈 Performance Expectations
This finetuned model is expected to show improved performance on:
- Competitive Programming Problems: Better understanding of problem constraints and requirements
- Code Generation: More accurate and efficient solutions
- Reasoning Quality: Enhanced step-by-step reasoning for complex problems
- Solution Completeness: More comprehensive solutions with proper edge case handling
🔗 Related Resources
- Base Model: GetSoloTech/Gemma3-Code-Reasoning-4B
- Training Dataset: GetSoloTech/Code-Reasoning
- Original Gemma Model: google/gemma-3-4b-it
- llama.cpp: GitHub Repository
- llama-cpp-python: PyPI Package
🤝 Contributing
This model was created using the Unsloth framework and the Code-Reasoning dataset. For questions about:
- The base model: Gemma3 Huggingface
- The training dataset: Code-Reasoning Repository
- The training framework: Unsloth Documentation
🙏 Acknowledgments
- Gemma Team for the excellent base model
- Unsloth Team for the efficient training framework
- NVIDIA Research for the original OpenCodeReasoning-2 dataset
- llama.cpp community for the GGUF format and tools
📞 Contact
For questions about this GGUF converted model, please open an issue in the repository.
Note: This model is specifically optimized for competitive programming and code reasoning tasks. Choose the appropriate quantization level based on your hardware capabilities and quality requirements.