Instructions for using ReallyFloppyPenguin/Arch-Agent-7B-GGUF with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use ReallyFloppyPenguin/Arch-Agent-7B-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ReallyFloppyPenguin/Arch-Agent-7B-GGUF",
    filename="model-f16.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
```
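Since Arch-Agent-7B is a chat-style agent model, the chat-completion API is usually a better fit than raw text completion. A minimal sketch reusing the `llm` object from above, assuming the GGUF metadata includes a chat template (llama-cpp-python applies it automatically when present):

```python
# A hedged sketch: reuses `llm` from the example above and assumes the GGUF
# file ships with a chat template in its metadata.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List three uses of a 7B agent model."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```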
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use ReallyFloppyPenguin/Arch-Agent-7B-GGUF with llama.cpp:
Install with Homebrew (macOS/Linux)
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16

# Run inference directly in the terminal:
llama-cli -hf ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16
```
Install with WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16

# Run inference directly in the terminal:
llama-cli -hf ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16

# Run inference directly in the terminal:
./llama-cli -hf ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16
```
Use Docker
```sh
docker model run hf.co/ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16
```
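Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A minimal sketch for querying it from Python, assuming the default port 8080 (check the server's startup log if you changed it):

```python
# pip install requests
# A hedged sketch: assumes llama-server is listening on its default port 8080.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello! Who are you?"}],
        "max_tokens": 128,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```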
- LM Studio
- Jan
- Ollama
How to use ReallyFloppyPenguin/Arch-Agent-7B-GGUF with Ollama:
```sh
ollama run hf.co/ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16
```
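After the model has been pulled, you can also call it programmatically. A minimal sketch using the `ollama` Python package (an assumption on my part; any OpenAI-compatible client pointed at Ollama's local API works too):

```python
# pip install ollama
# A hedged sketch: assumes the Ollama server is running locally and the
# model was already pulled with the `ollama run` command above.
import ollama

response = ollama.chat(
    model="hf.co/ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16",
    messages=[{"role": "user", "content": "Summarize what a GGUF file is."}],
)
print(response["message"]["content"])
```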
- Unsloth Studio
How to use ReallyFloppyPenguin/Arch-Agent-7B-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ReallyFloppyPenguin/Arch-Agent-7B-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ReallyFloppyPenguin/Arch-Agent-7B-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for ReallyFloppyPenguin/Arch-Agent-7B-GGUF to start chatting
```
- Docker Model Runner
How to use ReallyFloppyPenguin/Arch-Agent-7B-GGUF with Docker Model Runner:
```sh
docker model run hf.co/ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16
```
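Docker Model Runner also exposes an OpenAI-compatible endpoint. The host, port, and path below are assumptions based on its documented defaults (TCP host access on port 12434 must be enabled in Docker Desktop), so treat this as a sketch and confirm against the `docker model` docs:

```python
# pip install requests
# A hedged sketch: host, port, and path are assumptions based on Docker
# Model Runner's documented defaults; enable TCP host access first.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```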
- Lemonade
How to use ReallyFloppyPenguin/Arch-Agent-7B-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull ReallyFloppyPenguin/Arch-Agent-7B-GGUF:F16
```
Run and chat with the model
```sh
lemonade run user.Arch-Agent-7B-GGUF-F16
```
List all available models
```sh
lemonade list
```
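Lemonade Server also speaks the OpenAI API. The base URL below (port 8000, `/api/v1` prefix) is an assumption drawn from Lemonade's defaults; verify it against the Lemonade docs for your version:

```python
# pip install requests
# A hedged sketch: port 8000 and the /api/v1 prefix are assumptions based
# on Lemonade Server's defaults; adjust if your install differs.
import requests

resp = requests.post(
    "http://localhost:8000/api/v1/chat/completions",
    json={
        "model": "user.Arch-Agent-7B-GGUF-F16",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```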
katanemo/Arch-Agent-7B - GGUF
This repository contains GGUF quantizations of katanemo/Arch-Agent-7B.
About GGUF
GGUF is the model file format used by llama.cpp and other compatible inference engines. Quantized GGUF files store the model weights at reduced precision, shrinking file size and memory use so that large language models can run on consumer hardware.
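You can inspect what a GGUF file contains without loading the model. A minimal sketch using the `gguf` Python package from the llama.cpp project (the exact reader API is an assumption; check the package docs for your version):

```python
# pip install gguf
# A hedged sketch: assumes the gguf package's GGUFReader API; prints the
# per-tensor quantization types recorded in the file's metadata.
from gguf import GGUFReader

reader = GGUFReader("model-q4_0.gguf")
for tensor in reader.tensors[:5]:  # first few tensors only
    print(tensor.name, tensor.tensor_type.name, tuple(tensor.shape))
```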
Files
| Filename | Quant type | File Size | Description |
|---|---|---|---|
| model-f16.gguf | F16 | Large | Original precision |
| model-q4_0.gguf | Q4_0 | Small | 4-bit quantization |
| model-q4_1.gguf | Q4_1 | Small | 4-bit quantization (higher quality) |
| model-q5_0.gguf | Q5_0 | Medium | 5-bit quantization |
| model-q5_1.gguf | Q5_1 | Medium | 5-bit quantization (higher quality) |
| model-q8_0.gguf | Q8_0 | Large | 8-bit quantization |
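To download just one of the files above rather than the whole repository, a minimal sketch using `huggingface_hub` (pick whichever filename from the table fits your hardware):

```python
# pip install huggingface_hub
# Downloads a single quantization file from this repo and returns its local path.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="ReallyFloppyPenguin/Arch-Agent-7B-GGUF",
    filename="model-q4_0.gguf",  # any filename from the table above
)
print(model_path)
```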
Usage
You can use these models with llama.cpp or any other GGUF-compatible inference engine.
llama.cpp
```sh
./llama-cli -m model-q4_0.gguf -p "Your prompt here"
```
Python (using llama-cpp-python)
```python
from llama_cpp import Llama

llm = Llama(model_path="model-q4_0.gguf")
output = llm("Your prompt here", max_tokens=512)
print(output['choices'][0]['text'])
```
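For interactive use you can stream tokens as they are generated instead of waiting for the full completion; a minimal sketch reusing the same `llm` object:

```python
# Streams tokens as they are generated; reuses `llm` from the example above.
for chunk in llm("Your prompt here", max_tokens=512, stream=True):
    print(chunk['choices'][0]['text'], end="", flush=True)
print()
```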
Original Model
This is a quantized version of katanemo/Arch-Agent-7B. Please refer to the original model card for more information about the model's capabilities, training data, and usage guidelines.
Conversion Details
- Converted using llama.cpp
- Original model downloaded from Hugging Face
- Multiple quantization levels provided for different use cases
License
This model inherits the license from the original model. Please check the original model's license for usage terms.
Model tree for ReallyFloppyPenguin/Arch-Agent-7B-GGUF
Base model: Qwen/Qwen2.5-7B