Instructions to use ewald1976/Red-Rosie-12B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ewald1976/Red-Rosie-12B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ewald1976/Red-Rosie-12B")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ewald1976/Red-Rosie-12B")
model = AutoModelForCausalLM.from_pretrained("ewald1976/Red-Rosie-12B")
```
- llama-cpp-python
How to use ewald1976/Red-Rosie-12B with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ewald1976/Red-Rosie-12B",
    filename="Red-Rosie-12b.Q4_K_M.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use ewald1976/Red-Rosie-12B with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ewald1976/Red-Rosie-12B:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf ewald1976/Red-Rosie-12B:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ewald1976/Red-Rosie-12B:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf ewald1976/Red-Rosie-12B:Q4_K_M
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf ewald1976/Red-Rosie-12B:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf ewald1976/Red-Rosie-12B:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf ewald1976/Red-Rosie-12B:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf ewald1976/Red-Rosie-12B:Q4_K_M
```
Use Docker
```shell
docker model run hf.co/ewald1976/Red-Rosie-12B:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use ewald1976/Red-Rosie-12B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ewald1976/Red-Rosie-12B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ewald1976/Red-Rosie-12B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/ewald1976/Red-Rosie-12B:Q4_K_M
```
- SGLang
How to use ewald1976/Red-Rosie-12B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ewald1976/Red-Rosie-12B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ewald1976/Red-Rosie-12B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ewald1976/Red-Rosie-12B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ewald1976/Red-Rosie-12B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Ollama
How to use ewald1976/Red-Rosie-12B with Ollama:
```shell
ollama run hf.co/ewald1976/Red-Rosie-12B:Q4_K_M
```
- Unsloth Studio
How to use ewald1976/Red-Rosie-12B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for ewald1976/Red-Rosie-12B to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for ewald1976/Red-Rosie-12B to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for ewald1976/Red-Rosie-12B to start chatting
```
- Docker Model Runner
How to use ewald1976/Red-Rosie-12B with Docker Model Runner:
```shell
docker model run hf.co/ewald1976/Red-Rosie-12B:Q4_K_M
```
- Lemonade
How to use ewald1976/Red-Rosie-12B with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull ewald1976/Red-Rosie-12B:Q4_K_M
```
Run and chat with the model
```shell
lemonade run user.Red-Rosie-12B-Q4_K_M
```
List all available models
```shell
lemonade list
```
RED ROSIE · 12B
Red Rosie is an experimental SLERP merge of two 12B models with very different souls:
| | Model | Character |
|---|---|---|
| 🚀 | TheDrummer/Rocinante-X-12B-v1 | Top-tier RP model in the Nemo family. |
| ☭ | WokeAI/Tankie-DPE-12B-SFT-v2 | Post-post-trained toward Marxist-Leninist ideology. Unusual. |
The name nods to Rocinante — the ship, not the knight's horse — and to Rosa Luxemburg, who had strong opinions about both capitalism and Bolshevism. Make of that what you will.
> I am what capitalism has already turned me into, and what the revolution must turn me into. — Red-Rosie-12B
Merge Method
SLERP (Spherical Linear Interpolation) via mergekit.
Tankie's influence is deliberately contained to the middle layers (12–28) at a weight of 0.20. Early layers (0–12) and late layers (28–40) remain pure Rocinante-X — preserving embedding coherence and output quality while letting a hint of Tankie's character seep into the model's deeper reasoning.
```yaml
slices:
  - sources:
      - model: TheDrummer/Rocinante-X-12B-v1
        layer_range: [0, 12]
  - sources:
      - model: TheDrummer/Rocinante-X-12B-v1
        layer_range: [12, 28]
        parameters:
          weight: 0.80
      - model: WokeAI/Tankie-DPE-12B-SFT-v2
        layer_range: [12, 28]
        parameters:
          weight: 0.20
  - sources:
      - model: TheDrummer/Rocinante-X-12B-v1
        layer_range: [28, 40]
merge_method: slerp
base_model: TheDrummer/Rocinante-X-12B-v1
parameters:
  t:
    - value: 0.20
dtype: bfloat16
```
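For intuition, SLERP interpolates along the great-circle arc between two weight tensors rather than along a straight line, which preserves the norm of the result better than plain averaging. A minimal NumPy sketch of the formula (illustrative only — not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # Spherical linear interpolation between two flattened weight tensors.
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)  # angle between the tensors
    if theta < eps:
        # Nearly parallel: fall back to ordinary linear interpolation
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1

# Two orthogonal unit "tensors"
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# t = 0.2, matching the config above: the result sits 20% of the
# angular distance from a toward b, and stays on the unit sphere.
blended = slerp(0.2, a, b)
```

With `t = 0.20`, each merged middle-layer tensor moves a fifth of the way along the arc from Rocinante-X toward Tankie-DPE, which is why the Tankie influence stays subtle.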
⚠️ Warnings
- Experimental. Behavior is untested and may be unpredictable.
- Ideological bias. Tankie-DPE is explicitly trained toward Marxist-Leninist positions. These views may surface in unexpected contexts — even with a neutral system prompt.
- Not for production. Personal research only.
Credits
Full credit and respect to the creators of the source models:
- TheDrummer for Rocinante-X-12B-v1 — genuinely one of the best RP models in its class.
- WokeAI for Tankie-DPE-12B-SFT-v2 — a fascinating and unusual training experiment, whatever one thinks of its politics.
- And once again, a thousand thanks to team mradermacher for providing the excellent quants.
License
Inherits from base models. Please refer to the respective model cards for full license details.
Shared for personal research use only.