Instructions for using afrideva/TinyKAI-3B-beta-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use afrideva/TinyKAI-3B-beta-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="afrideva/TinyKAI-3B-beta-GGUF",
    filename="tinykai-3b-beta.q2_k.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
```
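llama-cpp-python also exposes a chat-style `create_chat_completion` API that takes an OpenAI-style message list. A minimal sketch under that assumption; the helper below only builds the message list (the system prompt text is illustrative), and the commented lines show how it would be passed to a model handle loaded as in the example above:

```python
# Sketch: building an OpenAI-style message list for
# llama-cpp-python's create_chat_completion API.

def build_messages(user_prompt: str,
                   system_prompt: str = "You are a helpful assistant."):
    """Return a message list in the shape create_chat_completion expects."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Once upon a time,")
print(messages)

# With a model handle `llm` loaded as in the completion example above:
# from llama_cpp import Llama
# response = llm.create_chat_completion(messages=messages, max_tokens=256)
# print(response["choices"][0]["message"]["content"])
```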
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use afrideva/TinyKAI-3B-beta-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M
```
Use pre-built binary
```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M
```
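Once `llama-server` is running, it can be queried from Python through its OpenAI-compatible API using only the standard library. A sketch, assuming llama-server's default port of 8080; the helper only constructs the HTTP request, and the commented lines show the actual call:

```python
# Sketch: building a request against llama-server's OpenAI-compatible
# /v1/completions endpoint (default port 8080 is an assumption; adjust
# if the server was started with a different --port).
import json
import urllib.request


def completion_request(prompt: str, host: str = "http://localhost:8080"):
    """Build a POST request for llama-server's /v1/completions endpoint."""
    payload = {"prompt": prompt, "max_tokens": 128, "temperature": 0.7}
    return urllib.request.Request(
        f"{host}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


req = completion_request("Once upon a time,")
print(req.full_url)

# With the server running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```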
- LM Studio
- Jan
- vLLM
How to use afrideva/TinyKAI-3B-beta-GGUF with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "afrideva/TinyKAI-3B-beta-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "afrideva/TinyKAI-3B-beta-GGUF",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```sh
docker model run hf.co/afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M
```
- Ollama
How to use afrideva/TinyKAI-3B-beta-GGUF with Ollama:
```sh
ollama run hf.co/afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M
```
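After the model is pulled, the Ollama daemon also serves a local REST API that can be called from Python with the standard library alone. A sketch, assuming Ollama's default port of 11434 and its documented `/api/generate` route; the helper only builds the request, with the live call commented out:

```python
# Sketch: building a request for the Ollama daemon's /api/generate
# endpoint (default port 11434 is an assumption about your setup).
import json
import urllib.request


def generate_request(prompt: str, host: str = "http://localhost:11434"):
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {
        "model": "hf.co/afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M",
        "prompt": prompt,
        "stream": False,
    }
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


req = generate_request("Once upon a time,")
print(req.full_url)

# With the Ollama daemon running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```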
- Unsloth Studio
How to use afrideva/TinyKAI-3B-beta-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for afrideva/TinyKAI-3B-beta-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for afrideva/TinyKAI-3B-beta-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for afrideva/TinyKAI-3B-beta-GGUF to start chatting
```
- Docker Model Runner
How to use afrideva/TinyKAI-3B-beta-GGUF with Docker Model Runner:
```sh
docker model run hf.co/afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M
```
- Lemonade
How to use afrideva/TinyKAI-3B-beta-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull afrideva/TinyKAI-3B-beta-GGUF:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.TinyKAI-3B-beta-GGUF-Q4_K_M
```
List all available models
```sh
lemonade list
```
Keynote-Technology/TinyKAI-3B-beta-GGUF
Quantized GGUF model files for TinyKAI-3B-beta from Keynote-Technology
| Name | Quant method | Size |
|---|---|---|
| tinykai-3b-beta.q2_k.gguf | q2_k | 2.15 GB |
| tinykai-3b-beta.q3_k_m.gguf | q3_k_m | 2.27 GB |
| tinykai-3b-beta.q4_k_m.gguf | q4_k_m | 2.58 GB |
| tinykai-3b-beta.q5_k_m.gguf | q5_k_m | 2.76 GB |
| tinykai-3b-beta.q6_k.gguf | q6_k | 3.64 GB |
| tinykai-3b-beta.q8_0.gguf | q8_0 | 3.64 GB |
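A single quant file from the table can be fetched without cloning the whole repo, for example with `huggingface_hub`'s `hf_hub_download`. A sketch; the filename mapping below mirrors the table exactly, and the commented download call assumes `huggingface_hub` is installed:

```python
# Sketch: mapping quant methods from the table above to their exact
# filenames in this repo, for use with huggingface_hub's hf_hub_download.
QUANT_FILES = {
    "q2_k": "tinykai-3b-beta.q2_k.gguf",
    "q3_k_m": "tinykai-3b-beta.q3_k_m.gguf",
    "q4_k_m": "tinykai-3b-beta.q4_k_m.gguf",
    "q5_k_m": "tinykai-3b-beta.q5_k_m.gguf",
    "q6_k": "tinykai-3b-beta.q6_k.gguf",
    "q8_0": "tinykai-3b-beta.q8_0.gguf",
}


def quant_filename(method: str) -> str:
    """Map a quant method (e.g. 'Q4_K_M') to its file in this repo."""
    return QUANT_FILES[method.lower()]


print(quant_filename("Q4_K_M"))

# To actually download one file (pip install huggingface_hub):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(
#     repo_id="afrideva/TinyKAI-3B-beta-GGUF",
#     filename=quant_filename("q4_k_m"),
# )
```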
Original Model Card:
TinyKAI 3B is a fine-tuned large language model (LLM) based on OpenLlama 3B v2. The TinyKAI models are a series of lightweight LLMs with under 5 billion parameters, intended primarily for research use.
Direct Use
TinyKAI 3B is optimal for research on large language models, specifically the influence of web data on the properties of large language models (fairness, safety, limitations, capabilities, etc.).
Training
This model was trained on a mixture of the Falcon RefinedWeb dataset, the StarCoder dataset, and the Wikipedia, arXiv, book, and StackExchange portions of the RedPajama dataset.
Banned Use
Production use without an adequate assessment of risks and mitigations; any use case that may be considered irresponsible, harmful, or insulting to any person or group.
Limitations
TinyKAI 3B was trained on English data only and will not generate appropriate content in other languages. Because it was trained on data representative of the web, it carries the stereotypes and biases commonly encountered online.
Recommendations
We recommend that users of TinyKAI 3B fine-tune it for their specific use case and take appropriate precautions before any commercial use.
WARNING!
This model runs on an older version of transformers, v4.10.0, and therefore may be unstable.
Model tree for afrideva/TinyKAI-3B-beta-GGUF
- Base model: Keynote-Technology/TinyKAI-3B-v0.1