---
title: SimpleChatbot
emoji: πŸ€–
colorFrom: purple
colorTo: pink
sdk: streamlit
sdk_version: 1.31.1
app_file: app.py
pinned: false
license: mit
models:
  - openai/gpt-oss-120b
  - mistralai/Mistral-7B-Instruct-v0.2
  - google/gemma-3-27b-it
  - google/gemma-2-27b-it
  - google/gemma-2-9b-it
  - google/gemma-2-2b-it
  - HuggingFaceH4/zephyr-7b-gemma-v0.1
  - HuggingFaceH4/zephyr-7b-beta
  - meta-llama/Meta-Llama-3-8B-Instruct
  - meta-llama/Meta-Llama-3.1-8B-Instruct
  - deepseek-ai/DeepSeek-R1-Distill-Llama-70B
  - Qwen/Qwen2.5-Coder-32B-Instruct
  - moonshotai/Kimi-K2-Instruct
---

# πŸ€– SimpleChatbot

[![Streamlit](https://img.shields.io/badge/Built%20with-Streamlit-FF4B4B?logo=streamlit&logoColor=white)](https://streamlit.io/)
[![Hugging Face Space](https://img.shields.io/badge/Hugging%20Face-Space-yellow?logo=huggingface)](https://huggingface.co/spaces)
[![Python](https://img.shields.io/badge/Python-3.9%2B-blue?logo=python&logoColor=white)](https://www.python.org/)
[![Open Source](https://img.shields.io/badge/Open%20Source-Yes-green)]()

**SimpleChatbot** is a lightweight Streamlit app that lets you chat with multiple large language models (LLMs) from a single interface. It's built as a simple, interactive way to explore and compare different models without switching tools or setups.

---

## ✨ Features

- Chat with **multiple LLMs** from one UI
- Switch models on the fly (conversation resets automatically)
- Adjustable **temperature** for response creativity
- Built-in **rate limiting & cooldowns** for fair usage
- Optional **Hugging Face access token** support
- Session-based logging and usage tracking
- Clean, minimal Streamlit interface

---

## 🧠 Supported Models

This app supports a mix of general-purpose, reasoning, and coding-focused models, including:

- OpenAI GPT-OSS-120B
- Meta Llama 3 / 3.1
- Google Gemma (2 & 3 series)
- DeepSeek R1 (distilled)
- Qwen 2.5 Coder
- Kimi-K2 Instruct
- Zephyr & Mistral variants

> Available models may change depending on backend availability.
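The per-session rate limiting and cooldowns listed in the features can be sketched with a small sliding-window counter. This is an illustrative assumption about how such a limiter might look, not the app's actual implementation; the class name, limits, and method names are hypothetical.

```python
import time


class SessionRateLimiter:
    """Illustrative per-session rate limiter with a cooldown window (sketch)."""

    def __init__(self, max_calls=10, cooldown_seconds=60.0):
        self.max_calls = max_calls              # calls allowed per window
        self.cooldown_seconds = cooldown_seconds
        self.calls = []                         # timestamps of recent calls

    def allow(self, now=None):
        """Return True if another API call is allowed right now."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the cooldown window.
        self.calls = [t for t in self.calls if now - t < self.cooldown_seconds]
        if len(self.calls) >= self.max_calls:
            return False                        # still cooling down
        self.calls.append(now)
        return True
```

In a Streamlit app, an instance like this could be stored in `st.session_state` so that each browser session gets its own quota.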
---

## 🎯 Why This Exists

This project was built to:

- **Quickly compare LLM behavior** across different model families
- Provide a **simple reference implementation** for multi-model chat apps
- Demonstrate how to build a Streamlit chatbot using modern inference APIs
- Serve as a lightweight demo for experimentation, education, and prototyping

It intentionally avoids complex agent logic or tooling to keep the focus on **model responses and UX**.

---

## πŸš€ How to Use

1. Select a **model client** in the sidebar
2. Choose a **model**
3. Adjust the **temperature** if desired
4. Start chatting!

Responses are intentionally kept brief to keep interactions fast and readable.

---

## ⚠️ Notes

- Generated content may be **inaccurate or incorrect**
- API calls are limited per session to prevent abuse
- If a model fails, it may be temporarily unavailable or updating

---

## πŸ‘€ Author

Created by **Nigel Gebodh**

🌐 https://ngebodh.github.io/

Learn how to build this chatbot yourself:
πŸ‘‰ https://ngebodh.github.io/projects/2024-03-05/