How to use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="FreedomIntelligence/phoenix-inst-chat-7b")
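A minimal sketch of calling the pipeline; the prompt string and generation settings below are illustrative assumptions, not the prompt format the model was tuned on:

# Generate a short completion with the pipeline (illustrative prompt and settings)
output = pipe("What is machine learning?", max_new_tokens=64)
print(output[0]["generated_text"])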
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/phoenix-inst-chat-7b")
model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/phoenix-inst-chat-7b")
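A sketch of generating text with the directly loaded model; the prompt and decoding parameters are assumptions for illustration only:

# Tokenize an example prompt and generate a continuation (illustrative settings)
inputs = tokenizer("What is machine learning?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))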
Quick Links

Please see our LLMZoo project: https://github.com/FreedomIntelligence/LLMZoo.

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric Value
Avg. 40.69
ARC (25-shot) 44.71
HellaSwag (10-shot) 63.23
MMLU (5-shot) 39.06
TruthfulQA (0-shot) 47.08
Winogrande (5-shot) 62.83
GSM8K (5-shot) 1.29
DROP (3-shot) 26.62