Tags: Text Generation · Transformers · Safetensors · Spanish · qwen2 · tiny-model · sarcasm · conversational · text-generation-inference
Instructions for using Fifthoply/AyudaAlan-0.1 with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use Fifthoply/AyudaAlan-0.1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Fifthoply/AyudaAlan-0.1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Fifthoply/AyudaAlan-0.1")
model = AutoModelForCausalLM.from_pretrained("Fifthoply/AyudaAlan-0.1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Fifthoply/AyudaAlan-0.1 with vLLM:
Install from pip and serve the model:

```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Fifthoply/AyudaAlan-0.1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Fifthoply/AyudaAlan-0.1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
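Because the server speaks the OpenAI-compatible API, you can also call it from Python. A minimal sketch using the official `openai` client package (the package, the `"EMPTY"` api key, and the default `localhost:8000` endpoint are assumptions based on vLLM's defaults, not part of this card):

```python
# Query the running vLLM server via its OpenAI-compatible endpoint.
from openai import OpenAI

# Assumption: vLLM serving on its default host/port; no API key configured.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Fifthoply/AyudaAlan-0.1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```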
- SGLang
How to use Fifthoply/AyudaAlan-0.1 with SGLang:
Install from pip and serve the model:

```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Fifthoply/AyudaAlan-0.1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Fifthoply/AyudaAlan-0.1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Fifthoply/AyudaAlan-0.1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Fifthoply/AyudaAlan-0.1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use Fifthoply/AyudaAlan-0.1 with Docker Model Runner:
```bash
docker model run hf.co/Fifthoply/AyudaAlan-0.1
```
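Docker Model Runner also exposes an OpenAI-compatible HTTP API. A hedged sketch of calling it from Python (the host-side TCP port 12434 and the `/engines/v1` path are assumptions drawn from Docker's Model Runner defaults, not from this card; check your Docker Desktop settings):

```python
# ASSUMPTIONS (not from this card): host-side TCP access is enabled in
# Docker Desktop and Model Runner listens on the default port 12434.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="none")
response = client.chat.completions.create(
    model="hf.co/Fifthoply/AyudaAlan-0.1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```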
# Ayuda Alan 0.1

## Example behavior
- User: ¿Cómo escalo una montaña? ("How do I climb a mountain?")
- Model: Escalándola. ("By climbing it.")
## How to use the model

The easiest way is the Hugging Face Space created for this model:

https://huggingface.co/spaces/Fifthoply/AyudaAlan-0.1

To use the model on your own machine, make sure the transformers library is installed; no GPU is needed. The script below also uses `device_map="auto"`, which requires accelerate:
```bash
pip install transformers torch accelerate
```
Then you can use the following Python script:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# 1. Load the model and tokenizer
model_id = "Fifthoply/AyudaAlan-0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # switch to torch.float32 if half precision causes issues on CPU
    device_map="auto",          # requires accelerate; uses the GPU if available, otherwise the CPU
)

# 2. Build the prompt (ChatML format)
pregunta = "¿Cómo se escala una montaña?"  # "How do you climb a mountain?"
prompt = f"<|im_start|>system\nEres un asistente breve.<|im_end|>\n<|im_start|>user\n{pregunta}<|im_end|>\n<|im_start|>assistant\n"

# 3. Tokenize and generate a response (on the same device as the model)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

# 4. Print the result
respuesta = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(respuesta.split("assistant")[-1].strip())
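```

Since the Transformers snippet earlier on this card already uses `apply_chat_template`, the tokenizer presumably ships its own ChatML chat template. As a sketch continuing from the script above, you can let the tokenizer build the prompt instead of writing the string by hand:

```python
# Sketch: build the ChatML prompt via the tokenizer's chat template
# (assumes the tokenizer ships one, as the snippet earlier on this card suggests).
messages = [
    {"role": "system", "content": "Eres un asistente breve."},
    {"role": "user", "content": "¿Cómo se escala una montaña?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model answers
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=20)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```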
Note: this model was trained on a dataset of phrases with extreme redundancy, which is the main point of the AyudaAlan joke. Its general knowledge is still present, but its response style will always prioritize extreme brevity.