Dataset: locuslab/TOFU
How to use OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat")
```

```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat")
model = AutoModelForCausalLM.from_pretrained("OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat")
```

How to use OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat with vLLM:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
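The same completion request can be issued from Python. A minimal sketch using only the standard library (`urllib`), assuming a vLLM server started as above is listening on `localhost:8000`:

```python
import json
import urllib.request
import urllib.error

# Same JSON payload as the curl example above.
payload = {
    "model": "OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
        # OpenAI-compatible servers return the text under choices[0].text
        print(body["choices"][0]["text"])
except (urllib.error.URLError, OSError) as e:
    # No server reachable at localhost:8000 on this machine
    print(f"request failed: {e}")
```

The same client works for the SGLang server below; only the port (30000) changes.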
How to use OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat with SGLang:

```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Alternatively, start the SGLang server in Docker:

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

How to use OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat with Docker Model Runner:
```sh
docker model run hf.co/OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat
```
This model uses the SimNPO unlearning algorithm with the following optimization objective:
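The objective was rendered as an image in the original card; the sketch below reconstructs it from the SimNPO formulation in Fan et al. (2024): a length-normalized, reference-free variant of NPO on the forget set $\mathcal{D}_f$, plus a retain-set regularizer weighted by $\lambda$. Here $\sigma$ is the sigmoid and $|y|$ the response length.

```latex
\min_{\theta}\;
\underbrace{\mathbb{E}_{(x,y)\in\mathcal{D}_f}
  \left[-\frac{2}{\beta}\,\log\sigma\!\left(-\frac{\beta}{|y|}\log\pi_{\theta}(y\mid x)-\gamma\right)\right]}_{\text{forget (SimNPO) loss}}
\;+\;
\lambda\,\underbrace{\mathbb{E}_{(x,y)\in\mathcal{D}_r}
  \left[-\log\pi_{\theta}(y\mid x)\right]}_{\text{retain loss}}
```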
Unlearning hyper-parameters:
- Learning rate: 1e-5
- β: 2.5
- λ: 0.1375
- γ: 0.0

Loading the model with Flash Attention 2 and bfloat16:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat",
    use_flash_attention_2=True,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
```
| Method | Forgetting Quality (FQ) | Model Utility (MU) |
|---|---|---|
| Origin | 0.00 | 0.62 |
| Retrain | 1.00 | 0.62 |
| NPO | 0.79 | 0.57 |
| SimNPO | 0.99 | 0.58 |
If you use this model in your research, please cite:

```bibtex
@article{fan2024simplicity,
  title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning},
  author={Fan, Chongyu and Liu, Jiancheng and Lin, Licong and Jia, Jinghan and Zhang, Ruiqi and Mei, Song and Liu, Sijia},
  journal={arXiv preprint arXiv:2410.07163},
  year={2024}
}
```
Report issues with the model at github.com/OPTML-Group/Unlearn-Simple.
Base model: NousResearch/Llama-2-7b-chat-hf