How to use tmnam20/xlm-roberta-base-vsfc-1 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-classification", model="tmnam20/xlm-roberta-base-vsfc-1")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("tmnam20/xlm-roberta-base-vsfc-1")
model = AutoModelForSequenceClassification.from_pretrained("tmnam20/xlm-roberta-base-vsfc-1")

This model is a fine-tuned version of xlm-roberta-base on the tmnam20/VieGLUE/VSFC dataset. It achieves the results shown in the training table below on the evaluation set.
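The sequence-classification head returns raw logits, one per class; a softmax converts them to probabilities and the argmax gives the predicted label. A minimal sketch of that post-processing step, using hypothetical logits and an assumed three-way sentiment label order (the card does not list the model's id-to-label mapping):

```python
import math

# Hypothetical logits for one input, as a 3-way classification
# head would return them (values chosen for illustration only).
logits = [2.1, -0.3, 0.4]
labels = ["negative", "neutral", "positive"]  # assumed label order

# Softmax: exponentiate and normalize to get class probabilities.
# Subtracting the max first keeps the exponentials numerically stable.
exps = [math.exp(x - max(logits)) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The predicted label is the argmax of the probabilities.
pred = labels[max(range(len(probs)), key=probs.__getitem__)]
print(pred, round(max(probs), 3))
```

In practice the `pipeline` call above performs this conversion internally and returns the label string and score directly.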
Training results:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.207 | 1.4 | 500 | 0.2353 | 0.9400 |
| 0.1438 | 2.79 | 1000 | 0.2245 | 0.9450 |
Base model: FacebookAI/xlm-roberta-base