Instructions for using argmining-vaccines/spanberta-base-cased-scientific_authority with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use argmining-vaccines/spanberta-base-cased-scientific_authority with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="argmining-vaccines/spanberta-base-cased-scientific_authority")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("argmining-vaccines/spanberta-base-cased-scientific_authority")
model = AutoModelForTokenClassification.from_pretrained("argmining-vaccines/spanberta-base-cased-scientific_authority")
```

- Notebooks
- Google Colab
- Kaggle
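The token-classification pipeline above returns one prediction per token; for span-level argument mining, contiguous B-/I- tags are usually grouped back into character spans. A minimal sketch of that grouping, assuming BIO-style labels and a hypothetical `AUTH` label (the model's actual label set is not shown here), applied to hand-written pipeline-style output rather than real model predictions:

```python
# Group BIO-tagged token predictions (shaped like the output of a
# Hugging Face token-classification pipeline with
# aggregation_strategy="none") into contiguous character-level spans.
def merge_bio_spans(predictions, text):
    spans = []
    current = None
    for pred in predictions:
        tag = pred["entity"]  # e.g. "B-AUTH", "I-AUTH", or "O"
        if tag.startswith("B-") or (tag.startswith("I-") and current is None):
            # A B- tag (or a dangling I- tag) starts a new span.
            if current is not None:
                spans.append(current)
            current = {"label": tag[2:], "start": pred["start"], "end": pred["end"]}
        elif tag.startswith("I-") and current is not None and tag[2:] == current["label"]:
            # A matching I- tag extends the open span.
            current["end"] = pred["end"]
        else:
            # "O" or a label mismatch closes the open span.
            if current is not None:
                spans.append(current)
            current = None
    if current is not None:
        spans.append(current)
    for span in spans:
        span["text"] = text[span["start"]:span["end"]]
    return spans

# Hypothetical label set and token offsets, for illustration only.
text = "Experts at the WHO recommend vaccination."
preds = [
    {"entity": "B-AUTH", "start": 0, "end": 7},
    {"entity": "I-AUTH", "start": 8, "end": 10},
    {"entity": "I-AUTH", "start": 11, "end": 14},
    {"entity": "I-AUTH", "start": 15, "end": 18},
    {"entity": "O", "start": 19, "end": 28},
    {"entity": "O", "start": 29, "end": 40},
]
print(merge_bio_spans(preds, text))
# → [{'label': 'AUTH', 'start': 0, 'end': 18, 'text': 'Experts at the WHO'}]
```

Alternatively, the pipeline itself can do this grouping via `aggregation_strategy="simple"`, which merges tokens sharing an entity group.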
- Xet hash: 69ded3d9ae1d88eb7632880cb19c8c7bc67c765ca4492e4414bfc40d14159e74
- Size of remote file: 496 MB
- SHA256: 3fc563d28ad5d4b3421e884dbfa2e2693fe8a55de21fc3c75714de5a2c8f20db
Xet stores large files efficiently inside Git by splitting them into unique chunks, which accelerates uploads and downloads.