Feature Extraction
sentence-transformers
PyTorch
Safetensors
English
roberta
language
granite
embeddings
sparse-encoder
sparse
splade
text-embeddings-inference
Instructions for using seerware/granite-embedding-30m-sparse with supported libraries, inference providers, notebooks, and local apps.
How to use seerware/granite-embedding-30m-sparse with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("seerware/granite-embedding-30m-sparse")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
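Since the model tags mark this as a sparse (SPLADE-style) encoder, similarity between its embeddings amounts to a dot product in which only dimensions that are nonzero in both vectors contribute. A minimal sketch, using hypothetical toy vectors rather than real model outputs:

```python
# Toy "sparse" embeddings over an 8-dimensional vocabulary space.
# These values are hypothetical stand-ins for model outputs; in a real
# sparse encoder, each dimension corresponds to a vocabulary term.
a = [0.0, 1.2, 0.0, 0.0, 0.7, 0.0, 0.0, 0.3]
b = [0.0, 0.9, 0.0, 0.5, 0.6, 0.0, 0.0, 0.0]

# Dot-product similarity: only dims 1 and 4 are nonzero in both vectors,
# so the score is 1.2*0.9 + 0.7*0.6 = 1.5.
score = sum(x * y for x, y in zip(a, b))
print(round(score, 2))  # 1.5
```

Because most dimensions are zero, such scores can be computed efficiently with an inverted index, which is what makes sparse encoders attractive for retrieval at scale.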
- Xet hash: 81faf01cfa9d880714ca7a3aeaff38c6596fd5bfadccef72b59ce6435f6bb46c
- Size of remote file: 310 MB
- SHA256: 414e5f24f6a14f6c336c9ca2174c3a9dfb8a803b29ed5fc22cbbdeb696c5c2cf
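The published SHA-256 digest can be checked against a local download. A minimal sketch using only the standard library; the file path and name `sha256_of` are hypothetical, and the digest should be compared to the value listed above:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large weight files
    never have to be loaded into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example usage against a local copy of the weights (path is hypothetical):
# expected = "414e5f24f6a14f6c336c9ca2174c3a9dfb8a803b29ed5fc22cbbdeb696c5c2cf"
# assert sha256_of("model.safetensors") == expected
```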