nickprock/cross-encoder-italian-bert-stsb

Tags: Text Ranking · sentence-transformers · PyTorch · Safetensors · Italian · bert · cross-encoder · text-embeddings-inference

Instructions for using nickprock/cross-encoder-italian-bert-stsb with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use nickprock/cross-encoder-italian-bert-stsb with sentence-transformers (a plain-transformers alternative is sketched after this list):

    from sentence_transformers import CrossEncoder

    # Load the cross-encoder; the checkpoint is fetched from the Hub on first use.
    model = CrossEncoder("nickprock/cross-encoder-italian-bert-stsb")

    query = "Which planet is known as the Red Planet?"
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # Score every (query, passage) pair; predict() returns one relevance score per pair.
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
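
    Depending on your sentence-transformers version, CrossEncoder also exposes a rank() convenience method that scores and sorts the passages in one call. A minimal sketch, assuming a release recent enough to include rank():

    results = model.rank(query, passages, return_documents=True)
    for hit in results:
        # Each hit carries the passage index, its relevance score, and
        # (with return_documents=True) the passage text itself.
        print(f"{hit['score']:.4f}  {hit['text']}")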
  • Notebooks
  • Google Colab
  • Kaggle
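
Because the repository ships a standard BERT config, weights, and tokenizer (see the file list below), the checkpoint can also be loaded with plain transformers. This is a minimal sketch, not from the model card: it assumes the single-logit sequence-classification head that sentence-transformers cross-encoders typically export, with a sigmoid to map the raw logit back to a 0-1 similarity range:

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_id = "nickprock/cross-encoder-italian-bert-stsb"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()

    # A cross-encoder reads both sentences as a single two-segment input.
    inputs = tokenizer(
        "Una donna suona la chitarra.",
        "Una donna sta suonando uno strumento.",
        return_tensors="pt",
    )
    with torch.no_grad():
        logit = model(**inputs).logits.squeeze()

    # Assumption: sigmoid recovers the 0-1 score that CrossEncoder.predict()
    # would return for a single-label head.
    print(torch.sigmoid(logit).item())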
Files and versions
  • 3 contributors
History: 7 commits
Contributors: nickprock · tomaarsen (HF Staff)
Latest commit: Update model metadata to set pipeline tag to the new `text-ranking` (#2) · f26d9b9 · verified · about 1 year ago
  • .gitattributes · 1.48 kB · initial commit · about 3 years ago
  • CECorrelationEvaluator_results.csv · 400 Bytes · Upload 7 files · about 3 years ago
  • README.md · 1.2 kB · Update model metadata to set pipeline tag to the new `text-ranking` (#2) · about 1 year ago
  • config.json · 735 Bytes · Upload 7 files · about 3 years ago
  • model.safetensors · 440 MB · Adding `safetensors` variant of this model (#1) · over 1 year ago
  • pytorch_model.bin · 440 MB · Upload 7 files · about 3 years ago
  • special_tokens_map.json · 125 Bytes · Upload 7 files · about 3 years ago
  • tokenizer.json · 732 kB · Upload 7 files · about 3 years ago
  • tokenizer_config.json · 379 Bytes · Upload 7 files · about 3 years ago
  • vocab.txt · 243 kB · Upload 7 files · about 3 years ago