This model was released on 2024-09-16 and added to Hugging Face Transformers on 2026-03-18.
JinaEmbeddingsV3
Jina Embeddings v3 is a multilingual, multi-task text embedding model designed for a variety of NLP applications. Based on the XLM-RoBERTa architecture, it replaces absolute position embeddings with Rotary Position Embeddings (RoPE) to support long input sequences of up to 8192 tokens. Additionally, it features 5 built-in task-specific LoRA adapters that allow the model to generate task-specific embeddings (e.g., for retrieval vs. classification) without significantly increasing inference latency.
You can find the original Jina Embeddings v3 checkpoints under the Jina AI organization.
Click on the Jina Embeddings v3 models in the right sidebar for more examples of how to apply the model to different language tasks.
The example below demonstrates how to extract features (embeddings) with Pipeline, AutoModel, and from the command line.
import torch
from transformers import pipeline
pipeline = pipeline(
task="feature-extraction",
model="jinaai/jina-embeddings-v3-hf",
)
# Returns a list of lists containing the embeddings for each token
embeddings = pipeline("Jina Embeddings V3 is great for semantic search.")
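If you need a single vector per input rather than per-token features, you can pool the pipeline's token embeddings yourself. A minimal sketch (assuming the nested-list output described above; for task-specific sentence embeddings, prefer the adapter-aware example below):

import torch

# The pipeline returns per-token embeddings; averaging over the token axis
# gives a single sentence-level vector (naive mean pooling).
token_embeddings = torch.tensor(embeddings).squeeze(0)  # (num_tokens, hidden_size)
sentence_embedding = token_embeddings.mean(dim=0)       # (hidden_size,)
print(sentence_embedding.shape)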
Task-Specific LoRA Adapters
A key feature of JinaEmbeddingsV3 is its LoRA adapters, which allow you to tailor the output embeddings to specific use cases without the overhead of loading entirely different models.
The following tasks are supported:
- retrieval.query: Used for query embeddings in asymmetric retrieval tasks (e.g., search queries).
- retrieval.passage: Used for passage embeddings in asymmetric retrieval tasks (e.g., the documents being searched).
- separation: Used for embeddings in clustering and re-ranking applications.
- classification: Used for embeddings in classification tasks.
- text-matching: Used for embeddings in tasks that quantify similarity between two texts, such as Semantic Textual Similarity (STS) or symmetric retrieval tasks.
To generate high-quality sentence or paragraph embeddings, you need to apply mean pooling to the model’s token embeddings. Mean pooling takes all token embeddings from the model’s output and averages them, masking out the padding tokens.
Here is how you can generate sentence embeddings tailored for a retrieval query task using the AutoModel API.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
def mean_pooling(model_output, attention_mask):
# First element of model_output contains all token embeddings
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
# Sum the embeddings and divide by the number of non-padding tokens
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
sentences = [
"How is the weather today?",
"What is the current weather like today?"
]
tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3-hf")
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v3-hf")
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt").to(model.device)
# Set up the adapter mask for your specific task
task = 'retrieval_query' # Can be any of (retrieval_passage, separation, classification, text_matching) depending on the use-case.
model.load_adapter("jinaai/jina-embeddings-v3-hf", adapter_name=task, adapter_kwargs={"subfolder": task})
model.set_adapter(task)
with torch.no_grad():
model_output = model(**encoded_input)
embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)
# Output: torch.Size([2, 1024])
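Because the embeddings above are L2-normalized, cosine similarity between the two sentences reduces to a dot product. A short follow-up sketch (not part of the original example):

# Cosine similarity between the two normalized sentence embeddings
similarity = (embeddings[0] @ embeddings[1]).item()
print(f"Cosine similarity: {similarity:.4f}")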
JinaEmbeddingsV3Config
class transformers.JinaEmbeddingsV3Config
< source >( output_hidden_states: bool | None = False return_dict: bool | None = True dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None chunk_size_feed_forward: int = 0 is_encoder_decoder: bool = False id2label: dict[int, str] | dict[str, str] | None = None label2id: dict[str, int] | dict[str, str] | None = None problem_type: typing.Optional[typing.Literal['regression', 'single_label_classification', 'multi_label_classification']] = None tokenizer_class: str | None = None vocab_size: int = 250002 hidden_size: int = 1024 num_hidden_layers: int = 24 num_attention_heads: int = 16 intermediate_size: int = 4096 hidden_act: str = 'gelu' hidden_dropout_prob: float = 0.1 attention_probs_dropout_prob: float = 0.1 max_position_embeddings: int = 8194 type_vocab_size: int = 1 initializer_range: float = 0.02 layer_norm_eps: float = 1e-05 pad_token_id: int | None = 1 bos_token_id: int | None = 0 eos_token_id: int | None = 2 use_cache: bool = True classifier_dropout: float | int | None = None tie_word_embeddings: bool = True rope_parameters: transformers.modeling_rope_utils.RopeParameters | dict | None = None )
Parameters
- output_hidden_states (
bool, optional, defaults toFalse) — Whether or not the model should return all hidden-states. - return_dict (
bool, optional, defaults toTrue) — Whether to return aModelOutput(dataclass) instead of a plain tuple. - dtype (
Union[str, torch.dtype], optional) — The dtype of the weights. This attribute can be used to initialize the model to a non-default dtype (which is normally float32) and thus allow for optimal storage allocation. For example, if the saved model is float16, ideally we want to load it back using the minimal amount of memory needed to load float16 weights. - chunk_size_feed_forward (
int, optional, defaults to 0) — The chunk size of all feed forward layers in the residual attention blocks. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work?. - is_encoder_decoder (
bool, optional, defaults toFalse) — Whether the model is used as an encoder/decoder or not. - id2label (
Union[dict[int, str], dict[str, str]], optional) — A map from index (for instance prediction index, or target index) to label. - label2id (
Union[dict[str, int], dict[str, str]], optional) — A map from label to index for the model. - problem_type (
Literal[regression, single_label_classification, multi_label_classification], optional) — Problem type forXxxForSequenceClassificationmodels. Can be one of"regression","single_label_classification"or"multi_label_classification". - tokenizer_class (
str, optional) — The class name of model’s tokenizer. - vocab_size (
int, optional, defaults to250002) — Vocabulary size of the model. Defines the number of different tokens that can be represented by theinput_ids. - hidden_size (
int, optional, defaults to1024) — Dimension of the hidden representations. - num_hidden_layers (
int, optional, defaults to 24) — Number of hidden layers in the Transformer encoder. - num_attention_heads (
int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. - intermediate_size (
int, optional, defaults to 4096) — Dimension of the MLP representations. - hidden_act (
str, optional, defaults to gelu) — The non-linear activation function (function or string) in the encoder. For example, "gelu", "relu", "silu", etc. - hidden_dropout_prob (
float, optional, defaults to0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (
float, optional, defaults to0.1) — The dropout ratio for the attention probabilities. - max_position_embeddings (
int, optional, defaults to8194) — The maximum sequence length that this model might ever be used with. - type_vocab_size (
int, optional, defaults to1) — The vocabulary size of thetoken_type_ids. - initializer_range (
float, optional, defaults to0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (
float, optional, defaults to1e-05) — The epsilon used by the layer normalization layers. - pad_token_id (
int, optional, defaults to1) — Token id used for padding in the vocabulary. - bos_token_id (
int, optional, defaults to0) — Token id used for beginning-of-stream in the vocabulary. - eos_token_id (
int, optional, defaults to2) — Token id used for end-of-stream in the vocabulary. - use_cache (
bool, optional, defaults toTrue) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant ifconfig.is_decoder=Trueor when the model is a decoder-only generative model. - classifier_dropout (
Union[float, int], optional) — The dropout ratio for classifier. - tie_word_embeddings (
bool, optional, defaults toTrue) — Whether to tie weight embeddings according to model’stied_weights_keysmapping. - rope_parameters (
Union[~modeling_rope_utils.RopeParameters, dict], optional) — Dictionary containing the configuration parameters for the RoPE embeddings. The dictionary should contain a value forrope_thetaand optionally parameters used for scaling in case you want to use RoPE with longermax_position_embeddings.
This is the configuration class to store the configuration of a JinaEmbeddingsV3Model. It is used to instantiate a Jina Embeddings V3 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the jinaai/jina-embeddings-v3-hf architecture.
Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.
Examples:
>>> from transformers import JinaEmbeddingsV3Config, JinaEmbeddingsV3Model
>>> # Initializing a Jina-Embeddings-V3 jinaai/jina-embeddings-v3-hf style configuration
>>> configuration = JinaEmbeddingsV3Config()
>>> # Initializing a model (with random weights) from the jinaai/jina-embeddings-v3-hf style configuration
>>> model = JinaEmbeddingsV3Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
JinaEmbeddingsV3Model
class transformers.JinaEmbeddingsV3Model
< source >( config: JinaEmbeddingsV3Config add_pooling_layer = True )
Parameters
- config (JinaEmbeddingsV3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
- add_pooling_layer (
bool, optional, defaults toTrue) — Whether to add a pooling layer
The bare Jina Embeddings V3 Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: torch.Tensor | None = None attention_mask: torch.Tensor | None = None token_type_ids: torch.Tensor | None = None position_ids: torch.Tensor | None = None inputs_embeds: torch.Tensor | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) → BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (
torch.Tensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- token_type_ids (
torch.Tensorof shape(batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0, 1]:- 0 corresponds to a sentence A token,
- 1 corresponds to a sentence B token.
- position_ids (
torch.Tensorof shape(batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.n_positions - 1]. - inputs_embeds (
torch.Tensorof shape(batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_idsindices into associated vectors than the model’s internal embedding lookup matrix.
Returns
BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (JinaEmbeddingsV3Config) and inputs.
The JinaEmbeddingsV3Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
last_hidden_state (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.pooler_output (
torch.FloatTensorof shape(batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
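Example (a minimal usage sketch for the bare model, mirroring the examples of the other classes; exact outputs depend on the checkpoint):

>>> from transformers import AutoTokenizer, JinaEmbeddingsV3Model
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3-hf")
>>> model = JinaEmbeddingsV3Model.from_pretrained("jinaai/jina-embeddings-v3-hf")

>>> inputs = tokenizer("Jina Embeddings V3 is great for semantic search.", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Token-level hidden states of shape (batch_size, sequence_length, hidden_size)
>>> outputs.last_hidden_state.shape
...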
JinaEmbeddingsV3ForMaskedLM
class transformers.JinaEmbeddingsV3ForMaskedLM
< source >( config )
Parameters
- config (JinaEmbeddingsV3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The Jina Embeddings V3 Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: torch.LongTensor | None = None attention_mask: torch.FloatTensor | None = None token_type_ids: torch.LongTensor | None = None position_ids: torch.LongTensor | None = None inputs_embeds: torch.FloatTensor | None = None labels: torch.LongTensor | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) → MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- token_type_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0,1]:- 0 corresponds to a sentence A token,
- 1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with
type_vocab_size parameter with value >= 2. All values in this tensor should always be < type_vocab_size.
- position_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.n_positions - 1]. - inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_idsindices into associated vectors than the model’s internal embedding lookup matrix. - labels (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in[-100, 0, ..., config.vocab_size](seeinput_idsdocstring) Tokens with indices set to-100are ignored (masked), the loss is only computed for the tokens with labels in[0, ..., config.vocab_size]
Returns
MaskedLMOutput or tuple(torch.FloatTensor)
A MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (JinaEmbeddingsV3Config) and inputs.
The JinaEmbeddingsV3ForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
loss (
torch.FloatTensorof shape(1,), optional, returned whenlabelsis provided) — Masked language modeling (MLM) loss.logits (
torch.FloatTensorof shape(batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> from transformers import AutoTokenizer, JinaEmbeddingsV3ForMaskedLM
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3-hf")
>>> model = JinaEmbeddingsV3ForMaskedLM.from_pretrained("jinaai/jina-embeddings-v3-hf")
>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # retrieve index of <mask>
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
>>> tokenizer.decode(predicted_token_id)
...
>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> # mask labels of non-<mask> tokens
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
>>> outputs = model(**inputs, labels=labels)
>>> round(outputs.loss.item(), 2)
...
JinaEmbeddingsV3ForSequenceClassification
class transformers.JinaEmbeddingsV3ForSequenceClassification
< source >( config )
Parameters
- config (JinaEmbeddingsV3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The Jina Embeddings V3 Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: torch.LongTensor | None = None attention_mask: torch.FloatTensor | None = None token_type_ids: torch.LongTensor | None = None position_ids: torch.LongTensor | None = None inputs_embeds: torch.FloatTensor | None = None labels: torch.LongTensor | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) → SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- token_type_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0,1]:- 0 corresponds to a sentence A token,
- 1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with
type_vocab_size parameter with value >= 2. All values in this tensor should always be < type_vocab_size.
- position_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.n_positions - 1]. - inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_idsindices into associated vectors than the model’s internal embedding lookup matrix. - labels (
torch.LongTensorof shape(batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in[0, ..., config.num_labels - 1]. Ifconfig.num_labels == 1a regression loss is computed (Mean-Square loss), Ifconfig.num_labels > 1a classification loss is computed (Cross-Entropy).
Returns
SequenceClassifierOutput or tuple(torch.FloatTensor)
A SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (JinaEmbeddingsV3Config) and inputs.
The JinaEmbeddingsV3ForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
loss (
torch.FloatTensorof shape(1,), optional, returned whenlabelsis provided) — Classification (or regression if config.num_labels==1) loss.logits (
torch.FloatTensorof shape(batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, JinaEmbeddingsV3ForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3-hf")
>>> model = JinaEmbeddingsV3ForSequenceClassification.from_pretrained("jinaai/jina-embeddings-v3-hf")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
...
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = JinaEmbeddingsV3ForSequenceClassification.from_pretrained("jinaai/jina-embeddings-v3-hf", num_labels=num_labels)
>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
...
Example of multi-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, JinaEmbeddingsV3ForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3-hf")
>>> model = JinaEmbeddingsV3ForSequenceClassification.from_pretrained("jinaai/jina-embeddings-v3-hf", problem_type="multi_label_classification")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = JinaEmbeddingsV3ForSequenceClassification.from_pretrained(
... "jinaai/jina-embeddings-v3-hf", num_labels=num_labels, problem_type="multi_label_classification"
... )
>>> labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
JinaEmbeddingsV3ForTokenClassification
class transformers.JinaEmbeddingsV3ForTokenClassification
< source >( config )
Parameters
- config (JinaEmbeddingsV3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The Jina Embeddings V3 transformer with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: torch.LongTensor | None = None attention_mask: torch.FloatTensor | None = None token_type_ids: torch.LongTensor | None = None position_ids: torch.LongTensor | None = None inputs_embeds: torch.FloatTensor | None = None labels: torch.LongTensor | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) → TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- token_type_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0,1]:- 0 corresponds to a sentence A token,
- 1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with
type_vocab_size parameter with value >= 2. All values in this tensor should always be < type_vocab_size.
- position_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.n_positions - 1]. - inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_idsindices into associated vectors than the model’s internal embedding lookup matrix. - labels (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in[0, ..., config.num_labels - 1].
Returns
TokenClassifierOutput or tuple(torch.FloatTensor)
A TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (JinaEmbeddingsV3Config) and inputs.
The JinaEmbeddingsV3ForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
loss (
torch.FloatTensorof shape(1,), optional, returned whenlabelsis provided) — Classification loss.logits (
torch.FloatTensorof shape(batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> from transformers import AutoTokenizer, JinaEmbeddingsV3ForTokenClassification
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3-hf")
>>> model = JinaEmbeddingsV3ForTokenClassification.from_pretrained("jinaai/jina-embeddings-v3-hf")
>>> inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_token_class_ids = logits.argmax(-1)
>>> # Note that tokens are classified rather then input words which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> predicted_tokens_classes
...
>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
...
JinaEmbeddingsV3ForQuestionAnswering
class transformers.JinaEmbeddingsV3ForQuestionAnswering
< source >( config )
Parameters
- config (JinaEmbeddingsV3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The Jina Embeddings V3 transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: torch.LongTensor | None = None attention_mask: torch.FloatTensor | None = None token_type_ids: torch.LongTensor | None = None position_ids: torch.LongTensor | None = None inputs_embeds: torch.FloatTensor | None = None start_positions: torch.LongTensor | None = None end_positions: torch.LongTensor | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) → QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- token_type_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0,1]:- 0 corresponds to a sentence A token,
- 1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with
type_vocab_size parameter with value >= 2. All values in this tensor should always be < type_vocab_size.
- position_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.n_positions - 1]. - inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_idsindices into associated vectors than the model’s internal embedding lookup matrix. - start_positions (
torch.LongTensorof shape(batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. - end_positions (
torch.LongTensorof shape(batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.
Returns
QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (JinaEmbeddingsV3Config) and inputs.
The JinaEmbeddingsV3ForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
loss (
torch.FloatTensorof shape(1,), optional, returned whenlabelsis provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.start_logits (
torch.FloatTensorof shape(batch_size, sequence_length)) — Span-start scores (before SoftMax).end_logits (
torch.FloatTensorof shape(batch_size, sequence_length)) — Span-end scores (before SoftMax).hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> from transformers import AutoTokenizer, JinaEmbeddingsV3ForQuestionAnswering
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3-hf")
>>> model = JinaEmbeddingsV3ForQuestionAnswering.from_pretrained("jinaai/jina-embeddings-v3-hf")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
...
>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
>>> round(loss.item(), 2)
...