BERTForNextSentencePrediction

class lucid.models.BERTForNextSentencePrediction(config: BERTConfig)

The BERTForNextSentencePrediction class wraps a BERT encoder with a binary next-sentence prediction (NSP) head applied over the pooled BERT outputs.

Class Signature

class BERTForNextSentencePrediction(config: BERTConfig)

Parameters

  • config (BERTConfig): BERT configuration; pooling must be enabled so the pooled output is available to the NSP head.

Methods

BERTForNextSentencePrediction.forward(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) -> Tensor

Compute NSP logits from pooled BERT outputs.
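As a rough illustration of what forward computes, the NSP head can be thought of as a single linear layer mapping the pooled BERT vector to two class logits. The sketch below uses NumPy with made-up shapes and random weights; it is not lucid's actual implementation.

```python
import numpy as np

# Minimal NumPy sketch of an NSP head (hypothetical weights and sizes,
# not lucid's implementation): a linear map from the pooled [CLS]
# representation to two class logits.
rng = np.random.default_rng(0)

hidden_size = 8                               # toy size; BERT-base uses 768
pooled = rng.normal(size=(2, hidden_size))    # (batch, hidden) pooled outputs

W = rng.normal(size=(hidden_size, 2))         # NSP head weight
b = np.zeros(2)                               # NSP head bias

logits = pooled @ W + b                       # (batch, 2) NSP logits
print(logits.shape)  # (2, 2)
```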

BERTForNextSentencePrediction.get_loss(labels: Tensor, input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None, *, reduction: str | None = 'mean') -> Tensor

Compute the next-sentence prediction classification loss against labels, reduced according to reduction.
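For a two-class NSP head, the natural loss is softmax cross-entropy over the logits. The NumPy sketch below illustrates that form, including the same reduction options as the signature above; it is an assumption about the loss shape, not lucid's code.

```python
import numpy as np

def nsp_cross_entropy(logits, labels, reduction="mean"):
    """Softmax cross-entropy over (batch, 2) NSP logits (illustrative only)."""
    shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    per_example = -log_probs[np.arange(len(labels)), labels]
    if reduction == "mean":
        return per_example.mean()
    if reduction == "sum":
        return per_example.sum()
    return per_example                                     # reduction=None

logits = np.array([[2.0, 0.5], [0.1, 1.2]])
labels = np.array([0, 1])
print(float(nsp_cross_entropy(logits, labels)))
```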

BERTForNextSentencePrediction.predict_labels(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) -> Tensor

Return predicted NSP labels by argmax over logits.

BERTForNextSentencePrediction.predict_proba(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) -> Tensor

Return NSP class probabilities via softmax.
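Conceptually, predict_proba applies a row-wise softmax to the two logits, and predict_labels takes the argmax of the same logits. A small NumPy sketch of that relationship (illustrative, not the library's code):

```python
import numpy as np

def softmax(logits):
    # Row-wise softmax over the two NSP classes.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 0.5], [0.1, 1.2]])

proba = softmax(logits)         # what predict_proba returns, conceptually
labels = logits.argmax(axis=1)  # what predict_labels returns, conceptually

print(labels.tolist())  # [0, 1]
```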

BERTForNextSentencePrediction.get_accuracy(labels: Tensor, input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) -> Tensor

Compute NSP classification accuracy.
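Accuracy here is simply the fraction of argmax predictions that match the provided labels; a tiny sketch under that assumption, with hypothetical predictions:

```python
import numpy as np

# Hypothetical predicted labels vs. ground truth for four sentence pairs.
preds = np.array([0, 1, 1, 0])
labels = np.array([0, 1, 0, 0])

accuracy = (preds == labels).mean()  # fraction of correct predictions
print(accuracy)  # 0.75
```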

BERTForNextSentencePrediction.get_loss_from_text(tokenizer: BERTTokenizerFast, text_a: str, text_b: str | None = None, labels: int | Tensor = 0, *, device: Literal['cpu', 'gpu'] = 'cpu', reduction: str | None = 'mean') -> Tensor

Compute NSP loss directly from text pairs.

BERTForNextSentencePrediction.predict_labels_from_text(tokenizer: BERTTokenizerFast, text_a: str, text_b: str | None = None, *, device: Literal['cpu', 'gpu'] = 'cpu') -> Tensor

Predict NSP labels directly from text pairs.

BERTForNextSentencePrediction.predict_proba_from_text(tokenizer: BERTTokenizerFast, text_a: str, text_b: str | None = None, *, device: Literal['cpu', 'gpu'] = 'cpu') -> Tensor

Predict NSP probabilities directly from text pairs.
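The *_from_text helpers tokenize the sentence pair before calling the tensor-based methods. The sketch below shows the standard BERT pair layout they rely on, [CLS] A [SEP] B [SEP] with segment ids distinguishing the two sentences; the tokens are made up for illustration, and real ids come from BERTTokenizerFast.

```python
# Standard BERT sentence-pair layout (illustrative tokens, not real ids).
tokens_a = ["a", "cat", "sits"]
tokens_b = ["it", "rains"]

tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
# Segment 0 covers [CLS] + sentence A + the first [SEP]; segment 1 covers the rest.
token_type_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)

print(tokens)
print(token_type_ids)  # [0, 0, 0, 0, 0, 1, 1, 1]
```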

Examples

>>> import lucid.models as models
>>> model = models.bert_for_next_sentence_prediction_base()
>>> print(model)
BERTForNextSentencePrediction(...)
>>> logits = model(input_ids=input_ids)  # input_ids: a pre-built LongTensor batch
>>> loss = model.get_loss(labels=labels, input_ids=input_ids)
>>> acc = model.get_accuracy(labels=labels, input_ids=input_ids)
>>> tokenizer = models.BERTTokenizerFast.from_pretrained(".data/bert/pretrained")
>>> loss = model.get_loss_from_text(
...     tokenizer=tokenizer,
...     text_a="A cat sits on the mat.",
...     text_b="It starts to rain outside.",
...     labels=0,
...     device="gpu",
... )
>>> pred = model.predict_labels_from_text(
...     tokenizer=tokenizer,
...     text_a="A cat sits on the mat.",
...     text_b="It starts to rain outside.",
...     device="gpu",
... )