BERTForSequenceClassification

class lucid.models.BERTForSequenceClassification(config: BERTConfig, num_labels: int = 2)

The BERTForSequenceClassification class attaches a classification head to the pooled BERT representation, producing one set of logits per input sequence.

Class Signature

class BERTForSequenceClassification(config: BERTConfig, num_labels: int = 2)

Parameters

  • config (BERTConfig): BERT configuration. The pooling layer must be enabled (e.g. add_pooling_layer=True) so pooled features are available to the classification head.

  • num_labels (int, optional): Number of target classes. Default is 2.

Methods

BERTForSequenceClassification.forward(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) -> Tensor

Compute sequence-level classification logits from pooled BERT features.

BERTForSequenceClassification.get_loss(labels: Tensor, input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None, *, reduction: str | None = 'mean') -> Tensor

Compute classification loss for sequence labels.
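Conceptually, with the default reduction='mean' this is the average cross-entropy between the logits and the integer class labels. A minimal NumPy sketch of that computation (independent of lucid, for illustration only):

```python
import numpy as np

def cross_entropy(logits: np.ndarray, labels: np.ndarray, reduction: str = "mean") -> float:
    # Stable log-softmax: subtract the row-wise max before exponentiating.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the true class for each sample.
    nll = -log_probs[np.arange(len(labels)), labels]
    return float(nll.mean() if reduction == "mean" else nll.sum())

logits = np.array([[2.0, 0.5], [0.1, 1.5]])  # (batch, num_labels)
labels = np.array([0, 1])
loss = cross_entropy(logits, labels)
```

Passing reduction=None (or 'sum') changes only how per-sample losses are aggregated, not the per-sample computation.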

BERTForSequenceClassification.predict_labels(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) -> Tensor

Return predicted sequence labels by argmax over logits.
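The argmax-over-logits rule is equivalent to the following (plain NumPy, for illustration):

```python
import numpy as np

# Logits for a batch of two sequences with num_labels=2.
logits = np.array([[2.0, 0.5], [0.1, 1.5]])
# Predicted label = index of the largest logit per row.
preds = logits.argmax(axis=-1)
```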

BERTForSequenceClassification.predict_proba(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) -> Tensor

Return class probabilities via softmax.
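The softmax maps each row of logits to a probability distribution over the num_labels classes. A standalone NumPy sketch of the transformation (not lucid's implementation):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Subtract the row max for numerical stability; each output row sums to 1.
    shifted = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return shifted / shifted.sum(axis=-1, keepdims=True)

probs = softmax(np.array([[2.0, 0.5], [0.1, 1.5]]))
```

Note that argmax over probabilities and argmax over raw logits always agree, so predict_proba is only needed when calibrated scores are required.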

BERTForSequenceClassification.get_accuracy(labels: Tensor, input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) -> Tensor

Compute sequence classification accuracy.
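Accuracy here is the fraction of sequences whose argmax prediction matches the label, which reduces to a simple comparison (illustrative NumPy, not lucid's code):

```python
import numpy as np

logits = np.array([[2.0, 0.5], [0.1, 1.5], [0.3, 0.2]])
labels = np.array([0, 1, 1])
# Compare argmax predictions against labels; 2 of 3 rows match here.
accuracy = (logits.argmax(axis=-1) == labels).mean()
```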

BERTForSequenceClassification.get_loss_from_text(tokenizer: BERTTokenizerFast, text_a: str, text_b: str | None = None, labels: int | Tensor = 0, *, device: Literal['cpu', 'gpu'] = 'cpu', reduction: str | None = 'mean') -> Tensor

Compute sequence classification loss directly from raw text input.

BERTForSequenceClassification.predict_labels_from_text(tokenizer: BERTTokenizerFast, text_a: str, text_b: str | None = None, *, device: Literal['cpu', 'gpu'] = 'cpu') -> Tensor

Predict sequence labels directly from raw text input.

BERTForSequenceClassification.predict_proba_from_text(tokenizer: BERTTokenizerFast, text_a: str, text_b: str | None = None, *, device: Literal['cpu', 'gpu'] = 'cpu') -> Tensor

Predict class probabilities directly from raw text input.

Examples

>>> import lucid.models as models
>>> model = models.BERTForSequenceClassification(
...     models.BERTConfig.base(add_pooling_layer=True),
...     num_labels=2,
... )
>>> print(model)
BERTForSequenceClassification(...)
>>> # input_ids, attention_mask, and labels are assumed to be pre-built tensors
>>> logits = model(input_ids=input_ids, attention_mask=attention_mask)
>>> loss = model.get_loss(labels=labels, input_ids=input_ids, attention_mask=attention_mask)
>>> acc = model.get_accuracy(labels=labels, input_ids=input_ids, attention_mask=attention_mask)
>>> tokenizer = models.BERTTokenizerFast.from_pretrained(".data/bert/pretrained")
>>> loss = model.get_loss_from_text(
...     tokenizer=tokenizer,
...     text_a="This movie is excellent.",
...     labels=1,
...     device="gpu",
... )
>>> pred = model.predict_labels_from_text(
...     tokenizer=tokenizer,
...     text_a="This movie is excellent.",
...     device="gpu",
... )