BERTForTokenClassification

class lucid.models.BERTForTokenClassification(config: BERTConfig, num_labels: int = 2)

The BERTForTokenClassification class predicts a label for each token using the per-token hidden states of the BERT encoder.

Class Signature

class BERTForTokenClassification(config: BERTConfig, num_labels: int = 2)

Parameters

  • config (BERTConfig): Configuration of the underlying BERT encoder.

  • num_labels (int, optional): Number of target classes. Default is 2.

Methods

BERTForTokenClassification.forward(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) Tensor

Compute classification logits for every token position in the sequence.

BERTForTokenClassification.get_loss(labels: Tensor, input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None, *, ignore_index: int = -100, reduction: str | None = 'mean') Tensor

Compute token classification loss with optional ignored indices.
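The loss above is a standard token-level cross-entropy in which positions labelled with ignore_index contribute nothing. A minimal pure-Python sketch of that behaviour (the helper name token_ce_loss and the nested-list layout are illustrative assumptions, not lucid's API):

```python
import math

def token_ce_loss(logits, labels, ignore_index=-100, reduction="mean"):
    """Cross-entropy over per-token logits, skipping ignored positions.

    logits: [seq_len][num_labels] nested lists (illustrative stand-in
    for the model's logits tensor); labels: one int per token.
    """
    losses = []
    for scores, label in zip(logits, labels):
        if label == ignore_index:
            continue  # padding / sub-word continuations are skipped
        log_z = math.log(sum(math.exp(s) for s in scores))
        losses.append(log_z - scores[label])  # -log softmax(scores)[label]
    if reduction == "mean":
        return sum(losses) / len(losses)
    if reduction == "sum":
        return sum(losses)
    return losses

# Two real tokens and one ignored (padding) position.
loss = token_ce_loss([[2.0, 0.5], [0.1, 1.2], [0.0, 0.0]], [0, 1, -100])
```

The ignored position never enters the mean, so padding length does not dilute the loss.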

BERTForTokenClassification.predict_token_labels(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) Tensor

Return predicted token labels by argmax over class logits.
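"Argmax over class logits" simply means picking, at each position, the class with the highest score. A self-contained sketch of that reduction (the nested-list layout stands in for the logits tensor; the function name is illustrative):

```python
def predict_labels(logits):
    """Argmax over the class dimension for each token position."""
    return [max(range(len(scores)), key=scores.__getitem__)
            for scores in logits]

predict_labels([[0.2, 1.3], [2.1, -0.4], [0.0, 0.7]])
# -> [1, 0, 1]
```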

BERTForTokenClassification.get_accuracy(labels: Tensor, input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None, *, ignore_index: int = -100) Tensor

Compute token-level accuracy with optional ignored indices.
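Token-level accuracy counts only positions whose gold label is not ignore_index, so padded or masked tokens neither help nor hurt the score. A minimal sketch of that masking (helper name and list inputs are illustrative assumptions):

```python
def token_accuracy(pred, labels, ignore_index=-100):
    """Fraction of correctly labelled tokens, skipping ignored positions."""
    pairs = [(p, l) for p, l in zip(pred, labels) if l != ignore_index]
    return sum(p == l for p, l in pairs) / len(pairs)

# Two of the three counted tokens match; the -100 position is excluded.
token_accuracy([1, 0, 1, 0], [1, 0, 0, -100])
```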

BERTForTokenClassification.get_loss_from_text(tokenizer: BERTTokenizerFast, text_a: str, text_b: str | None = None, labels: Tensor | None = None, *, device: Literal['cpu', 'gpu'] = 'cpu', ignore_index: int = -100, reduction: str | None = 'mean') Tensor

Compute token classification loss directly from raw text input.
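Computing a token-level loss from raw text requires mapping word-level labels onto word-piece tokens, since the tokenizer may split one word into several pieces. A common convention, sketched below under the assumption that only the first piece of each word keeps its label (the exact alignment lucid applies internally is not documented here), assigns ignore_index to continuation pieces and special tokens so the loss skips them:

```python
def align_labels(word_labels, word_ids, ignore_index=-100):
    """Map word-level labels to token-level labels.

    word_ids: for each token, the index of the source word, or None
    for special tokens such as [CLS]/[SEP]. Only the first piece of
    each word keeps its label; continuations and special tokens get
    ignore_index so the loss ignores them.
    """
    out, prev = [], None
    for wid in word_ids:
        if wid is None or wid == prev:
            out.append(ignore_index)
        else:
            out.append(word_labels[wid])
        prev = wid
    return out

# Pieces: [CLS] John li ##ves in New ##York [SEP]
align_labels([1, 0, 0, 1], [None, 0, 1, 1, 2, 3, 3, None])
# -> [-100, 1, 0, -100, 0, 1, -100, -100]
```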

BERTForTokenClassification.predict_token_labels_from_text(tokenizer: BERTTokenizerFast, text_a: str, text_b: str | None = None, *, device: Literal['cpu', 'gpu'] = 'cpu') Tensor

Predict token labels directly from raw text input.

Examples

>>> import lucid.models as models
>>> model = models.BERTForTokenClassification(
...     models.BERTConfig.base(add_pooling_layer=True),
...     num_labels=2,
... )
>>> print(model)
BERTForTokenClassification(...)
>>> # input_ids, attention_mask and labels are assumed to be pre-built tensors
>>> logits = model(input_ids=input_ids, attention_mask=attention_mask)
>>> loss = model.get_loss(labels=labels, input_ids=input_ids, attention_mask=attention_mask)
>>> acc = model.get_accuracy(labels=labels, input_ids=input_ids, attention_mask=attention_mask)
>>> tokenizer = models.BERTTokenizerFast.from_pretrained(".data/bert/pretrained")
>>> pred = model.predict_token_labels_from_text(
...     tokenizer=tokenizer,
...     text_a="John lives in New York.",
...     device="gpu",
... )