RoFormerForQuestionAnswering¶
- class lucid.models.RoFormerForQuestionAnswering(config: RoFormerConfig)¶
The RoFormerForQuestionAnswering class adds a span-prediction head on top of a RoFormer encoder: it produces start and end logits over the input tokens, from which answer spans are extracted for extractive question answering.
Class Signature¶
class RoFormerForQuestionAnswering(config: RoFormerConfig)
Parameters¶
config (RoFormerConfig): RoFormer configuration used to build the encoder and the token span-prediction head.
Methods¶
- RoFormerForQuestionAnswering.forward(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) tuple[Tensor, Tensor]
- RoFormerForQuestionAnswering.get_loss(start_positions: Tensor, end_positions: Tensor, input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None, *, reduction: str | None = 'mean') Tensor
- RoFormerForQuestionAnswering.predict_spans(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) tuple[Tensor, Tensor]
- RoFormerForQuestionAnswering.get_best_spans(input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None, *, max_answer_length: int = 30) tuple[Tensor, Tensor, Tensor]
- RoFormerForQuestionAnswering.get_accuracy(start_positions: Tensor, end_positions: Tensor, input_ids: LongTensor | None = None, attention_mask: Tensor | None = None, token_type_ids: LongTensor | None = None, position_ids: LongTensor | None = None, inputs_embeds: FloatTensor | None = None) Tensor
- RoFormerForQuestionAnswering.predict_spans_from_text(tokenizer: BERTTokenizerFast, question: str, context: str, *, device: Literal['cpu', 'gpu'] = 'cpu') tuple[Tensor, Tensor]
- RoFormerForQuestionAnswering.predict_answer_from_text(tokenizer: BERTTokenizerFast, question: str, context: str, *, device: Literal['cpu', 'gpu'] = 'cpu', max_answer_length: int = 30) str
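Methods such as get_best_spans select the answer span whose combined start and end logits are maximal, subject to the span being well-formed and no longer than max_answer_length. The helper below is a minimal pure-Python sketch of that selection rule; it is an illustration of the standard extractive-QA decoding strategy, not the library's actual implementation.

```python
def best_span(start_logits, end_logits, max_answer_length=30):
    """Pick the (start, end) pair maximizing start_logit + end_logit,
    subject to start <= end and a span length of at most max_answer_length."""
    best, best_score = (0, 0), float("-inf")
    for s, s_logit in enumerate(start_logits):
        # Only consider ends that keep the span within max_answer_length.
        for e in range(s, min(s + max_answer_length, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best, best_score

# Toy logits over a 4-token sequence: the best span is tokens 1..2.
span, score = best_span([0.1, 2.0, 0.3, 0.0], [0.0, 0.5, 3.0, 0.1])
```

In batched use the same rule is applied per sequence, with padded positions masked out via attention_mask before scoring.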
Examples¶
>>> import lucid.models as models
>>> config = models.RoFormerConfig.base(vocab_size=50000)
>>> model = models.RoFormerForQuestionAnswering(config)
>>> print(model)
RoFormerForQuestionAnswering(...)
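Extractive-QA heads are commonly trained by averaging the cross-entropy over the gold start positions and the gold end positions; the sketch below illustrates that loss for a single example. This is an assumption about the semantics of get_loss, not a verified implementation detail of lucid.

```python
import math

def softmax_cross_entropy(logits, target):
    """Numerically stable cross-entropy of a single target index."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

# Assumed QA objective: mean of the start- and end-position losses.
start_logits = [0.2, 3.0, 0.1]   # gold start at index 1
end_logits = [0.0, 0.5, 2.5]     # gold end at index 2
loss = 0.5 * (softmax_cross_entropy(start_logits, 1)
              + softmax_cross_entropy(end_logits, 2))
```

With reduction='mean' (the default in get_loss's signature), the per-example losses would additionally be averaged over the batch.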