RoBERTa-based

Think of RoBERTa as a pre-trained brain for understanding English text. A RoBERTa-based model is that brain, plus a small task-specific head, fine-tuned on your data. 🧠 RoBERTa learns how language works. 🎯 Fine-tuning learns what you care about (spam vs. not spam, positive vs. negative, etc.). If you see "RoBERTa-based" in a paper or library, it almost always means: "We took RoBERTa and adapted it to our specific problem – and you can too."
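The "pre-trained encoder + small task head" idea can be sketched in a few lines. This is a toy illustration, not real RoBERTa: the encoder's pooled output is faked with a short fixed vector (real RoBERTa produces 768-dimensional embeddings), and the head weights are hypothetical values, but the head itself is exactly what fine-tuning trains — a linear layer over the encoder output followed by a softmax into class probabilities.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Stand-in for RoBERTa's pooled sentence embedding (hypothetical values;
# a real roberta-base encoder emits a 768-dim vector here).
encoder_output = [0.2, -1.3, 0.7, 0.05]

# Task-specific classification head: one weight row and one bias per class.
# These numbers are invented for illustration; fine-tuning would learn them.
head_weights = [
    [ 0.5, -0.2,  0.1,  0.0],   # negative
    [-0.1,  0.3, -0.4,  0.2],   # neutral
    [ 0.2,  0.1,  0.6, -0.3],   # positive
]
head_bias = [0.0, 0.1, -0.05]

# Linear layer: logit_c = w_c . encoder_output + b_c for each class c.
logits = [
    sum(w * x for w, x in zip(row, encoder_output)) + b
    for row, b in zip(head_weights, head_bias)
]

probs = softmax(logits)
print(probs)  # [negative, neutral, positive]
```

In practice you would not build the head by hand: libraries such as Hugging Face Transformers wrap this pattern (for example via `AutoModelForSequenceClassification`), attaching a randomly initialized head on top of pre-trained RoBERTa weights and training only on your labeled data.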


Copyright (c) 2023 Consilium Medicum

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
 


© 2018-2021 "Consilium Medicum" Publishing house