Dataset: [chizhikchi/CARES](https://huggingface.co/datasets/chizhikchi/CARES)
How to use IIC/roberta-large-bne-caresA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="IIC/roberta-large-bne-caresA")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("IIC/roberta-large-bne-caresA")
model = AutoModelForSequenceClassification.from_pretrained("IIC/roberta-large-bne-caresA")
```

This model is a fine-tuned version of roberta-large-bne on the CARES Area dataset, used as a benchmark in the paper *A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks*. The model achieves an F1 score of 0.992.
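As a quick sanity check, the pipeline can be called directly on a Spanish clinical sentence. A minimal sketch; the input text below is an invented example rather than a CARES sample, and the label names come from the model's own config:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="IIC/roberta-large-bne-caresA")

# Classify a short Spanish clinical note (invented example)
result = pipe("Paciente con dolor torácico y disnea de esfuerzo.")
print(result)  # e.g. [{'label': '...', 'score': 0.99}]
```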
Please refer to the original publication for more information.
Training hyperparameters:

| Parameter | Value |
|---|---|
| batch size | 32 |
| learning rate | 4e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
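Below is a minimal sketch of how these hyperparameters could map onto a `transformers` `Trainer` setup. The base checkpoint name, label count, dataset objects (`train_ds`, `eval_ds`, assumed already tokenized), and the macro averaging for F1 are assumptions for illustration, not details taken from the paper's training script:

```python
import numpy as np
from sklearn.metrics import f1_score
from transformers import (
    AutoConfig,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

base_model = "PlanTL-GOB-ES/roberta-large-bne"  # assumed base checkpoint
num_labels = 4  # placeholder: set to the number of area labels in CARES

# classifier dropout (0.2) is configured on the model, not the Trainer
config = AutoConfig.from_pretrained(
    base_model, num_labels=num_labels, classifier_dropout=0.2
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, config=config)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # macro averaging is an assumption; the paper may report a different variant
    return {"f1": f1_score(labels, preds, average="macro")}

args = TrainingArguments(
    output_dir="roberta-large-bne-caresA",
    per_device_train_batch_size=32,
    learning_rate=4e-5,
    warmup_ratio=0.0,
    warmup_steps=0,
    weight_decay=0.0,
    num_train_epochs=10,
    optim="adamw_torch",          # AdamW
    eval_strategy="epoch",        # `evaluation_strategy` in older transformers versions
    save_strategy="epoch",
    load_best_model_at_end=True,  # required by EarlyStoppingCallback
    metric_for_best_model="f1",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # hypothetical preprocessed CARES splits
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```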
If you use this model, please cite the original paper:

```bibtex
@article{10.1093/jamia/ocae054,
    author  = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
    title   = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
    journal = {Journal of the American Medical Informatics Association},
    volume  = {31},
    number  = {9},
    pages   = {2137--2146},
    year    = {2024},
    month   = {03},
    issn    = {1527-974X},
    doi     = {10.1093/jamia/ocae054},
    url     = {https://doi.org/10.1093/jamia/ocae054},
}
```