Havelock Orality Models
BERT-based models for analyzing text on the oral–literate spectrum, operationalizing Walter Ong's framework from Orality and Literacy (1982).
Text Classification • Document-level regression model outputting a continuous 0–1 orality score.
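A minimal sketch of how a single regression-head logit could be mapped into the 0–1 orality range. The model card above does not state the squashing function, so the sigmoid here is an assumption; scores near 1 would read as strongly oral, near 0 as strongly literate.

```python
import math

def orality_score(logit: float) -> float:
    """Map one unbounded regression-head logit to a 0-1 orality score.

    Assumption (not confirmed by the model card): the head emits a single
    logit and a sigmoid constrains it to [0, 1].
    """
    return 1.0 / (1.0 + math.exp(-logit))

print(round(orality_score(0.0), 2))  # logit 0 sits at the 0.5 midpoint
```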
HavelockAI/bert-marker-category
Text Classification • 0.3B params • Binary span classifier distinguishing oral from literate markers. Top level of the classification hierarchy.
HavelockAI/bert-marker-type
Text Classification • 0.3B params • 25-class span classifier for functional marker families (e.g., repetition, subordination, direct_address). Middle level of the classification hierarchy.
HavelockAI/bert-marker-subtype
Text Classification • 0.3B params • 70+-class span classifier for fine-grained rhetorical devices (e.g., anaphora, epistemic_hedge, vocative). Finest level of the classification hierarchy.
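The three span classifiers form a coarse-to-fine hierarchy (category → type → subtype), so a caller can keep their predictions mutually consistent by rolling each subtype up to its parent type and category. A sketch under assumptions: only the example labels named above are used, and both the epistemic_hedge parent family and the type-to-category assignments are hypothetical illustrations, not the published label inventory.

```python
# Hypothetical slice of the label hierarchy, built from the example labels
# in the notes above; the full 25-type / 70+-subtype inventory is not shown.
SUBTYPE_TO_TYPE = {
    "anaphora": "repetition",
    "vocative": "direct_address",
    "epistemic_hedge": "hedging",  # assumed parent family, for illustration
}
TYPE_TO_CATEGORY = {
    "repetition": "oral",          # assumed, following Ong's oral traits
    "direct_address": "oral",
    "hedging": "literate",         # assumed category assignment
    "subordination": "literate",
}

def roll_up(subtype: str) -> tuple[str, str, str]:
    """Resolve a fine-grained subtype to its (subtype, type, category) path."""
    marker_type = SUBTYPE_TO_TYPE[subtype]
    return subtype, marker_type, TYPE_TO_CATEGORY[marker_type]

print(roll_up("anaphora"))  # ('anaphora', 'repetition', 'oral')
```

Routing through lookup tables like these also gives a cheap consistency check: if the type model and subtype model disagree on a span, the subtype's rolled-up type can break the tie.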
HavelockAI/bert-token-classifier
Token Classification • BIO token tagger for span detection across 70+ marker types (145 labels). Identifies marker boundaries in running text; trained with focal loss to handle class imbalance.
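The BIO scheme above implies a decoding step: collapsing per-token tags back into labeled spans. A minimal sketch of that decoder, assuming the usual "O" / "B-label" / "I-label" tag strings (145 labels would correspond to O plus B-/I- pairs for 72 marker types, consistent with "70+"); the lenient handling of a stray I- tag is a common choice, not necessarily this model's.

```python
def bio_to_spans(tags):
    """Collapse a BIO tag sequence into (label, start, end_exclusive) spans.

    A span closes on "O", on a new "B-", or on an "I-" with a different
    label; a stray "I-" with no open span starts a new one (lenient decode).
    """
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag == "O" or tag.startswith("B-") or (label and tag[2:] != label):
            if label is not None:
                spans.append((label, start, i))
                start = label = None
        if tag.startswith("B-") or (tag.startswith("I-") and label is None):
            start, label = i, tag[2:]
    if label is not None:  # flush a span still open at end of sequence
        spans.append((label, start, len(tags)))
    return spans

print(bio_to_spans(["O", "B-anaphora", "I-anaphora", "O", "B-vocative"]))
# [('anaphora', 1, 3), ('vocative', 4, 5)]
```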