---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:1485
- loss:BinaryCrossEntropyLoss
base_model: cross-encoder/ms-marco-MiniLM-L12-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- accuracy
- accuracy_threshold
- f1
- f1_threshold
- precision
- recall
- average_precision
model-index:
- name: CrossEncoder based on cross-encoder/ms-marco-MiniLM-L12-v2
  results:
  - task:
      type: cross-encoder-classification
      name: Cross Encoder Classification
    dataset:
      name: compliance eval
      type: compliance-eval
    metrics:
    - type: accuracy
      value: 0.9636363636363636
      name: Accuracy
    - type: accuracy_threshold
      value: -1.7519245147705078
      name: Accuracy Threshold
    - type: f1
      value: 0.9662921348314608
      name: F1
    - type: f1_threshold
      value: -2.8691844940185547
      name: F1 Threshold
    - type: precision
      value: 0.9555555555555556
      name: Precision
    - type: recall
      value: 0.9772727272727273
      name: Recall
    - type: average_precision
      value: 0.9939968601076801
      name: Average Precision
---
# CrossEncoder based on cross-encoder/ms-marco-MiniLM-L12-v2

This is a Cross Encoder model finetuned from [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details

### Model Description

- **Model Type:** Cross Encoder
- **Base model:** cross-encoder/ms-marco-MiniLM-L12-v2
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
### Model Sources

- **Documentation:** Sentence Transformers Documentation
- **Documentation:** Cross Encoder Documentation
- **Repository:** Sentence Transformers on GitHub
- **Hugging Face:** Cross Encoders on Hugging Face
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.

```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("cross_encoder_model_id")

# Get scores for pairs of texts
pairs = [
    ['the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.', "so, like, your portfolio risk profile is out of date, and i've got a flag here saying it needs renewal before we can do any new trades."],
    ['to identify risk misalignment trades, the system must flag a risk mismatch whenever the product risk rating exceeds the client risk profile.', "so, it's a solid choice, but i gotta mention, there's a bit of a risk mismatch between the fund's rating and your own suitability score, so it's a bit of a hurdle."],
    ['the system identifies an execution only wrapper when the order initiation confirms that this trade is performed on an execution only basis with no advice given.', "so... uh... let's just do it, but it's execution only, you know? no advice was provided, so you're on your own with the strategy on this one, i'm so rushed."],
    ['the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.', "hey, um, checking the dashboard here and it says your prp is overdue, you know, we haven't updated it in a bit and it's flagged."],
    ['to identify risk misalignment trades, the system must flag a risk mismatch whenever the product risk rating exceeds the client risk profile.', "don't worry about the specifics right now the main thing is getting the allocation because it's oversubscribed so can i confirm the trade"],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.',
    [
        "so, like, your portfolio risk profile is out of date, and i've got a flag here saying it needs renewal before we can do any new trades.",
        "so, it's a solid choice, but i gotta mention, there's a bit of a risk mismatch between the fund's rating and your own suitability score, so it's a bit of a hurdle.",
        "so... uh... let's just do it, but it's execution only, you know? no advice was provided, so you're on your own with the strategy on this one, i'm so rushed.",
        "hey, um, checking the dashboard here and it says your prp is overdue, you know, we haven't updated it in a bit and it's flagged.",
        "don't worry about the specifics right now the main thing is getting the allocation because it's oversubscribed so can i confirm the trade",
    ],
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
## Evaluation

### Metrics

#### Cross Encoder Classification

- Dataset: `compliance-eval`
- Evaluated with `CrossEncoderClassificationEvaluator`
| Metric | Value |
|---|---|
| accuracy | 0.9636 |
| accuracy_threshold | -1.7519 |
| f1 | 0.9663 |
| f1_threshold | -2.8692 |
| precision | 0.9556 |
| recall | 0.9773 |
| average_precision | 0.994 |
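Because the loss was applied with an `Identity` activation, `model.predict` returns raw logits rather than probabilities, so the `accuracy_threshold` and `f1_threshold` above live in logit space (note that both are negative: mildly negative logits still count as positive under the accuracy-optimal cut). A minimal sketch of applying the reported threshold, where the `scores` list is made up for illustration and does not come from the model:

```python
import math

# Accuracy-optimal decision threshold reported above (logit space).
ACCURACY_THRESHOLD = -1.7519245147705078

def classify(logits, threshold=ACCURACY_THRESHOLD):
    """Binarize raw cross-encoder logits with the tuned threshold."""
    return [score >= threshold for score in logits]

def to_probability(logit):
    """Optionally map a logit to a pseudo-probability via a sigmoid."""
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical raw scores, e.g. the output of model.predict(pairs)
scores = [3.2, -0.4, -5.1]
print(classify(scores))                  # [True, True, False]
print(round(to_probability(3.2), 4))     # 0.9608
```

Swap in the `f1_threshold` when false negatives are more costly than false positives.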
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 1,485 training samples
- Columns: `sentence1`, `sentence2`, and `label`
- Approximate statistics based on the first 1000 samples:
  |         | sentence1 | sentence2 | label |
  |:--------|:----------|:----------|:------|
  | type    | string    | string    | float |
  | details | <ul><li>min: 135 characters</li><li>mean: 302.95 characters</li><li>max: 725 characters</li></ul> | <ul><li>min: 97 characters</li><li>mean: 179.3 characters</li><li>max: 463 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> |
- Samples:
  | sentence1 | sentence2 | label |
  |:----------|:----------|:------|
  | <code>the rm must use the instrument_code to identify the soft lock disclosure and inform the client that 'this fund has a soft lock-up duration of xx months. you will be subjected to an early redemption charge of x% by the fund house if you were to redeem the fund within the soft lock-up period.' and, if applicable, that 'the fund is currently still within the soft lock-up period. should you wish to proceed with the redemption, you will incur an early redemption charge of x% by the fund house.'</code> | <code>there's a bit of a soft lock on this one, you know, if you take the money out too soon there's a small charge, but it's no big deal.</code> | <code>0.0</code> |
  | <code>the system identifies an execution only wrapper when the order initiation confirms that this trade is performed on an execution only basis with no advice given.</code> | <code>i can't believe how expensive flights have become lately, it's just ridiculous. let's just go ahead with that stock buy, i'll put it through as we discussed earlier, it's a simple execution for us.</code> | <code>0.0</code> |
  | <code>for a client initiated (ci) wrapper where the order initiation is 'client initiated', the bank must confirm that 'this trade is based on your initiated interest in underlying and product type' or 'this trade is based on your initiated interest in underlying or product type'.</code> | <code>exactly, i-i see what you mean, and since you're the one who initiated this conversation about the emerging markets fund, i'll just log that as your interest. did you ever get that classic car fixed up?</code> | <code>1.0</code> |
- Loss: `BinaryCrossEntropyLoss` with these parameters:
  ```json
  {
      "activation_fn": "torch.nn.modules.linear.Identity",
      "pos_weight": null
  }
  ```
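The character-length statistics in the table above can be reproduced with a few lines of Python. A small sketch, where the sample texts are hypothetical stand-ins for the real `sentence1` column:

```python
def char_stats(texts):
    """Min / mean / max character lengths, as reported in the dataset tables."""
    lengths = [len(t) for t in texts]
    return {
        "min": min(lengths),
        "mean": round(sum(lengths) / len(lengths), 2),
        "max": max(lengths),
    }

# Hypothetical stand-ins for a dataset column:
texts = [
    "the system must identify any risk profile that has expired.",
    "to identify risk misalignment trades, the system must flag a mismatch.",
]
print(char_stats(texts))
```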
### Evaluation Dataset

#### Unnamed Dataset

- Size: 165 evaluation samples
- Columns: `sentence1`, `sentence2`, and `label`
- Approximate statistics based on the first 165 samples:
  |         | sentence1 | sentence2 | label |
  |:--------|:----------|:----------|:------|
  | type    | string    | string    | float |
  | details | <ul><li>min: 135 characters</li><li>mean: 302.44 characters</li><li>max: 725 characters</li></ul> | <ul><li>min: 97 characters</li><li>mean: 178.02 characters</li><li>max: 631 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.53</li><li>max: 1.0</li></ul> |
- Samples:
  | sentence1 | sentence2 | label |
  |:----------|:----------|:------|
  | <code>the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.</code> | <code>so, like, your portfolio risk profile is out of date, and i've got a flag here saying it needs renewal before we can do any new trades.</code> | <code>1.0</code> |
  | <code>to identify risk misalignment trades, the system must flag a risk mismatch whenever the product risk rating exceeds the client risk profile.</code> | <code>so, it's a solid choice, but i gotta mention, there's a bit of a risk mismatch between the fund's rating and your own suitability score, so it's a bit of a hurdle.</code> | <code>1.0</code> |
  | <code>the system identifies an execution only wrapper when the order initiation confirms that this trade is performed on an execution only basis with no advice given.</code> | <code>so... uh... let's just do it, but it's execution only, you know? no advice was provided, so you're on your own with the strategy on this one, i'm so rushed.</code> | <code>1.0</code> |
- Loss: `BinaryCrossEntropyLoss` with these parameters:
  ```json
  {
      "activation_fn": "torch.nn.modules.linear.Identity",
      "pos_weight": null
  }
  ```
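For reference, `BinaryCrossEntropyLoss` with `activation_fn` set to `Identity` and `pos_weight: null` amounts to standard binary cross-entropy on the raw logit, as in PyTorch's `BCEWithLogitsLoss` with equal class weighting. A scalar sketch of the computation:

```python
import math

def bce_with_logits(logit, label):
    """Binary cross-entropy on a raw logit; pos_weight=None means weight 1."""
    p = 1.0 / (1.0 + math.exp(-logit))            # sigmoid
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# A logit of 0 (probability 0.5) gives the maximum-uncertainty loss, ln 2.
print(round(bce_with_logits(0.0, 1.0), 4))  # 0.6931
```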
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
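With the `linear` scheduler, `warmup_ratio: 0.1` means the learning rate climbs linearly from 0 to 2e-05 over the first 10% of steps, then decays linearly to 0. A minimal sketch of that schedule; the total step count of 279 is inferred from 1,485 samples at batch size 16 over 3 epochs and is illustrative, not confirmed:

```python
def linear_schedule_with_warmup(step, total_steps, base_lr=2e-05, warmup_ratio=0.1):
    """Linear warmup for the first warmup_ratio of steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    # Decay from base_lr at the end of warmup down to 0 at total_steps.
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

total = 279  # ~93 steps/epoch x 3 epochs (assumed)
print(linear_schedule_with_warmup(0, total))    # 0.0
print(linear_schedule_with_warmup(27, total))   # peak: 2e-05
print(linear_schedule_with_warmup(279, total))  # 0.0
```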
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | compliance-eval_average_precision |
|---|---|---|---|---|
| 0.1075 | 10 | 1.9119 | 1.1985 | 0.6783 |
| 0.2151 | 20 | 0.9675 | 1.0970 | 0.6914 |
| 0.3226 | 30 | 0.7458 | 0.4725 | 0.8480 |
| 0.4301 | 40 | 0.5308 | 0.4431 | 0.8849 |
| 0.5376 | 50 | 0.3888 | 0.4183 | 0.9097 |
| 0.6452 | 60 | 0.3477 | 0.3472 | 0.9325 |
| 0.7527 | 70 | 0.3082 | 0.3005 | 0.9524 |
| 0.8602 | 80 | 0.3364 | 0.2682 | 0.9647 |
| 0.9677 | 90 | 0.3069 | 0.2345 | 0.9804 |
| 1.0753 | 100 | 0.2636 | 0.1847 | 0.9886 |
| 1.1828 | 110 | 0.2577 | 0.1793 | 0.9847 |
| 1.2903 | 120 | 0.1793 | 0.1940 | 0.9826 |
| 1.3978 | 130 | 0.19 | 0.2333 | 0.9794 |
| 1.5054 | 140 | 0.1788 | 0.1615 | 0.9858 |
| 1.6129 | 150 | 0.1277 | 0.1576 | 0.9862 |
| 1.7204 | 160 | 0.1851 | 0.1399 | 0.9903 |
| 1.8280 | 170 | 0.1652 | 0.1056 | 0.9947 |
| 1.9355 | 180 | 0.085 | 0.1077 | 0.9949 |
| 2.0430 | 190 | 0.1111 | 0.0943 | 0.9955 |
| 2.1505 | 200 | 0.09 | 0.1137 | 0.9955 |
| 2.2581 | 210 | 0.1136 | 0.1222 | 0.9934 |
| 2.3656 | 220 | 0.0703 | 0.1155 | 0.9937 |
| 2.4731 | 230 | 0.0866 | 0.1147 | 0.9935 |
| 2.5806 | 240 | 0.1104 | 0.1089 | 0.9943 |
| 2.6882 | 250 | 0.1523 | 0.1141 | 0.9940 |
| 2.7957 | 260 | 0.1189 | 0.1297 | 0.9943 |
| 2.9032 | 270 | 0.0479 | 0.1365 | 0.9940 |
- The bold row denotes the saved checkpoint.
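With `load_best_model_at_end: True`, the trainer reloads the checkpoint that scored best on the tracked metric (validation loss unless `metric_for_best_model` was set, which this card does not confirm). A small hypothetical helper for scanning rows of the table above, here a subset as `(step, val_loss, avg_precision)` triples:

```python
# A few rows from the training log above, as (step, val_loss, avg_precision).
logs = [
    (170, 0.1056, 0.9947),
    (180, 0.1077, 0.9949),
    (190, 0.0943, 0.9955),
    (200, 0.1137, 0.9955),
    (270, 0.1365, 0.9940),
]

# Candidate "best" checkpoints under two plausible criteria.
best_by_loss = min(logs, key=lambda row: row[1])
best_by_ap = max(logs, key=lambda row: row[2])
print(best_by_loss)  # (190, 0.0943, 0.9955)
```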
### Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.2.0
- Transformers: 4.57.3
- PyTorch: 2.9.0+cu126
- Accelerate: 1.12.0
- Datasets: 4.0.0
- Tokenizers: 0.22.2
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```