---
dataset_info:
  features:
  - name: query_id
    dtype: string
  - name: corpus_id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: train
    num_bytes: 675736
    num_examples: 24927
  - name: valid
    num_bytes: 39196
    num_examples: 1400
  - name: test
    num_bytes: 35302
    num_examples: 1261
  download_size: 316865
  dataset_size: 750234
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---
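As the schema above shows, each row is a qrels-style relevance judgment: a `query_id`, a `corpus_id`, and an integer `score`. A minimal sketch of loading and inspecting the splits with 🤗 Datasets follows; `<org>/<dataset-name>` is a placeholder for this dataset's Hub repository id, not a real path:

```python
# Minimal sketch: load the splits and inspect the qrels-style rows.
# "<org>/<dataset-name>" is a placeholder; substitute this dataset's Hub repo id.
from datasets import load_dataset

ds = load_dataset("<org>/<dataset-name>")
print(ds)  # DatasetDict with train/valid/test splits

row = ds["train"][0]
print(row["query_id"], row["corpus_id"], row["score"])  # one relevance judgment

# Build a qrels mapping: query_id -> {corpus_id: score}
qrels = {}
for r in ds["test"]:
    qrels.setdefault(r["query_id"], {})[r["corpus_id"]] = int(r["score"])
```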
This is the dataset version used by the CoIR evaluation framework. You can evaluate a retrieval model on it with the code below:
```python
import coir
from coir.data_loader import get_tasks
from coir.evaluation import COIR
from coir.models import YourCustomDEModel

model_name = "intfloat/e5-base-v2"

# Load the model
model = YourCustomDEModel(model_name=model_name)

# Get tasks
# All tasks: ["codetrans-dl", "stackoverflow-qa", "apps", "codefeedback-mt",
#             "codefeedback-st", "codetrans-contest", "synthetic-text2sql",
#             "cosqa", "codesearchnet", "codesearchnet-ccr"]
tasks = get_tasks(tasks=["codetrans-dl"])

# Initialize evaluation
evaluation = COIR(tasks=tasks, batch_size=128)

# Run evaluation
results = evaluation.run(model, output_folder=f"results/{model_name}")
print(results)
```
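`YourCustomDEModel` is a stand-in for whatever dual-encoder you want to benchmark. As a rough sketch, and assuming CoIR follows the BEIR-style retrieval interface (a model exposing `encode_queries` and `encode_corpus` that return embedding arrays), a custom model could look like the one below; the class name `MyDEModel` and the sentence-transformers backend are illustrative assumptions, not part of the CoIR API:

```python
# A hedged sketch of a custom dual-encoder, assuming the BEIR-style interface
# (encode_queries / encode_corpus returning embedding arrays). Names here are
# illustrative; adapt them to the interface CoIR actually expects.
from sentence_transformers import SentenceTransformer

class MyDEModel:
    def __init__(self, model_name: str):
        self.model = SentenceTransformer(model_name)

    def encode_queries(self, queries, batch_size=128, **kwargs):
        # e5-style models expect a "query: " prefix on queries.
        return self.model.encode([f"query: {q}" for q in queries], batch_size=batch_size)

    def encode_corpus(self, corpus, batch_size=128, **kwargs):
        # Assumes BEIR-style corpus entries: dicts with "title" and "text" fields.
        texts = [f"passage: {d.get('title', '')} {d['text']}".strip() for d in corpus]
        return self.model.encode(texts, batch_size=batch_size)
```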