Results discussion

#6
by vince62s - opened

I scored wmt24 EN-DE with this model.

Beam: 1 => Cometkiwi-XL: 67.24
Beam: 4 => Cometkiwi-XL: 67.88
This was done without splitting segments (997 segments).
I tested DeepL: 68.92
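For anyone wanting to reproduce this kind of scoring, it would look roughly like the following with the `unbabel-comet` package. This is only a sketch, not the script actually used here; the checkpoint name is an assumption, and the `predict` call needs a GPU and a model download.

```python
def build_inputs(sources, hypotheses):
    """Pair each source segment with its translation in the dict
    format COMET's predict() expects (no references for QE models)."""
    return [{"src": s, "mt": h} for s, h in zip(sources, hypotheses)]

def score(sources, hypotheses):
    # Heavy dependency imported lazily; requires `pip install unbabel-comet`.
    from comet import download_model, load_from_checkpoint

    # Assumed CometKiwi-XL checkpoint name; swap in whichever QE model you use.
    model_path = download_model("Unbabel/wmt23-cometkiwi-da-xl")
    model = load_from_checkpoint(model_path)
    out = model.predict(build_inputs(sources, hypotheses), batch_size=8, gpus=1)
    return out.system_score  # corpus-level score
```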

There is quite a gap compared to the 70B model.

Also, I tried to fine-tune the model for EN-DE only, but as soon as I fine-tune (using NT14-21), performance degrades. I am wondering whether the fact that the model was annealed with a low learning rate makes it difficult to fine-tune any further.
EDIT: After re-reading Ricardo's comment on my other post, it probably overfits to a single prompt.
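If single-prompt overfitting is the issue, one common mitigation is to vary the instruction template across training examples instead of fixing one. A minimal sketch (the templates below are illustrative, not the ones Tower actually uses):

```python
import random

# Illustrative EN-DE instruction templates; vary them per example so the
# model does not latch onto one fixed prompt during fine-tuning.
TEMPLATES = [
    "Translate the following text from English into German.\nEnglish: {src}\nGerman:",
    "English: {src}\nTranslate to German:",
    "Translate into German: {src}",
]

def make_prompt(src, rng=random):
    """Pick one of several templates at random for each training example."""
    return rng.choice(TEMPLATES).format(src=src)
```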


Unbabel org
edited Oct 14

Hi @vince62s, the 70B model and this model are not really comparable. This TowerMistral uses basically the same training recipe we used with Llama 2 (same continued pretraining + SFT data). The 70B model we submitted to the shared task had better pretraining and SFT data.

They are not meant to be comparable. This model was released to address some of the reviews we got on the Tower paper. The models from the shared task are not released yet.

Another thing is that our WMT24 submission uses MBR with COMET-22 for the final output.
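For context, MBR decoding picks, from a set of candidate translations, the one with the highest expected utility against the other candidates used as pseudo-references. A minimal sketch with a toy utility (the WMT24 submission would use COMET-22 scores as the utility instead):

```python
def mbr_select(candidates, utility):
    """Minimum Bayes Risk selection: return the candidate with the highest
    average utility against all other candidates (pseudo-references)."""
    best, best_score = None, float("-inf")
    for hyp in candidates:
        total = sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
        avg = total / max(len(candidates) - 1, 1)
        if avg > best_score:
            best, best_score = hyp, avg
    return best

def toy_overlap(hyp, ref):
    """Toy word-overlap utility, for illustration only."""
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / max(len(h | r), 1)
```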

I got it, but I expected this model to be in the 69ish range. Anyway I'll play more with it to boost it a bit.

Still struggling a bit. Can you share the learning rate you used for instruction fine-tuning with TowerBlocks?

Is this the right info?

[screenshot: image.png]
