---
license: llama2
language:
- en
- nl
tags:
- academic
- university
---

# Model Card for the Erasmian Language Model

ELM is a community-driven large language model tailored to the research and education needs of Erasmus University (EUR, Netherlands) students and staff.

The model draws inspiration from ChatGPT and Llama in terms of architecture, but it aims to be privacy-sensitive, environmentally conscious, and built from and for the Erasmus community. Its key points are described below.

We hope that the ELM experience becomes a template for community-driven, decentralized, and purposeful AI development and application.

## Model Details

### Model Description

- The underlying language model is trained and fine-tuned on academic output from Erasmus University, such as scientific papers and student theses.
- Training and fine-tuning the model is a joint effort of students and staff, transparent to all parties involved.
- The prompt-response examples used to fine-tune the model come from students and staff, not from crowdsourcing services.
- What counts as "better" model output is also defined from the perspective of research and education.

The true richness of ELM lies in the way its training data is generated. The "state-of-the-art" model may change quickly, but quality data retains its relevance and ensures that ELM and its future iterations serve the needs of the community that nurtured it.

- **Developed by:** João Gonçalves, Nick Jelicic
- **Funded by:** Convergence AI and Digitalization, Erasmus Trustfonds
- **Model type:** Llama-2 Instruct
- **Language(s) (NLP):** English, Dutch
- **License:** Llama 2
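Since ELM is a Llama-2 Instruct model, prompts are typically wrapped in the standard Llama-2 chat template before being passed to the model. The sketch below shows that generic template; the `format_llama2_prompt` helper and the example strings are illustrative assumptions, and the exact system prompt conventions ELM expects may differ.

```python
def format_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and user message in the generic Llama-2
    instruct template (<<SYS>> block inside an [INST] ... [/INST] turn)."""
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

# Hypothetical example: a research-assistance prompt for the Erasmus community.
prompt = format_llama2_prompt(
    "You are a helpful assistant for Erasmus University students and staff.",
    "Summarize the main argument of my thesis introduction.",
)
```

The resulting string can then be tokenized and passed to the model for generation (for example via the Hugging Face `transformers` library).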

### Model Sources

- **Repository:** https://github.com/Joaoffg/ELM
- **Paper:** https://arxiv.org/abs/2408.06931
- **Demo:** https://huggingface.co/spaces/Joaoffg/Joaoffg-ELM