---
title: README
emoji: 📉
colorFrom: red
colorTo: gray
sdk: static
pinned: false
---

# SeaLLMs - Large Language Models for Southeast Asia

Welcome to the SeaLLMs project - a family of large language models tailored for Southeast Asian languages including English, Chinese, Indonesian, Vietnamese, Thai, Tagalog, Malay, Burmese, Khmer, Lao, Tamil, and Javanese.

Unlike models designed primarily for high-resource languages such as English, SeaLLMs aims to democratize access to advanced language technologies for regional and potentially under-represented languages. We aim to develop models capable of handling a variety of tasks in SEA languages, while prioritizing safety and trustworthiness within the regional context.

## SeaLLMs Models

- [🔥NEW🔥] SeaLLMs-v3 is here! Try it out in the demo! A minimal loading sketch follows this list.
  - SeaLLMs/SeaLLMs-v3-7B-Chat: the latest 7B chat version of SeaLLMs-v3, achieving SOTA performance on diverse tasks while specifically enhanced to be more trustworthy, with reduced hallucination and safer responses.
  - SeaLLMs/SeaLLMs-v3-1.5B-Chat: the latest 1.5B chat version of SeaLLMs-v3, fine-tuned to follow human instructions effectively. It is designed to be resource-efficient, making it practical to run even on a laptop.
  - SeaLLMs/SeaLLMs-v3-1.5B and SeaLLMs/SeaLLMs-v3-7B: two base models for conducting customized fine-tuning on your own data.
- SeaLLMs/SeaLLM-7B-v2.5: a newer SeaLLM-7B model, achieving SOTA performance among 7B models on many world-knowledge and reasoning tasks in SEA languages.
- SeaLLMs/SeaLLM-7B-v2: the most significant upgrade since SeaLLM-13B, at half the size, with stronger performance across diverse multilingual tasks, from world knowledge and math reasoning to instruction following.
- SeaLLMs/SeaLLM-13B-Chat: a chatbot optimized for Vietnamese 🇻🇳, Indonesian 🇮🇩, Thai 🇹🇭, Malay 🇲🇾, Khmer 🇰🇭, Lao 🇱🇦, Tagalog 🇵🇭, and Burmese 🇲🇲.
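
These checkpoints are hosted on the Hugging Face Hub, so they can typically be loaded with the standard transformers chat API. The snippet below is a minimal sketch, assuming SeaLLMs-v3-7B-Chat ships a chat template with its tokenizer (as most recent chat models do); check the model card for the officially recommended usage.

```python
# Minimal chat sketch using the standard Hugging Face transformers API.
# The model id is real; the prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SeaLLMs/SeaLLMs-v3-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "What can you help me with?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and print only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```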

## Multilingual Evaluations for SEA

- SeaExam: assesses model performance with human-exam-style benchmarks, reflecting a model's world knowledge and reasoning abilities (see the sketch below).
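
As a rough illustration of what human-exam-style evaluation involves, the sketch below formats a multiple-choice question into a prompt and scores the first answer letter a model emits. The question text and prompt format here are purely hypothetical; SeaExam's actual data and harness live in its own repository.

```python
# Hypothetical multiple-choice scoring sketch; illustrative only, not
# SeaExam's actual prompt format or evaluation harness.

def format_mc_prompt(question: str, options: list[str]) -> str:
    """Render a question and its options as an exam-style prompt."""
    letters = "ABCD"
    lines = [question]
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(options)]
    lines.append("Answer with a single letter (A, B, C, or D).")
    return "\n".join(lines)

def is_correct(model_output: str, gold_letter: str) -> bool:
    """Score by the first A-D letter found in the model's output."""
    for ch in model_output.strip().upper():
        if ch in "ABCD":
            return ch == gold_letter
    return False

# Example with a hypothetical question:
prompt = format_mc_prompt(
    "Which city is the capital of Indonesia?",
    ["Jakarta", "Hanoi", "Bangkok", "Manila"],
)
print(prompt)
print(is_correct("A. Jakarta", "A"))  # True
```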

## Quick Links