- nikravan/glm-4vq • Document Question Answering • Updated • 1.64k • 23
- deepseek-ai/deepseek-coder-33b-instruct • Text Generation • Updated • 28.2k • 457
- DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence • Paper • 2401.14196 • Published • 47
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter • Paper • 1910.01108 • Published • 14
Collections including paper arxiv:1910.01108

- Attention Is All You Need • Paper • 1706.03762 • Published • 44
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding • Paper • 1810.04805 • Published • 14
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter • Paper • 1910.01108 • Published • 14
- Language Models are Few-Shot Learners • Paper • 2005.14165 • Published • 11

- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter • Paper • 1910.01108 • Published • 14
- distilbert/distilbert-base-uncased-finetuned-sst-2-english • Text Classification • Updated • 9.2M • 614
- FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design • Paper • 2401.14112 • Published • 18
- GPT-4V(ision) is a Human-Aligned Evaluator for Text-to-3D Generation • Paper • 2401.04092 • Published • 21

- SMOTE: Synthetic Minority Over-sampling Technique • Paper • 1106.1813 • Published • 1
- Scikit-learn: Machine Learning in Python • Paper • 1201.0490 • Published • 1
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation • Paper • 1406.1078 • Published
- Distributed Representations of Sentences and Documents • Paper • 1405.4053 • Published

- Mistral 7B • Paper • 2310.06825 • Published • 47
- BloombergGPT: A Large Language Model for Finance • Paper • 2303.17564 • Published • 20
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding • Paper • 1810.04805 • Published • 14
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter • Paper • 1910.01108 • Published • 14

- Attention Is All You Need • Paper • 1706.03762 • Published • 44
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding • Paper • 1810.04805 • Published • 14
- RoBERTa: A Robustly Optimized BERT Pretraining Approach • Paper • 1907.11692 • Published • 7
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter • Paper • 1910.01108 • Published • 14

- Llemma: An Open Language Model For Mathematics • Paper • 2310.10631 • Published • 50
- Mistral 7B • Paper • 2310.06825 • Published • 47
- Qwen Technical Report • Paper • 2309.16609 • Published • 34
- BTLM-3B-8K: 7B Parameter Performance in a 3B Parameter Model • Paper • 2309.11568 • Published • 10