
Mingxing Li

MingxingLi

AI & ML interests

None yet


Organizations

None yet

MingxingLi's activity

Reacted to reach-vb's post with 🔥 5 days ago
What a brilliant week for Open Source AI!

Qwen 2.5 Coder by Alibaba - 0.5B / 1.5B / 3B / 7B / 14B / 32B (Base + Instruct) code generation LLMs, with the 32B tackling giants like Gemini 1.5 Pro and Claude Sonnet
Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f

LLM2CLIP from Microsoft - Leverage LLMs to train ultra-powerful CLIP models! Boosts performance over the previous SOTA by ~17%
microsoft/llm2clip-672323a266173cfa40b32d4c

Athene v2 Chat & Agent by NexusFlow - SoTA general LLM fine-tuned from Qwen 2.5 72B that excels at Chat + Function Calling / JSON / Agents
Nexusflow/athene-v2-6735b85e505981a794fb02cc

Orca Agent Instruct by Microsoft - 1 million instruct pairs covering text editing, creative writing, coding, reading comprehension, etc. - permissively licensed
microsoft/orca-agentinstruct-1M-v1

Ultravox by FixieAI - 70B / 8B models approaching GPT-4o level; pick any LLM and train an adapter with Whisper as the audio encoder
reach-vb/ultravox-audio-language-model-release-67373b602af0a52b2a88ae71

JanusFlow 1.3B by DeepSeek - next iteration of their unified multimodal LLM Janus, now with Rectified Flow
deepseek-ai/JanusFlow-1.3B

Common Corpus by PleIAs - 2,003,039,184,047 multilingual, commercially permissive, high-quality tokens!
PleIAs/common_corpus

I'm sure I missed a lot, can't wait for the next week!

Put down in comments what I missed! 🤗
Reacted to fdaudens's post with 🚀 5 days ago
🚀 @Qwen just dropped 2.5-Turbo!

1M-token context (that's the entire "War and Peace"!) + 4.3x faster processing speed. Same price, way more power 🔥

Check out the demo: Qwen/Qwen2.5-Turbo-1M-Demo

#QWEN
Reacted to fffiloni's post with 🔥 10 days ago
upvoted an article 24 days ago
Accelerating LLM Inference: Fast Sampling with Gumbel-Max Trick
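The Gumbel-Max trick in the article title can be sketched in a few lines: adding i.i.d. Gumbel(0, 1) noise to unnormalized log-probabilities and taking the argmax draws an exact sample from the corresponding softmax distribution, with no explicit normalization. A minimal stdlib-only Python sketch (the function name is illustrative, not taken from the article):

```python
import math
import random

def gumbel_max_sample(logits, rng=random):
    """Draw one index i with probability softmax(logits)[i] via the
    Gumbel-Max trick: argmax_i (logits[i] + G_i), G_i ~ Gumbel(0, 1)."""
    # Gumbel(0, 1) noise from inverse CDF: -log(-log(U)), U ~ Uniform(0, 1)
    noisy = [x - math.log(-math.log(rng.random())) for x in logits]
    return max(range(len(logits)), key=noisy.__getitem__)
```

Because the noise is drawn fresh on each call, repeated calls reproduce the softmax frequencies, which is why the trick can stand in for an explicit normalize-then-sample step during LLM decoding.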

By cxdu
liked a Space 5 months ago