Lim Chee Kin (limcheekin)
AI & ML interests: On-device LLM Apps
Organizations: None yet
limcheekin's activity
- DeepSeek-Coder-V2.5-Lite · 13 · #3 opened 29 days ago by smcleod
- Deployment? · 3 · #24 opened 7 months ago by huggingface9837
- You should try training a model with 2B parameters and context length 32000. · 1 · #3 opened 9 months ago by win10
- Fantastic work guys! · 2 · #1 opened 9 months ago by dillfrescott
- Free and ready to use zephyr-7B-beta-GGUF model as OpenAI API compatible endpoint · 9 · #3 opened 11 months ago by limcheekin
- Free and ready to use rocket-3B-GGUF model as OpenAI API compatible endpoint · #4 opened 10 months ago by limcheekin
- Free and ready to use rocket-3B-GGUF model as OpenAI API compatible endpoint · 1 · #5 opened 10 months ago by limcheekin
- Free and ready to use gorilla-openfunctions-v1-GGUF model as OpenAI API compatible endpoint · #1 opened 10 months ago by limcheekin
- Free and ready to use gorilla-openfunctions-v1-GGUF model as OpenAI API compatible endpoint · 1 · #3 opened 10 months ago by limcheekin
- Free and ready to use neural-chat-7B-v3-1-GGUF model as OpenAI API compatible endpoint · #3 opened 11 months ago by limcheekin
- Free and ready to use neural-chat-7B-v3-1-GGUF model as OpenAI API compatible endpoint · #6 opened 11 months ago by limcheekin
- Free and ready to use Yi-6B-200K-GGUF model as OpenAI API compatible endpoint · #2 opened 11 months ago by limcheekin
- Free and ready to use Yi-6B-200K-GGUF model as OpenAI API compatible endpoint · 1 · #2 opened 11 months ago by limcheekin
- Free and ready to use OpenHermes-2.5-Mistral-7B-GGUF model as OpenAI API compatible endpoint · 5 · #2 opened 11 months ago by limcheekin
- Free and ready to use deepseek-coder-6.7B-instruct-GGUF model as OpenAI API compatible endpoint · #2 opened 11 months ago by limcheekin
- Free and ready to use deepseek-coder-6.7B-instruct-GGUF model as OpenAI API compatible endpoint · 1 · #3 opened 11 months ago by limcheekin
- Free and ready to use OpenHermes-2.5-Mistral-7B-GGUF model as OpenAI API compatible endpoint · #1 opened 11 months ago by limcheekin
- Free and ready to use openchat_3.5-GGUF model as OpenAI API compatible endpoint · 1 · #4 opened 11 months ago by limcheekin
- Free and ready to use openchat_3.5-GGUF model as OpenAI API compatible endpoint · #7 opened 11 months ago by limcheekin
- Free and ready to use zephyr-7B-beta-GGUF model as OpenAI API compatible endpoint · 12 · #2 opened 11 months ago by limcheekin
- The tokenizer class you load from this checkpoint is 'GPT4Tokenizer' · 1 · #2 opened 12 months ago by limcheekin
- Free and ready to use Mistral-7B-OpenOrca-GGUF model as OpenAI API compatible endpoint · 2 · #3 opened about 1 year ago by limcheekin
- Ready to use Mistral-7B-Instruct-v0.1-GGUF model as OpenAI API compatible endpoint · 13 · #29 opened about 1 year ago by limcheekin
- Free and ready to use Mistral-7B-OpenOrca-GGUF model as OpenAI API compatible endpoint · #6 opened about 1 year ago by limcheekin
- Ready to use Mistral-7B-Instruct-v0.1-GGUF model as OpenAI API compatible endpoint · 2 · #2 opened about 1 year ago by limcheekin
- Should this work with llama2.c from karpathy? · 5 · #2 opened about 1 year ago by JulianX4
- License? · 1 · #1 opened about 1 year ago by limcheekin
- Can the embeddings created by the model be used with the OpenAI completion or chatCompletion API? · 1 · #1 opened about 1 year ago by limcheekin
- Any Plans for CT2 models of Llama 2-7B-Chat? · 3 · #1 opened about 1 year ago by 0xSarkar
- Any plan for int8? · 4 · #1 opened about 1 year ago by 0xSarkar
- Prompt format support · 3 · #2 opened about 1 year ago by limcheekin
- Missing tokenizer.model file and other errors · 1 · #1 opened about 1 year ago by limcheekin
- Update prompt format · 8 · #4 opened about 1 year ago by mike-ravkine
- Model Fine-tuning and Optimized Prompt Templates · 1 · #2 opened about 1 year ago by limcheekin
- Model Fine-tuning and Optimized Prompt Templates · 1 · #9 opened about 1 year ago by limcheekin
- Update README.md · 1 · #1 opened over 1 year ago by Anthropoy
- No correct pipeline_tag · 7 · #2 opened over 1 year ago by Anthropoy
- Update README.md · #4 opened over 1 year ago by Anthropoy
- Low quality in information retrieval for embeddings created by the model? · 4 · #1 opened over 1 year ago by limcheekin
- Generates nonsense responses · 9 · #1 opened over 1 year ago by vasilee
- int8 model consumes the same GPU memory as the default model · 2 · #15 opened over 1 year ago by Iamexperimenting
- Understanding LLMs · 1 · #14 opened over 1 year ago by Iamexperimenting