Titus von Koeller

Titus-von-Koeller

AI & ML interests

NN Quantization, Generative AI, LLMs, alignment, algorithms for social justice, ethical humanism, mitigating gender bias, audio compression, AGI

Recent Activity

liked a model about 1 month ago
TheBloke/Qwen-7B-Chat-AWQ

Titus-von-Koeller's activity

reacted to osanseviero's post with 🔥 8 months ago
Diaries of Open Source. Part 12 🤗

🚀Alibaba releases Qwen1.5-MoE-A2.7B, an interesting MoE with 2.7B activated parameters and 64 experts
Blog: https://qwenlm.github.io/blog/qwen-moe/
Demo: Qwen/qwen1.5-MoE-A2.7B-Chat-demo
Models: https://hf.co/Qwen
GitHub: https://github.com/QwenLM/Qwen1.5

🎵VoiceCraft, SOTA speech editing and text to speech
GitHub: https://github.com/jasonppy/VoiceCraft
Model: pyp1/VoiceCraft

🐍 AI21 Labs releases Jamba, a pretrained SSM-Transformer MoE that allows a large context window (256K) and high throughput
Blog: https://www.ai21.com/blog/announcing-jamba
Model: ai21labs/Jamba-v0.1

✨ Berkeley releases Starling-LM-7B, an RLHF-ed model, and -RM-34B, a Yi-based reward model very good for its size
Starling Beta: Nexusflow/Starling-LM-7B-beta
Starling RM: Nexusflow/Starling-RM-34B

🖥️Stability releases Stable Code Instruct 3B, an instruct model for code generation
Blog: https://stability.ai/news/introducing-stable-code-instruct-3b
Demo: stabilityai/stable-code-instruct-3b
Report: https://stability.ai/s/Stable_Code_TechReport_release.pdf

📚Common Corpus: the largest public domain dataset for training LLMs
Blog: https://hf.co/blog/Pclanglais/common-corpus
Dataset: https://hf.co/collections/PleIAs/common-corpus-65d46e3ea3980fdcd66a5613

Misc:
⚡GaLore: a very memory-efficient technique that allows pretraining models on consumer GPUs https://hf.co/blog/galore
📈Moirai, foundation models for time series forecasting https://hf.co/collections/Salesforce/moirai-10-r-models-65c8d3a94c51428c300e0742
🔥 Mistral-ORPO-Capybara-7K, a high-quality Mistral fine-tune using ORPO, a new alignment technique kaist-ai/mistral-orpo-capybara-7k
🤯APISR, an anime super-resolution upscaling model HikariDawn/APISR
posted an update 8 months ago
🔥 Level up your model training w/ GaLore + Transformers for SOTA results on consumer-grade hardware!

⬇️ 82.5% less optimizer state memory footprint without performance degradation, achieved by projecting the weight gradients onto a low-rank subspace.

👩🏿‍💻 Install via pip install "transformers>=4.39.0" galore-torch. #ProudlyGpuPoor

Integrating GaLore into the training of large language models (LLMs) is a significant advance in memory efficiency and in the democratization of AI research. By enabling billion-parameter models to be trained on consumer-grade hardware, shrinking the optimizer-state memory footprint, and leveraging low-rank projection-matrix techniques, GaLore opens new horizons for researchers and practitioners with limited access to high-end computational resources.

🔬 Find out more about GaLore and investigate lots of juicy technical details: https://huggingface.co/blog/galore
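
As a minimal sketch of what this looks like in practice (following the blog post above; the model and dataset names below are placeholders, not recommendations, and this assumes transformers>=4.39.0 and galore-torch are installed):

import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Tiny placeholder dataset, just to make the sketch self-contained.
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

args = TrainingArguments(
    output_dir="./galore-test",
    per_device_train_batch_size=1,
    max_steps=100,
    optim="galore_adamw",                  # GaLore-wrapped AdamW
    optim_target_modules=["attn", "mlp"],  # modules whose gradients get the low-rank projection
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

Other GaLore optimizer variants (e.g. the 8-bit and layer-wise ones) are listed in the blog post.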

🤗 Huge thanks to everyone involved ❤️:

• authors: @jiaweizhao @Kyriection @beidic Zhangyang Wang @animakumar @tydsh
• community contributors: @hiyouga @mdouglas and others!
• @ybelkada for taking such swift action in composing and coordinating necessary PRs to get this live at ⚡ speed!

🏗️📈 Super rewarding to see how @timdettmers' work with optimizers is being built upon to achieve even greater heights!

🚧 Actually, there is ongoing work to integrate GaLore into bitsandbytes and optimize memory efficiency even further 💪. We'll keep you posted!
reacted to their post with 🤗 9 months ago
posted an update 9 months ago
We just released bitsandbytes==0.43.0 📦, with these significant new additions:

‣ 🛫 FSDP+QLoRA support (alpha release)
◦ now anyone with 2 powerful gaming GPUs can fine-tune 70B param models at home (see the 4-bit loading sketch after this list)!
◦ in collab with Jeremy Howard + team @ answer.ai
◦ answer.ai blogpost: https://www.answer.ai/posts/2024-03-06-fsdp-qlora.html
◦ example repo: https://github.com/AnswerDotAI/fsdp_qlora/

‣ 🌈⊞ Official Windows support
◦ now via a simple pip install "bitsandbytes>=0.43.0"

‣ 📄 Huge docs update:
https://huggingface.co/docs/bitsandbytes/main
◦ Be sure to check out the optimizers and the API docs
◦ ... even more upcoming ...
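
As a minimal sketch of the 4-bit loading side of the FSDP+QLoRA setup referenced above (the FSDP/accelerate launch configuration and the LoRA training loop are out of scope here, and the model id is just a placeholder):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the actual matmuls
    bnb_4bit_quant_storage=torch.bfloat16,  # storage dtype, so FSDP can shard the quantized weights
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",            # placeholder 70B model
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,             # keep dtypes consistent with quant_storage for FSDP
)

The answer.ai example repo linked above contains the full training scripts and launch configs.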

Under the hood there are many other improvements, thanks to extensive maintenance activity, community contributions by super active + knowledgeable volunteers ✨ 🚀, and the official sponsorship by Hugging Face that makes all this possible 🤗 ❤️ 🌍

We would greatly appreciate any further community contributions, be it help with refactorings, exterminating flaky tests, writing docstrings, tutorials, or new features. Don't be shy, just contact us and we'll see where this leads us:
https://github.com/TimDettmers/bitsandbytes/discussions

Have a great weekend everyone!
reacted to their post with ❤️ 9 months ago
posted an update 9 months ago
Exciting news for bitsandbytes! We're thrilled to announce the release of the initial version of our new documentation: https://huggingface.co/docs/bitsandbytes/main/en/index.

Please let us know what you think: your feedback is essential to us, and we would greatly appreciate any insights on how we can further enhance it. Even better, we'd be happy to merge your contributions filling in some of the blanks: docstrings in particular are still a big topic, and there are several placeholders that would be super helpful to have filled in. Please post your feedback here: https://github.com/TimDettmers/bitsandbytes/discussions/1090

Since taking over maintenance together with Younes Belkada, and since Hugging Face graciously agreed to support the library, we've already made enormous strides and community contributions have sprung back to life: it's so motivating to have so many knowledgeable contributors who often invest extensive free time and bring their unique ideas to the table.

A notable example is our ongoing effort to enable cross-platform support, including Intel, Apple Silicon, AMD, and Windows. Simultaneously, we're working diligently to streamline community contributions in BNB, making the process more accessible for everyone. A heartfelt thank you to all who have contributed thus far!

With Hugging Face committed to supporting bitsandbytes going forward, we're sure to promptly respond to and integrate additional community contributions.

Looking forward to growing bitsandbytes further as part of the FOSS community: pushing forward the state of the art in democratization of AI!