Celebrating One Year of #SauerkrautLM with Two Groundbreaking Releases!
We're thrilled to announce the release of SauerkrautLM-v2-14b in two specialized versions: VAGOsolutions/SauerkrautLM-v2-14b-SFT and VAGOsolutions/SauerkrautLM-v2-14b-DPO. Built on the robust Qwen2.5-14B foundation, these models represent a significant leap forward in multilingual AI capabilities.
Technical Breakthroughs:
- Innovative three-phase fine-tuning approach
- Two-step Spectrum SFT + one-step Spectrum DPO optimization phase for enhanced performance
- Balanced German and English language capabilities
- Advanced function calling, almost on par with Claude-3.5-Sonnet-20240620
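To make the function-calling claim concrete, here is a minimal sketch of the interaction pattern such models target. The tool schema and the model outputs below are hypothetical illustrations in the common OpenAI-style layout, not taken from the model card:

```python
import json

# Hypothetical tool definition in the common OpenAI-style schema
# (illustrative only; consult the model card for the exact format).
weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def parse_tool_call(model_output: str):
    """Parse a JSON tool call emitted by the model; return None when the
    model instead answered in plain text (i.e. judged the request
    irrelevant to the available tools)."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return None  # plain-text answer, no tool call
    if call.get("name") != weather_tool["name"]:
        return None
    return call.get("arguments", {})

# A well-formed (hypothetical) tool call:
print(parse_tool_call('{"name": "get_weather", "arguments": {"city": "Berlin"}}'))
# -> {'city': 'Berlin'}
# An irrelevant request answered in plain text, not as a tool call:
print(parse_tool_call("I cannot help with that."))
# -> None
```

Irrelevance handling, as in the second call above, is exactly what the SauerkrautLM-Fermented-Irrelevance-GER-DPO dataset mentioned below is built to improve.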
German Language Excellence:
What sets this release apart is our unique achievement in simultaneously improving both German and English capabilities. Through our specialized training approach with over 1.2B tokens across two phases, we've managed to:
- Enhance German language understanding and generation (SFT version > DPO version)
- Maintain authentic German linguistic nuances
- Improve cross-lingual capabilities
- Preserve cultural context awareness
Training Innovation:
Our three-phase approach targeted specific layer percentages (15%, 20% and 25%) with carefully curated datasets, including:
- Mathematics-focused content (selected by a proprietary classifier)
- High-quality German training data
- Specialized function-calling datasets
- Premium multilingual content
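Spectrum-style targeted fine-tuning trains only a chosen fraction of layers ranked by how informative they are. As a toy sketch of that selection step (the per-layer scores here are made up; the real method derives a signal-to-noise measure from the model weights):

```python
# Toy illustration of Spectrum-style layer selection: train only the
# top fraction of layers ranked by a per-layer score (values below are
# invented for illustration, not measured from any model).
def select_layers(snr_by_layer: dict, fraction: float) -> list:
    """Return the names of the top `fraction` of layers by score."""
    ranked = sorted(snr_by_layer, key=snr_by_layer.get, reverse=True)
    k = max(1, round(len(ranked) * fraction))
    return ranked[:k]

scores = {f"layer_{i}": s for i, s in enumerate(
    [0.9, 0.2, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3, 0.5, 0.05])}
print(select_layers(scores, 0.25))
# -> ['layer_0', 'layer_4']
```

Raising the fraction from 15% to 20% to 25% across the three phases, as described above, simply widens this selected set at each stage.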
Community Contribution:
We're also releasing two new datasets in the coming days:
1. SauerkrautLM-Fermented-GER-DPO: 3,300 high-quality German training samples
2. SauerkrautLM-Fermented-Irrelevance-GER-DPO: 2,000 specialized samples for optimized function-call irrelevance handling
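DPO datasets of this kind typically store preference pairs. A minimal sketch of what one record might look like, in the common prompt/chosen/rejected layout (the field names and the sample texts are assumptions, not taken from the dataset cards):

```python
# Hypothetical German preference record in the prompt/chosen/rejected
# layout commonly used for DPO training (contents are illustrative).
record = {
    "prompt": "Erkläre kurz, was ein neuronales Netz ist.",  # "Briefly explain what a neural network is."
    "chosen": "Ein neuronales Netz ist ein Modell aus verbundenen Schichten ...",
    "rejected": "Neural network is many computer that think.",
}

def is_valid_dpo_record(rec: dict) -> bool:
    """Check that a record carries the three fields DPO training expects
    and that the chosen and rejected responses actually differ."""
    required = {"prompt", "chosen", "rejected"}
    return required <= rec.keys() and rec["chosen"] != rec["rejected"]

print(is_valid_dpo_record(record))
# -> True
```

During DPO the model is nudged toward the chosen response and away from the rejected one for each prompt, which is how the datasets above can sharpen both German quality and function-call irrelevance behavior.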
Thank you to our incredible community and partners who have supported us throughout this journey. Here's to another year of AI innovation!