title | link | article
---|---|---|
Gambling In The Probability Space | https://hf.co/blog/TuringsSolutions/gambling-the-probability-space | 8. References |
Taxonomy Completion with Embedding Quantization and an LLM-based Pipeline: A Case Study in Computational Linguistics | https://hf.co/blog/dcarpintero/taxonomy-completion | Resources |
How to Optimize TTFT of 8B LLMs with 1M Tokens to 20s | https://hf.co/blog/iofu728/ttft-1m-20s | How to Optimize KV Cache in Decoding? |
Create a Diffusers-compatible Dataset for Stable Diffusion Fine-tuning | https://hf.co/blog/nroggendorff/create-diffusers-dataset | Step 4. 🎉 You Did It! 🎉 (<strong>finally</strong>) |
Bringing Open-Source Models to Spreadsheets 🚀 | https://hf.co/blog/fdaudens/hugging-face-on-sheets | What's Next? |
Introducing HelpingAI-Flash: Emotionally Intelligent Conversational AI for All Devices | https://hf.co/blog/Abhaykoul/helpingai-flash | Conclusion |
Introduction to State Space Models (SSM) | https://hf.co/blog/lbourdois/get-on-the-ssm-train | <span style="color: #FF0000"> <strong>References</strong> </span> |
Announcing Finance Commons and the Bad Data Toolbox: Pioneering Open Data and Advanced Document Processing | https://hf.co/blog/Pclanglais/finance-commons-bad-data-toolbox | Will we solve PDF parsing before AGI? |
Mixedbread 🤝 deepset: Announcing our New German/English Embedding Model | https://hf.co/blog/shadeMe/deepset-mixedbread-new-german-embedding-model | Use it with the Mixedbread Embedders |
Swarm Neural Networks (SNN) for Image Generation | https://hf.co/blog/TuringsSolutions/snndiffusion | References |
Querying Datasets with the Datasets Explorer Chrome Extension | https://hf.co/blog/cfahlgren1/querying-datasets-with-sql-in-the-browser | It's Open Source 🤗 |
Deploy hundreds of open source models on one GPU using LoRAX | https://hf.co/blog/macadeliccc/deploy-hundreds-of-models-on-one-gpu | Citations |
Structured Harm Reporting in AI: New Research Paper at AIES and DEFCON event! | https://hf.co/blog/evijit/coordinatedflaws-aies-defcon | Looking Ahead |
Unleash ML Power on iOS: Apple Silicon Optimization Secrets | https://hf.co/blog/fguzman82/coreml-async-batch-prediction | References |
How OpenGPT 4o works | https://hf.co/blog/KingNish/opengpt-4o-working | Conclusion |
Market Research using AI Evolutionary Algorithms and Multimodal Regression | https://hf.co/blog/tonyassi/market-research-ai | About Me |
Introducing Ghost 8B Beta: A Game-Changing Language Model | https://hf.co/blog/lamhieu/introducing-ghost-8b-beta-a-game-changing-language |
<p>
<strong>Ghost 8B Beta</strong>, a groundbreaking language model, is poised to revolutionize the field of natural language processing. Developed with a focus on exceptional multilingual capabilities, superior knowledge acquisition, and cost-effectiveness, this model promises to unlock a new era of AI-powered applications.</p>
<p><strong>Exceptional Multilingual Support:</strong> Ghost 8B Beta boasts support for a diverse range of languages, enabling seamless communication and understanding across global contexts. This feature empowers developers to create applications that transcend language barriers, fostering inclusivity and accessibility.</p>
<p><strong>Superior Knowledge Capabilities:</strong> The model's exceptional knowledge base is built upon a robust foundation of training data, equipping it with the ability to comprehend complex concepts, answer intricate questions, and provide insightful analysis. This unparalleled knowledge prowess opens up exciting possibilities for advanced information retrieval, personalized recommendations, and intelligent decision-making.</p>
<p><strong>Cost-Effectiveness:</strong> Ghost 8B Beta is designed to be cost-efficient, making it accessible to a wider range of organizations and individuals. This affordability allows for the widespread adoption of AI-powered solutions, democratizing access to transformative technologies and fostering innovation across various industries.</p>
<p><strong>Explore the Potential:</strong></p>
<p>To learn more about this groundbreaking language model, visit the official website: <a href="https://ghost-x.org/docs/models/ghost-8b-beta/" rel="nofollow">Ghost 8B Beta</a> or explore the online demo platforms:</p>
<ul>
<li><strong>Ghost 8B Beta (β, 8k) on Spaces:</strong> <a href="https://huggingface.co/spaces/lamhieu/ghost-8b-beta-8k">https://huggingface.co/spaces/lamhieu/ghost-8b-beta-8k</a>.</li>
<li><strong>Ghost 8B Beta (β, 128k) on Spaces:</strong> <a href="https://huggingface.co/spaces/lamhieu/ghost-8b-beta-128k">https://huggingface.co/spaces/lamhieu/ghost-8b-beta-128k</a>.</li>
</ul>
<p><strong>Stay Connected:</strong></p>
<p>Stay updated with the latest information and engage with the Ghost X community by following them on Twitter: <a href="https://x.com/ghostx_ai" rel="nofollow">@ghostx_ai</a> and visiting their official website: <a href="https://ghost-x.org" rel="nofollow">ghost-x.org</a>.</p>
<p>This model represents a significant step forward in the field of language modeling, offering a powerful tool for developers and researchers alike to create innovative solutions that benefit society as a whole.</p>
<p><strong>Halôo:</strong></p>
<p>Welcome to the multilingual world! 🌎</p>
<p><strong>🇺🇸 English:</strong> Hello there! I’m Ghost 8B Beta, a friendly AI assistant ready to help you navigate the world of language. What’s the most popular fast food chain in the US? 🍔 (Because who doesn’t love a good burger?)</p>
<p><strong>🇫🇷 French:</strong> Bonjour! Je suis Ghost 8B Beta, un assistant IA amical prêt à vous aider à explorer le monde des langues. Que pensez-vous de la cuisine française? 🍽 (Parce que la cuisine française est un art!)</p>
<p><strong>🇮🇹 Italian:</strong> Ciao! Sono Ghost 8B Beta, un assistente AI amichevole pronto a aiutarvi a esplorare il mondo delle lingue. Qual è la cosa più divertente da fare a Roma? 🏰 (Perché Roma è una città ricca di storia e cultura!)</p>
<p><strong>🇪🇸 Spanish:</strong> Hola! Soy Ghost 8B Beta, un asistente IA amigable listo para ayudarte a explorar el mundo de los idiomas. ¿Qué te parece el flamenco? 🎶 (Porque el flamenco es una forma de arte vibrante y emocional!)</p>
<p><strong>🇵🇹 Portuguese:</strong> Olá! Sou Ghost 8B Beta, um assistente IA amigável pronto para ajudar você a explorar o mundo das línguas. O que você acha do futebol brasileiro? 🏟️ (Porque o futebol brasileiro é uma paixão nacional!)</p>
<p><strong>🇩🇪 German:</strong> Hallo! Ich bin Ghost 8B Beta, ein freundlicher AI-Assistent, der Ihnen beim Erkunden der Welt der Sprachen helfen möchte. Was ist das beliebteste Getränk in Deutschland? 🍻 (Weil das Bier in Deutschland eine Kultur ist!)</p>
<p><strong>🇻🇳 Vietnamese:</strong> Xin chào! Tôi là Ghost 8B Beta, một trợ lý AI thân thiện muốn giúp bạn khám phá thế giới ngôn ngữ. Bạn có thích ăn phở không? 🍜 (Vì phở là một món ăn đặc trưng của Việt Nam!)</p>
<p><strong>🇰🇷 Korean:</strong> 안녕하세요! 저는 Ghost 8B Beta, 친절한 AI 어시스턴트입니다. 언어 세계를 탐험하는 데 도움을 드릴게요. 한국의 대표적인 음식은 무엇인가요? 🍜 (한국 음식은 맛과 향이 풍부해서 꼭 한번 맛보세요!)</p>
<p><strong>🇨🇳 Chinese:</strong> 你好! 我是 Ghost 8B Beta,一位友好的 AI 助手,欢迎您来到语言的世界。您觉得中国菜怎么样? 🍜 (中国菜种类繁多,口味丰富,相信您一定会喜欢!)</p>
<p><strong>Screenshots of the model:</strong></p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/qLJjr150sy05Aw5nUOi9B.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/qLJjr150sy05Aw5nUOi9B.png"/></a></p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/2RHGCR87tZcFZKatPIblA.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/2RHGCR87tZcFZKatPIblA.png"/></a></p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/qOz5geTBXLv-3eTKJIGAV.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/qOz5geTBXLv-3eTKJIGAV.png"/></a></p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/63VysTbUpovnBU5VYp2j6.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/63VysTbUpovnBU5VYp2j6.png"/></a></p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/ZgB9bgF5mRqbFV2fqcFxa.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/ZgB9bgF5mRqbFV2fqcFxa.png"/></a></p>
<p><em>This content was written by Ghost 8B Beta based on draft content at the model's about page.</em></p>
|
The Rise of Agentic Data Generation | https://hf.co/blog/mlabonne/agentic-datagen | Conclusion |
Mixture of Agents Model (MAM): An AI-Driven Full-Stack Development Team | https://hf.co/blog/dnnsdunca/mam-model | References |
Is AI carbon footprint worrisome? | https://hf.co/blog/as-cle-bert/is-ai-carbon-footprint-worrisome | References |
Optimisation d'un système RAG pour la recherche sémantique | https://hf.co/blog/Woziii/rag-semantic-search-space-huggingface | 3. Intégration dans gradio. |
In-browser LLM app in pure Python: Gemini Nano + Gradio-Lite | https://hf.co/blog/whitphx/in-browser-llm-gemini-nano-gradio-lite | Further reading and references |
Introducing HelpingAI-15B: Emotionally Intelligent Conversational AI | https://hf.co/blog/Abhaykoul/introducing-helpingai-15b | Emotional Quotient (EQ) |
How to run Gemini Nano locally in your browser | https://hf.co/blog/Xenova/run-gemini-nano-in-your-browser | References: |
MMLU-Pro-NoMath | https://hf.co/blog/sam-paech/mmlu-pro-nomath | And also to the original MMLU which MMLU-Pro heavily draws from: |
RegMix: Data Mixture as Regression for Language Model Pre-training | https://hf.co/blog/SivilTaram/regmix | Try RegMix on your dataset |
MInference 1.0: 10x Faster Million Context Inference with a Single GPU | https://hf.co/blog/liyucheng/minference10 | View more information about MInference |
Enhancing Search Capabilities for Non-English Datasets in the Dataset Viewer | https://hf.co/blog/asoria/fts-dataset-viewer | Considerations |
Introducing the Polish ASR Leaderboard (PAL) and Benchmark Intended Grouping of Open Speech (BIGOS) Corpora | https://hf.co/blog/michaljunczyk/introducing-polish-asr-leaderboard | References |
Metric and Relative Monocular Depth Estimation: An Overview. Fine-Tuning Depth Anything V2 👐 📚 | https://hf.co/blog/Isayoften/monocular-depth-estimation-guide | References |
The Great LLM Showdown: Amy's Quest for the Perfect LLM | https://hf.co/blog/wolfram/the-great-llm-showdown | A Call for Improvement |
BM25 for Python: Achieving high performance while simplifying dependencies with *BM25S*⚡ | https://hf.co/blog/xhluca/bm25s | Does BM25S replace other libraries? |
arXiv实用技巧,如何让你的paper关注度变高? | https://hf.co/blog/JessyTsu1/arxiv-trick | 4. arXiv卡点提交 |
Swarm Neural Networks: Revolutionizing Function and API Call Execution | https://hf.co/blog/TuringsSolutions/swarmneuralnetworks | References |
_Repetita iuvant_: how to improve AI code generation | https://hf.co/blog/as-cle-bert/repetita-iuvant-how-to-improve-ai-code-generation | References |
RAG chatbot using llama3 | https://hf.co/blog/not-lain/rag-chatbot-using-llama3 | Dedication |
GPM: Generative Password Manager | https://hf.co/blog/apehex/gpm | Improvements |
ColPali: Efficient Document Retrieval with Vision Language Models 👀 | https://hf.co/blog/manu/colpali | Acknowledgments |
Advanced RAG: Fine-Tune Embeddings from HuggingFace for RAG | https://hf.co/blog/lucifertrj/finetune-embeddings | Co-Author: Shivaya Pandey |
Image-based search engine | https://hf.co/blog/not-lain/image-retriever | Acknowledgement |
EU Training Data Transparency: A Proposal for a Sufficiently Detailed Summary 📑📚🖼️🇪🇺 | https://hf.co/blog/yjernite/eu-data-template | Additional Resources |
Transformers | https://hf.co/blog/Esmail-AGumaan/attention-is-all-you-need | Citation: |
Systems of Representation Are All You Need | https://hf.co/blog/TuringsSolutions/systemsofrepresentation | EuclAId 750 Google Pro 1.0 |
A Guide to Designing New Functional Proteins and Improving Protein Function, Stability, and Diversity with Generative AI | https://hf.co/blog/AmelieSchreiber/protein-optimization-and-design | Concluding Remarks |
Building a Neural Network Classifier from the Ground Up: A Step-by-Step Guide | https://hf.co/blog/dcarpintero/building-a-neural-network-for-image-classification | References |
How I train a LoRA: m3lt style training overview | https://hf.co/blog/alvdansen/training-lora-m3lt | Final Observations |
Financial Analysis with Langchain and CrewAI Agents | https://hf.co/blog/herooooooooo/financial-analysis-with-langchain-and-crewai | CrewAI |
Train custom AI models with the trainer API and adapt them to 🤗 | https://hf.co/blog/not-lain/trainer-api-and-mixin-classes | Outro |
Formatting Datasets for Chat Template Compatibility | https://hf.co/blog/nroggendorff/format-mayo | Usage |
Part 2: Enhancing the Motoku LLM Retrieval System with OpenAI Embeddings and Prompt-based Retrieval | https://hf.co/blog/theeseus-ai/motoku-retrieval-2 | Conclusion |
Finetuning clip can be done locally with decent results (even if you are GPU poor). | https://hf.co/blog/herooooooooo/clip-finetune |
<p>
This is my journal of following clipfinetune. I have already posted this on Medium, but I am slowly migrating my content here.</p>
<pre><code class="language-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> datasets
<span class="hljs-keyword">from</span> dataclasses <span class="hljs-keyword">import</span> dataclass, field
<span class="hljs-keyword">from</span> typing <span class="hljs-keyword">import</span> <span class="hljs-type">Optional</span>
<span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt
<span class="hljs-keyword">import</span> requests
<span class="hljs-keyword">import</span> random
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset
<span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image
<span class="hljs-keyword">from</span> torchvision.io <span class="hljs-keyword">import</span> ImageReadMode, read_image
<span class="hljs-keyword">from</span> torchvision.transforms <span class="hljs-keyword">import</span> CenterCrop, ConvertImageDtype, Normalize, Resize
<span class="hljs-keyword">from</span> torchvision.transforms.functional <span class="hljs-keyword">import</span> InterpolationMode
<span class="hljs-keyword">from</span> pdb <span class="hljs-keyword">import</span> set_trace
<span class="hljs-keyword">import</span> transformers
<span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> (
VisionTextDualEncoderProcessor,
VisionTextDualEncoderModel,
AutoImageProcessor,
AutoModel,
AutoTokenizer,
HfArgumentParser,
Trainer,
TrainingArguments,
set_seed,
)
<span class="hljs-keyword">from</span> transformers.trainer_utils <span class="hljs-keyword">import</span> get_last_checkpoint
<span class="hljs-keyword">from</span> transformers.utils <span class="hljs-keyword">import</span> check_min_version, send_example_telemetry
<span class="hljs-keyword">from</span> transformers.utils.versions <span class="hljs-keyword">import</span> require_version
<span class="hljs-keyword">import</span> warnings
warnings.filterwarnings(<span class="hljs-string">'ignore'</span>, category=UserWarning, module=<span class="hljs-string">'torchvision'</span>)
<span class="hljs-comment"># Will error if the minimal version of Transformers is not installed. Remove at your own risks.</span>
check_min_version(<span class="hljs-string">"4.31.0.dev0"</span>)
require_version(<span class="hljs-string">"datasets>=1.8.0"</span>, <span class="hljs-string">"To fix: pip install -r examples/pytorch/contrastive-image-text/requirements.txt"</span>)
</code></pre>
<pre><code>c:\Users\kevol\anaconda3\envs\pytorch\lib\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
</code></pre>
<p>The image encoder for our CLIP-style dual-encoder model is openai/clip-vit-base-patch32, and the text encoder is roberta-base.</p>
<pre><code class="language-python">model = VisionTextDualEncoderModel.from_vision_text_pretrained(
<span class="hljs-string">"openai/clip-vit-base-patch32"</span>, <span class="hljs-string">"roberta-base"</span>
)
tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"roberta-base"</span>)
image_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"openai/clip-vit-base-patch32"</span>)
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
model.save_pretrained(<span class="hljs-string">"clip-roberta"</span>)
processor.save_pretrained(<span class="hljs-string">"clip-roberta"</span>)
</code></pre>
<pre><code>config.json: 100%|██████████| 4.19k/4.19k [00:00<00:00, 1.18MB/s]
c:\Users\kevol\anaconda3\envs\pytorch\lib\site-packages\huggingface_hub\file_download.py:149: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\kevol\.cache\huggingface\hub\models--openai--clip-vit-base-patch32. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
pytorch_model.bin: 100%|██████████| 605M/605M [00:20<00:00, 29.5MB/s]
config.json: 100%|██████████| 481/481 [00:00<00:00, 41.7kB/s]
c:\Users\kevol\anaconda3\envs\pytorch\lib\site-packages\huggingface_hub\file_download.py:149: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\kevol\.cache\huggingface\hub\models--roberta-base. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
model.safetensors: 100%|██████████| 499M/499M [00:16<00:00, 30.0MB/s]
Some weights of RobertaModel were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
The projection layer and logit scale weights `['visual_projection.weight', 'text_projection.weight', 'logit_scale']` are newly initialized. You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
vocab.json: 100%|██████████| 899k/899k [00:00<00:00, 2.38MB/s]
merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 1.84MB/s]
tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 10.6MB/s]
preprocessor_config.json: 100%|██████████| 316/316 [00:00<00:00, 100kB/s]
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
</code></pre>
<p>We define two argument classes, ModelArguments and DataTrainingArguments. This allows us to use Hugging Face’s HfArgumentParser in conjunction with the default TrainingArguments.</p>
<pre><code class="language-python"><span class="hljs-meta">@dataclass</span>
<span class="hljs-keyword">class</span> <span class="hljs-title class_">ModelArguments</span>:
<span class="hljs-string">"""</span>
<span class="hljs-string"> Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.</span>
<span class="hljs-string"> """</span>
model_name_or_path: <span class="hljs-built_in">str</span> = field(
metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"Path to pretrained model or model identifier from huggingface.co/models"</span>},
)
config_name: <span class="hljs-type">Optional</span>[<span class="hljs-built_in">str</span>] = field(
default=<span class="hljs-literal">None</span>, metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"Pretrained config name or path if not the same as model_name"</span>}
)
tokenizer_name: <span class="hljs-type">Optional</span>[<span class="hljs-built_in">str</span>] = field(
default=<span class="hljs-literal">None</span>, metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"Pretrained tokenizer name or path if not the same as model_name"</span>}
)
image_processor_name: <span class="hljs-built_in">str</span> = field(default=<span class="hljs-literal">None</span>, metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"Name or path of preprocessor config."</span>})
cache_dir: <span class="hljs-type">Optional</span>[<span class="hljs-built_in">str</span>] = field(
default=<span class="hljs-literal">None</span>, metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"Where do you want to store the pretrained models downloaded from s3"</span>}
)
model_revision: <span class="hljs-built_in">str</span> = field(
default=<span class="hljs-string">"main"</span>,
metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"The specific model version to use (can be a branch name, tag name or commit id)."</span>},
)
use_fast_tokenizer: <span class="hljs-built_in">bool</span> = field(
default=<span class="hljs-literal">True</span>,
metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."</span>},
)
use_auth_token: <span class="hljs-built_in">bool</span> = field(
default=<span class="hljs-literal">False</span>,
metadata={
<span class="hljs-string">"help"</span>: (
<span class="hljs-string">"Will use the token generated when running `huggingface-cli login` (necessary to use this script "</span>
<span class="hljs-string">"with private models)."</span>
)
},
)
</code></pre>
<pre><code class="language-python"><span class="hljs-meta">@dataclass</span>
<span class="hljs-keyword">class</span> <span class="hljs-title class_">DataTrainingArguments</span>:
<span class="hljs-string">"""</span>
<span class="hljs-string"> Arguments pertaining to what data we are going to input our model for training and eval.</span>
<span class="hljs-string"> """</span>
dataset_name: <span class="hljs-type">Optional</span>[<span class="hljs-built_in">str</span>] = field(
default=<span class="hljs-literal">None</span>, metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"The name of the dataset to use (via the datasets library)."</span>}
)
data_dir: <span class="hljs-type">Optional</span>[<span class="hljs-built_in">str</span>] = field(default=<span class="hljs-literal">None</span>, metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"The data directory containing input files."</span>})
image_column: <span class="hljs-type">Optional</span>[<span class="hljs-built_in">str</span>] = field(
default=<span class="hljs-string">"image_path"</span>,
metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"The name of the column in the datasets containing the full image file paths."</span>},
)
caption_column: <span class="hljs-type">Optional</span>[<span class="hljs-built_in">str</span>] = field(
default=<span class="hljs-string">"caption"</span>,
metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"The name of the column in the datasets containing the image captions."</span>},
)
max_seq_length: <span class="hljs-type">Optional</span>[<span class="hljs-built_in">int</span>] = field(
default=<span class="hljs-number">128</span>,
metadata={
<span class="hljs-string">"help"</span>: (
<span class="hljs-string">"The maximum total input sequence length after tokenization. Sequences longer "</span>
<span class="hljs-string">"than this will be truncated, sequences shorter will be padded."</span>
)
},
)
overwrite_cache: <span class="hljs-built_in">bool</span> = field(
default=<span class="hljs-literal">False</span>, metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"Overwrite the cached training and evaluation sets"</span>}
)
preprocessing_num_workers: <span class="hljs-type">Optional</span>[<span class="hljs-built_in">int</span>] = field(
default=<span class="hljs-literal">None</span>,
metadata={<span class="hljs-string">"help"</span>: <span class="hljs-string">"The number of processes to use for the preprocessing."</span>},
)
</code></pre>
<pre><code class="language-python">args_dict = {<span class="hljs-string">'output_dir'</span>: <span class="hljs-string">'./clip-roberta-finetuned'</span>,
<span class="hljs-string">'model_name_or_path'</span>: <span class="hljs-string">'./clip-roberta'</span>,
<span class="hljs-string">'data_dir'</span>: <span class="hljs-string">'./data'</span>,
<span class="hljs-string">'dataset_name'</span>: <span class="hljs-string">'arampacha/rsicd'</span>,
<span class="hljs-string">'image_column'</span>: <span class="hljs-string">'image'</span>,
<span class="hljs-string">'caption_column'</span>: <span class="hljs-string">'captions'</span>,
<span class="hljs-string">'remove_unused_columns'</span>: <span class="hljs-literal">False</span>,
<span class="hljs-string">'per_device_train_batch_size'</span>: <span class="hljs-number">64</span>,
<span class="hljs-string">'per_device_eval_batch_size'</span>: <span class="hljs-number">64</span>,
<span class="hljs-string">'learning_rate'</span>: <span class="hljs-number">5e-05</span>,
<span class="hljs-string">'warmup_steps'</span>: <span class="hljs-number">0</span>,
<span class="hljs-string">'weight_decay'</span>: <span class="hljs-number">0.1</span>,
<span class="hljs-string">'overwrite_output_dir'</span>: <span class="hljs-literal">True</span>,
<span class="hljs-string">'push_to_hub'</span>: <span class="hljs-literal">False</span>}
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_dict(args_dict)
</code></pre>
<pre><code class="language-python"><span class="hljs-comment"># Dataset</span>
<span class="hljs-keyword">class</span> <span class="hljs-title class_">Transform</span>(torch.nn.Module):
<span class="hljs-keyword">def</span> <span class="hljs-title function_">__init__</span>(<span class="hljs-params">self, image_size, mean, std</span>):
<span class="hljs-built_in">super</span>().__init__()
self.transforms = torch.nn.Sequential(
Resize([image_size], interpolation=InterpolationMode.BICUBIC, antialias=<span class="hljs-literal">True</span>),
CenterCrop(image_size),
ConvertImageDtype(torch.<span class="hljs-built_in">float</span>),
Normalize(mean, std),
)
<span class="hljs-keyword">def</span> <span class="hljs-title function_">forward</span>(<span class="hljs-params">self, x</span>) -> torch.Tensor:
<span class="hljs-string">"""`x` should be an instance of `PIL.Image.Image`"""</span>
<span class="hljs-keyword">with</span> torch.no_grad():
x = self.transforms(x)
<span class="hljs-keyword">return</span> x
</code></pre>
<pre><code class="language-python"><span class="hljs-keyword">def</span> <span class="hljs-title function_">collate_fn</span>(<span class="hljs-params">examples</span>):
pixel_values = torch.stack([example[<span class="hljs-string">"pixel_values"</span>] <span class="hljs-keyword">for</span> example <span class="hljs-keyword">in</span> examples])
input_ids = torch.tensor([example[<span class="hljs-string">"input_ids"</span>] <span class="hljs-keyword">for</span> example <span class="hljs-keyword">in</span> examples], dtype=torch.long)
attention_mask = torch.tensor([example[<span class="hljs-string">"attention_mask"</span>] <span class="hljs-keyword">for</span> example <span class="hljs-keyword">in</span> examples], dtype=torch.long)
<span class="hljs-keyword">return</span> {
<span class="hljs-string">"pixel_values"</span>: pixel_values,
<span class="hljs-string">"input_ids"</span>: input_ids,
<span class="hljs-string">"attention_mask"</span>: attention_mask,
<span class="hljs-string">"return_loss"</span>: <span class="hljs-literal">True</span>,
}
</code></pre>
<pre><code class="language-python">dataset = datasets.load_dataset(<span class="hljs-string">"arampacha/rsicd"</span>)
</code></pre>
<pre><code>Downloading metadata: 100%|██████████| 1.03k/1.03k [00:00<?, ?B/s]
Downloading data: 100%|██████████| 419M/419M [01:28<00:00, 4.73MB/s]
Downloading data: 100%|██████████| 55.1M/55.1M [00:10<00:00, 5.43MB/s]
Downloading data: 100%|██████████| 51.6M/51.6M [00:09<00:00, 5.18MB/s]
Downloading data files: 100%|██████████| 3/3 [01:48<00:00, 36.25s/it]
Extracting data files: 100%|██████████| 3/3 [00:00<00:00, 117.53it/s]
Generating train split: 100%|██████████| 8734/8734 [00:02<00:00, 3624.40 examples/s]
Generating test split: 100%|██████████| 1093/1093 [00:00<00:00, 3410.02 examples/s]
Generating valid split: 100%|██████████| 1094/1094 [00:00<00:00, 4414.00 examples/s]
</code></pre>
<pre><code class="language-python">dataset
</code></pre>
<pre><code>DatasetDict({
train: Dataset({
features: ['filename', 'captions', 'image'],
num_rows: 8734
})
test: Dataset({
features: ['filename', 'captions', 'image'],
num_rows: 1093
})
valid: Dataset({
features: ['filename', 'captions', 'image'],
num_rows: 1094
})
})
</code></pre>
<pre><code class="language-python"><span class="hljs-comment"># Seeing an example</span>
<span class="hljs-keyword">def</span> <span class="hljs-title function_">show_images</span>(<span class="hljs-params">dset, num_images=<span class="hljs-number">8</span>, without_caption=<span class="hljs-literal">True</span>,num_columns=<span class="hljs-number">2</span>,img_size=(<span class="hljs-params"><span class="hljs-number">4</span>, <span class="hljs-number">4</span></span>)</span>):
num_rows = -(-num_images // num_columns) <span class="hljs-comment"># Ceiling division</span>
fig = plt.figure(figsize=(img_size[<span class="hljs-number">0</span>] * num_columns, img_size[<span class="hljs-number">1</span>] * num_rows))
_<span class="hljs-built_in">list</span> = <span class="hljs-built_in">list</span>(<span class="hljs-built_in">range</span>(<span class="hljs-built_in">len</span>(dset)))
<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(num_images):
index = _<span class="hljs-built_in">list</span>[i]
ax = fig.add_subplot(num_rows, num_columns, i+<span class="hljs-number">1</span>)
image = dset[index][<span class="hljs-string">'image'</span>]
plt.imshow(image)
<span class="hljs-comment"># Set title as the first caption</span>
<span class="hljs-keyword">if</span> without_caption:
caption = dset[index][<span class="hljs-string">'captions'</span>][<span class="hljs-number">0</span>]
ax.set_title(caption, fontsize=<span class="hljs-number">10</span>)
<span class="hljs-comment"># Remove axis</span>
plt.axis(<span class="hljs-string">'off'</span>)
plt.tight_layout()
plt.subplots_adjust(wspace=<span class="hljs-number">0.5</span>, hspace=<span class="hljs-number">0.01</span>) <span class="hljs-comment"># Adjust these values as needed</span>
plt.show()
</code></pre>
<pre><code class="language-python">show_images(dataset[<span class="hljs-string">'train'</span>], num_images=<span class="hljs-number">4</span>, without_caption=<span class="hljs-literal">True</span>)
</code></pre>
<p><a href="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_13_0.png" rel="nofollow"><img alt="png" src="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_13_0.png"/></a></p>
<pre><code class="language-python">tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer
)
</code></pre>
<pre><code class="language-python">image_processor = AutoImageProcessor.from_pretrained(
model_args.image_processor_name <span class="hljs-keyword">or</span> model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=<span class="hljs-literal">True</span> <span class="hljs-keyword">if</span> model_args.use_auth_token <span class="hljs-keyword">else</span> <span class="hljs-literal">None</span>,
)
model = AutoModel.from_pretrained(
model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=<span class="hljs-literal">True</span> <span class="hljs-keyword">if</span> model_args.use_auth_token <span class="hljs-keyword">else</span> <span class="hljs-literal">None</span>,
)
config = model.config
</code></pre>
<pre><code class="language-python">set_seed(training_args.seed)
</code></pre>
<pre><code class="language-python">image_transformations = Transform(
config.vision_config.image_size, image_processor.image_mean, image_processor.image_std
)
image_transformations = torch.jit.script(image_transformations)
</code></pre>
<pre><code class="language-python"><span class="hljs-keyword">def</span> <span class="hljs-title function_">tokenize_captions</span>(<span class="hljs-params">examples</span>):
captions = [example[<span class="hljs-number">0</span>] <span class="hljs-keyword">for</span> example <span class="hljs-keyword">in</span> examples[data_args.caption_column]]
text_inputs = tokenizer(captions, max_length=data_args.max_seq_length, padding=<span class="hljs-string">"max_length"</span>, truncation=<span class="hljs-literal">True</span>)
examples[<span class="hljs-string">"input_ids"</span>] = text_inputs.input_ids
examples[<span class="hljs-string">"attention_mask"</span>] = text_inputs.attention_mask
<span class="hljs-keyword">return</span> examples
<span class="hljs-keyword">def</span> <span class="hljs-title function_">transform_images</span>(<span class="hljs-params">examples</span>):
images = [torch.tensor(np.array(image)).permute(<span class="hljs-number">2</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>) <span class="hljs-keyword">for</span> image <span class="hljs-keyword">in</span> examples[data_args.image_column]]
examples[<span class="hljs-string">"pixel_values"</span>] = [image_transformations(image) <span class="hljs-keyword">for</span> image <span class="hljs-keyword">in</span> images]
<span class="hljs-keyword">return</span> examples
<span class="hljs-keyword">def</span> <span class="hljs-title function_">filter_corrupt_images</span>(<span class="hljs-params">examples</span>):
<span class="hljs-string">"""remove problematic images"""</span>
valid_images = []
<span class="hljs-keyword">for</span> image_file <span class="hljs-keyword">in</span> examples[data_args.image_column]:
<span class="hljs-keyword">try</span>:
Image.<span class="hljs-built_in">open</span>(image_file)
valid_images.append(<span class="hljs-literal">True</span>)
<span class="hljs-keyword">except</span> Exception:
valid_images.append(<span class="hljs-literal">False</span>)
<span class="hljs-keyword">return</span> valid_images
</code></pre>
<pre><code class="language-python">train_dataset = dataset[<span class="hljs-string">"train"</span>]
train_dataset = train_dataset.<span class="hljs-built_in">map</span>(
function=tokenize_captions,
batched=<span class="hljs-literal">True</span>,
num_proc=data_args.preprocessing_num_workers,
load_from_cache_file=<span class="hljs-keyword">not</span> data_args.overwrite_cache,
desc=<span class="hljs-string">"Running tokenizer on train dataset"</span>,
)
train_dataset.set_transform(transform_images)
</code></pre>
<pre><code>Running tokenizer on train dataset: 100%|██████████| 8734/8734 [00:01<00:00, 6126.48 examples/s]
Parameter 'transform'=<function transform_images at 0x0000019A476F09D0> of the transform datasets.arrow_dataset.Dataset.set_format couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
</code></pre>
<pre><code class="language-python">train_dataset
</code></pre>
<pre><code>Dataset({
features: ['filename', 'captions', 'image', 'input_ids', 'attention_mask'],
num_rows: 8734
})
</code></pre>
<pre><code class="language-python">eval_dataset = dataset[<span class="hljs-string">"valid"</span>]
eval_dataset = eval_dataset.<span class="hljs-built_in">map</span>(
function=tokenize_captions,
batched=<span class="hljs-literal">True</span>,
num_proc=data_args.preprocessing_num_workers,
load_from_cache_file=<span class="hljs-keyword">not</span> data_args.overwrite_cache,
desc=<span class="hljs-string">"Running tokenizer on validation dataset"</span>,
)
eval_dataset.set_transform(transform_images)
</code></pre>
<pre><code>Running tokenizer on validation dataset: 100%|██████████| 1094/1094 [00:00<00:00, 3151.84 examples/s]
</code></pre>
<pre><code class="language-python">train_dataset, eval_dataset
</code></pre>
<pre><code>(Dataset({
features: ['filename', 'captions', 'image', 'input_ids', 'attention_mask'],
num_rows: 8734
}),
Dataset({
features: ['filename', 'captions', 'image', 'input_ids', 'attention_mask'],
num_rows: 1094
}))
</code></pre>
<pre><code class="language-python">processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
</code></pre>
<pre><code class="language-python"><span class="hljs-comment"># example from hugging face</span>
urls = [
<span class="hljs-string">"http://images.cocodataset.org/val2017/000000039769.jpg"</span>,
<span class="hljs-string">"https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg"</span>,
]
images = [Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw) <span class="hljs-keyword">for</span> url <span class="hljs-keyword">in</span> urls]
inputs = processor(
text=[<span class="hljs-string">"a photo of a cat"</span>], images=images, return_tensors=<span class="hljs-string">"pt"</span>, padding=<span class="hljs-literal">True</span>
)
inputs[<span class="hljs-string">'input_ids'</span>] = inputs[<span class="hljs-string">'input_ids'</span>].cuda()
inputs[<span class="hljs-string">'attention_mask'</span>] = inputs[<span class="hljs-string">'attention_mask'</span>].cuda()
inputs[<span class="hljs-string">'pixel_values'</span>] = inputs[<span class="hljs-string">'pixel_values'</span>].cuda()
model = model.cuda()
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
</code></pre>
<pre><code class="language-python">logits_per_image
</code></pre>
<pre><code>tensor([[-0.4522],
[ 0.7341]], device='cuda:0', grad_fn=<PermuteBackward0>)
</code></pre>
<pre><code class="language-python">images[<span class="hljs-number">0</span>]
</code></pre>
<p><a href="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_26_0.png" rel="nofollow"><img alt="png" src="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_26_0.png"/></a></p>
<pre><code class="language-python">images[<span class="hljs-number">1</span>]
</code></pre>
<p><a href="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_27_0.png" rel="nofollow"><img alt="png" src="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_27_0.png"/></a></p>
<pre><code class="language-python"><span class="hljs-comment"># base model prediction</span>
np.random.seed(<span class="hljs-number">0</span>)
indices = np.random.choice(<span class="hljs-built_in">len</span>(dataset[<span class="hljs-string">'valid'</span>]), <span class="hljs-number">8</span>, replace=<span class="hljs-literal">False</span>)
patches = dataset[<span class="hljs-string">'valid'</span>].select(indices.tolist())
</code></pre>
<pre><code class="language-python">show_images(patches, <span class="hljs-number">8</span>, without_caption=<span class="hljs-literal">False</span>, num_columns=<span class="hljs-number">4</span>,img_size=(<span class="hljs-number">3</span>, <span class="hljs-number">3</span>))
</code></pre>
<p><a href="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_29_0.png" rel="nofollow"><img alt="png" src="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_29_0.png"/></a></p>
<pre><code class="language-python"><span class="hljs-keyword">def</span> <span class="hljs-title function_">show_result</span>(<span class="hljs-params">model, patches, text, top_n = <span class="hljs-number">3</span></span>):
images = [patch[<span class="hljs-string">'image'</span>] <span class="hljs-keyword">for</span> patch <span class="hljs-keyword">in</span> patches]
inputs = processor(text=[text], images=images, return_tensors=<span class="hljs-string">"pt"</span>, padding=<span class="hljs-literal">True</span>)
inputs[<span class="hljs-string">'input_ids'</span>] = inputs[<span class="hljs-string">'input_ids'</span>].cuda()
inputs[<span class="hljs-string">'attention_mask'</span>] = inputs[<span class="hljs-string">'attention_mask'</span>].cuda()
inputs[<span class="hljs-string">'pixel_values'</span>] = inputs[<span class="hljs-string">'pixel_values'</span>].cuda()
model = model.cuda()
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
sorted_idx = (torch.sort(logits_per_image, dim=<span class="hljs-number">0</span>, descending=<span class="hljs-literal">True</span>)[<span class="hljs-number">1</span>][:,<span class="hljs-number">0</span>]).tolist()
sorted_idx = sorted_idx[:top_n]
patches_sorted = patches.select(sorted_idx)
show_images(patches_sorted, num_images=<span class="hljs-built_in">len</span>(patches_sorted), without_caption=<span class="hljs-literal">False</span>, num_columns=<span class="hljs-number">1</span>, img_size=(<span class="hljs-number">3</span>,<span class="hljs-number">3</span>))
</code></pre>
<pre><code class="language-python">show_result(model, patches, <span class="hljs-string">'more than one plane'</span>)
</code></pre>
<p><a href="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_31_0.png" rel="nofollow"><img alt="png" src="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_31_0.png"/></a></p>
<p>This was my first time training with the HF Trainer. It feels a little overwhelming, but I suspect that with more practice (and a better GPU) it will become easier than writing a fine-tuning loop by hand.</p>
<pre><code class="language-python"><span class="hljs-comment"># 8. Initalize our trainer</span>
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=collate_fn,
)
<span class="hljs-comment"># 9. Training</span>
train_result = trainer.train()
trainer.log_metrics(<span class="hljs-string">"train"</span>, train_result.metrics)
metrics = trainer.evaluate()
trainer.log_metrics(<span class="hljs-string">"eval"</span>, metrics)
</code></pre>
<pre><code> 0%| | 0/411 [00:00<?, ?it/s]c:\Users\kevol\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py:1501: UserWarning: operator () profile_node %376 : int = prim::profile_ivalue(%out_dtype.1)
does not have profile information (Triggered internally at C:\cb\pytorch_1000000000000\work\third_party\nvfuser\csrc\graph_fuser.cpp:108.)
return forward_call(*args, **kwargs)
100%|██████████| 411/411 [1:19:45<00:00, 11.64s/it]
{'train_runtime': 4785.2548, 'train_samples_per_second': 5.476, 'train_steps_per_second': 0.086, 'train_loss': 1.8650288454227495, 'epoch': 3.0}
***** train metrics *****
epoch = 3.0
train_loss = 1.865
train_runtime = 1:19:45.25
train_samples_per_second = 5.476
train_steps_per_second = 0.086
100%|██████████| 18/18 [00:20<00:00, 1.16s/it]
***** eval metrics *****
epoch = 3.0
eval_loss = 3.8813
eval_runtime = 0:00:22.34
eval_samples_per_second = 48.959
eval_steps_per_second = 0.806
</code></pre>
<pre><code class="language-python">show_result(model, patches, <span class="hljs-string">'green trees'</span>)
</code></pre>
<p><a href="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_34_0.png" rel="nofollow"><img alt="png" src="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_34_0.png"/></a></p>
<pre><code class="language-python">show_result(model, patches, <span class="hljs-string">'airplane'</span>)
</code></pre>
<p><a href="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_35_0.png" rel="nofollow"><img alt="png" src="https://raw.githubusercontent.com/KevorkSulahian/Non-Quant-DL/main/clip_finetune_files/clip_finetune_35_0.png"/></a></p>
<p>Honestly, the results are not the best, but I also didn't train for long. The numbers aren't the important part here; rather, I wanted to focus on how to fine-tune.</p>
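<p>As a possible follow-up (not part of the original notebook), here is a minimal sketch of persisting the fine-tuned model and reloading it for retrieval, assuming the training run above completed and that ./clip-roberta-finetuned (the output directory configured in args_dict) is used:</p>
<pre><code class="language-python"># Save the fine-tuned weights and the processor so they can be reloaded later.
trainer.save_model("./clip-roberta-finetuned")
processor.save_pretrained("./clip-roberta-finetuned")

# Reload for inference; AutoModel resolves the VisionTextDualEncoder config saved above,
# just as it did earlier when loading "./clip-roberta".
from transformers import AutoModel, VisionTextDualEncoderProcessor

finetuned_model = AutoModel.from_pretrained("./clip-roberta-finetuned").cuda()
finetuned_processor = VisionTextDualEncoderProcessor.from_pretrained("./clip-roberta-finetuned")

# Reuse the earlier helper to query the validation patches with the reloaded model.
show_result(finetuned_model, patches, "airplane")
</code></pre>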
|
Building a Motoku LLM Retrieval System Using Internet Computer Protocol, Motoko, and Node.js | https://hf.co/blog/theeseus-ai/icp-retrieval-system | Conclusion |
Building an AI-Powered Card Counter with TensorFlow | https://hf.co/blog/theeseus-ai/card-counting | Conclusion |
Tokenization Is A Dead Weight | https://hf.co/blog/apehex/tokenization-is-a-dead-weight | Resources |
Evaluate RAG pipeline using HuggingFace Open Source Models | https://hf.co/blog/lucifertrj/evaluate-rag | Try BeyondLLM |
Build Agentic Workflow using OpenAGI and HuggingFace models | https://hf.co/blog/lucifertrj/openagi-blog | Join the Community |
MotionLCM: The Fastest and Best Motion Generation Model | https://hf.co/blog/EvanTHU/motionlcm | 📜 Citation |
💃Introducing the first LLM-based Motion understanding model: MotionLLM | https://hf.co/blog/EvanTHU/motionllm | 📜 Citation |
🚨 ALERT: A Comprehensive Benchmark for Assessing Large Language Models' Safety through Red Teaming | https://hf.co/blog/sted97/alert | Further Resources |
𝗝𝘂𝗱𝗴𝗶𝗻𝗴 𝘁𝗵𝗲 𝗝𝘂𝗱𝗴𝗲𝘀: 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗻𝗴 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗮𝗻𝗱 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀 𝗶𝗻 𝗟𝗟𝗠𝘀-𝗮𝘀-𝗝𝘂𝗱𝗴𝗲𝘀 | https://hf.co/blog/singh96aman/judgingthejudges |
<p>
𝐂𝐚𝐧 𝐋𝐋𝐌𝐬 𝐬𝐞𝐫𝐯𝐞 𝐚𝐬 𝐫𝐞𝐥𝐢𝐚𝐛𝐥𝐞 𝐣𝐮𝐝𝐠𝐞𝐬 ⚖️?</p>
<p>We aim to identify the right metrics for evaluating judge LLMs and to understand their sensitivity to prompt guidelines, engineering, and specificity. With this paper, we want to urge caution ⚠️ against blindly using LLMs as a human proxy.</p>
<p>Aman Singh Thakur, Kartik Choudhary, Venkat Srinik Ramayapally, Sankaran Vaidyanathan, Dieuwke Hupkes</p>
<p>Arxiv link - <a href="https://arxiv.org/abs/2406.12624" rel="nofollow">https://arxiv.org/abs/2406.12624</a></p>
<p>Tweet Summary - <a href="https://x.com/iamsingh96aman/status/1804148173008703509" rel="nofollow">https://x.com/iamsingh96aman/status/1804148173008703509</a></p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/mB70GEv0weL-hxyRRFprz.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/mB70GEv0weL-hxyRRFprz.png"/></a></p>
<p>Key findings -</p>
<p>🌟 𝗧𝗼𝗽 𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗲𝗿𝘀: Only 𝗚𝗣𝗧-𝟰 and 𝗟𝗟𝗮𝗺𝗮-𝟯 𝟳𝟬𝗕 shine among 9 judge models. However, they still fall short of inter-human annotator agreement.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/usY0q6d_iY0NnHIR0dmxd.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/usY0q6d_iY0NnHIR0dmxd.png"/></a></p>
<p>📊 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 𝗠𝗲𝘁𝗿𝗶𝗰: Scores assigned by judges with 80%+ alignment with humans can still be 20 points apart! Cohen's kappa is a superior metric.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/i7CWzDBKfnwa8i5e09BaN.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/i7CWzDBKfnwa8i5e09BaN.png"/></a></p>
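<p>To make the metric point concrete, here is a small illustrative sketch (using made-up toy labels, not data from the paper) of how percent agreement and Cohen's kappa can disagree, for example for an overly lenient judge:</p>
<pre><code class="language-python">from sklearn.metrics import cohen_kappa_score
import numpy as np

# Hypothetical toy judgments (1 = correct, 0 = incorrect); not taken from the paper.
human = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
judge = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1])  # a lenient judge that marks everything correct

percent_agreement = (human == judge).mean()  # 0.80 -> looks like high alignment
kappa = cohen_kappa_score(human, judge)      # 0.0  -> no agreement beyond chance

print(f"percent agreement: {percent_agreement:.2f}, Cohen's kappa: {kappa:.2f}")
</code></pre>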
<p>⚖️ 𝗥𝗮𝗻𝗸𝗶𝗻𝗴 𝘃𝘀 𝘀𝗰𝗼𝗿𝗶𝗻𝗴: The judge most aligned in scores is not necessarily the most discriminative. In some cases, judge models with low alignment, such as Contains (lexical match) and JudgeLM-7B, outperform better-aligned models at 𝑟𝑎𝑛𝑘𝑖𝑛𝑔 models, because their biases are more systematic.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/dllGt9sZYrFdzRiHz195D.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/dllGt9sZYrFdzRiHz195D.png"/></a></p>
<p>🧩 𝗟𝗲𝗻𝗶𝗲𝗻𝗰𝘆: Judge LLMs tend to be more lenient than strict.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/LwwQDmo3YEXC_YWkiCTW0.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/LwwQDmo3YEXC_YWkiCTW0.png"/></a></p>
<p>🎭 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Judge LLMs can be easily tricked by controlled responses like "Yes," "Sure," and "I don't know."</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/OPBJJlEzGMFfYDlFYN7rP.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/OPBJJlEzGMFfYDlFYN7rP.png"/></a></p>
<p>🎯 𝗖𝗼𝗻𝘁𝗿𝗼𝗹𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆: It's not easy to steer large models, while smaller models get confused when too much detail is added.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/IDs2wjCsSXQ7dD9VJ_9rn.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63b491f3103617b0a5af6b4b/IDs2wjCsSXQ7dD9VJ_9rn.png"/></a></p>
|
Claude-3.5 Evaluation Results on Open VLM Leaderboard | https://hf.co/blog/KennyUTC/claude3-5 |
<p>
<a href="https://cdn-uploads.huggingface.co/production/uploads/63ee1379190ddd6214efd73a/-jf9u3KKGt0pYLD2wYFZs.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63ee1379190ddd6214efd73a/-jf9u3KKGt0pYLD2wYFZs.png"/></a></p>
<p>Claude3.5-Sonnet is the latest large multi-modal model released by Anthropic and the first version of the Claude 3.5 series. According to the official blog, it surpasses its predecessor Claude3-Opus, as well as competitors such as Gemini-1.5-Pro, in multi-modal understanding. To verify this, we tested Claude3.5-Sonnet on eight objective image-text multimodal evaluation benchmarks from the Open VLM Leaderboard.</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th align="center">Dataset \ Model</th>
<th align="center">GPT-4o-20240513</th>
<th align="center">Claude3.5-Sonnet</th>
<th align="center">Gemini-1.5-Pro</th>
<th align="center">GPT-4v-20240409</th>
<th align="center">Claude3-Opus</th>
</tr>
</thead><tbody><tr>
<td align="center">Overall Rank</td>
<td align="center"><strong>1</strong></td>
<td align="center"><em>2</em></td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">16</td>
</tr>
<tr>
<td align="center">Avg. Score</td>
<td align="center"><strong>69.9</strong></td>
<td align="center"><em>67.9</em></td>
<td align="center">64.4</td>
<td align="center">63.5</td>
<td align="center">54.4</td>
</tr>
<tr>
<td align="center">MMBench v1.1</td>
<td align="center"><strong>82.2</strong></td>
<td align="center">78.5</td>
<td align="center">73.9</td>
<td align="center"><em>79.8</em></td>
<td align="center">59.1</td>
</tr>
<tr>
<td align="center">MMStar</td>
<td align="center"><strong>63.9</strong></td>
<td align="center"><em>62.2</em></td>
<td align="center">59.1</td>
<td align="center">56.0</td>
<td align="center">45.7</td>
</tr>
<tr>
<td align="center">MMMU_VAL</td>
<td align="center"><strong>69.2</strong></td>
<td align="center"><em>65.9</em></td>
<td align="center">60.6</td>
<td align="center">61.7</td>
<td align="center">54.9</td>
</tr>
<tr>
<td align="center">MathVista_MINI</td>
<td align="center"><em>61.3</em></td>
<td align="center"><strong>61.6</strong></td>
<td align="center">57.7</td>
<td align="center">54.7</td>
<td align="center">45.8</td>
</tr>
<tr>
<td align="center">HallusionBench Avg.</td>
<td align="center"><strong>55.0</strong></td>
<td align="center"><em>49.9</em></td>
<td align="center">45.6</td>
<td align="center">43.9</td>
<td align="center">37.8</td>
</tr>
<tr>
<td align="center">AI2D_TEST</td>
<td align="center"><strong>84.6</strong></td>
<td align="center"><em>80.2</em></td>
<td align="center">79.1</td>
<td align="center">78.6</td>
<td align="center">70.6</td>
</tr>
<tr>
<td align="center">OCRBench</td>
<td align="center">736</td>
<td align="center"><strong>788</strong></td>
<td align="center"><em>754</em></td>
<td align="center">656</td>
<td align="center">694</td>
</tr>
<tr>
<td align="center">MMVet</td>
<td align="center"><strong>69.1</strong></td>
<td align="center">66</td>
<td align="center">64</td>
<td align="center"><em>67.5</em></td>
<td align="center">51.7</td>
</tr>
</tbody>
</table>
</div>
<p>The evaluation results show that the objective performance of Claude3.5-Sonnet has improved greatly over Claude3-Opus: its average score across all benchmarks improved by more than 10 points, and its overall ranking rose from 16th to 2nd. Specifically, Claude3.5 ranked in the top two on six of the eight benchmarks, and achieved the best results in multimodal mathematics and optical character recognition.</p>
<p><strong>Potential issues</strong>: API models such as GPT-4o and Claude3.5-Sonnet are released with officially reported performance on several multimodal evaluation benchmarks. Since the test scripts have not been made public, we were unable to reproduce some of the officially reported accuracies (such as on AI2D). If you can reproduce significantly higher accuracy on some benchmarks, please contact us for updates: <a href="mailto:[email protected]" rel="nofollow">[email protected]</a>. </p>
<p>For more detailed performance, please refer to the <a href="https://huggingface.co/spaces/opencompass/open_vlm_leaderboard">Open VLM Leaderboard</a>.</p>
|
seemore: Implement a Vision Language Model from Scratch | https://hf.co/blog/AviSoori1x/seemore-vision-language-model | Bringing everything together to implement Seemore: the simple Vision Language Model |
SeeMoE: Implementing a MoE Vision Language Model from Scratch | https://hf.co/blog/AviSoori1x/seemoe |
<p>
TL;DR: In this blog I implement a mixture-of-experts vision language model, consisting of an image encoder, a multimodal projection module, and a mixture-of-experts decoder language model, in pure PyTorch. The resulting implementation can be thought of as a scaled-down version of Grok 1.5 Vision and GPT-4 Vision (both have vision encoders connected to an MoE decoder model via a projection module). The name ‘seeMoE’ is my way of paying homage to Andrej Karpathy’s project ‘makemore’: for the decoder used here I implement a character-level autoregressive language model much like in his nanoGPT/makemore implementation, but with a twist. The twist is that the decoder is a mixture of experts (much like DBRX, Mixtral and Grok). My goal is for you to gain an intuitive understanding of how this seemingly state-of-the-art architecture works, so that you can improve upon it or use the key takeaways to build more useful systems.</p>
<p>The entire implementation can be found in seeMoE_from_Scratch.ipynb in the following repo: <a href="https://github.com/AviSoori1x/seemore" rel="nofollow">https://github.com/AviSoori1x/seemore</a></p>
<div align="center">
<img alt="seemore" height="500" src="https://github.com/AviSoori1x/seemore/blob/main/images/seeMoE.png?raw=true" width="500"/>
</div>
<p>If you've read my other blogs on implementing mixture of experts LLMs from scratch: <a href="https://huggingface.co/blog/AviSoori1x/makemoe-from-scratch">https://huggingface.co/blog/AviSoori1x/makemoe-from-scratch</a> and implementing a vision language model from scratch: <a href="https://huggingface.co/blog/AviSoori1x/seemore-vision-language-model">https://huggingface.co/blog/AviSoori1x/seemore-vision-language-model</a>, you'll realize that I'm combining the two to implement seeMoE. Essentially, all I'm doing here is replacing the feed-forward neural network in each transformer block of the decoder with a mixture of experts module with noisy Top-K gating. More information on how this is implemented, is given here: <a href="https://huggingface.co/blog/AviSoori1x/makemoe-from-scratch">https://huggingface.co/blog/AviSoori1x/makemoe-from-scratch</a>. I strongly encourage you to read these two blogs and carefully go through the repos linked to both the blogs before diving into this.</p>
<p>In ‘seeMoE’, my simple implementation of a Mixture of Experts vision language model (VLM), there are 3 main components.</p>
<div align="center">
<img alt="seemore" height="500" src="https://github.com/AviSoori1x/seemore/blob/main/images/moevlm.png?raw=true" width="500"/>
</div>
<ul>
<li><p>Image Encoder to extract visual features from images. In this case I use a from-scratch implementation of the original vision transformer used in CLIP. This is actually a popular choice in many modern VLMs. One notable exception is the Fuyu series of models from Adept, which passes the patchified images directly to the projection layer.</p>
</li>
<li><p>Vision-Language Projector - Image embeddings are not of the same shape as the text embeddings used by the decoder, so we need to ‘project’, i.e. change the dimensionality of, the image features extracted by the image encoder to match the text embedding space. The image features then become ‘visual tokens’ for the decoder. This could be a single linear layer or an MLP; I’ve used an MLP because it’s worth showing (a minimal sketch is given right after this list).</p>
</li>
<li><p>A decoder-only language model with a mixture-of-experts architecture. This is the component that ultimately generates text. In my implementation I’ve deviated a bit from what you see in LLaVA by incorporating the projection module into my decoder. Typically this is not done, and the architecture of the decoder (which is usually an already pretrained model) is left untouched. The biggest change here is that, as mentioned before, the feed-forward neural net / MLP in each transformer block is replaced by a mixture-of-experts block with a noisy top-k gating mechanism. Basically, each token (text tokens plus visual tokens that have been mapped to the same embedding space as text tokens) is only processed by the top k of the n experts in each transformer block. So if it's an MoE architecture with 8 experts and top-2 gating, only 2 of the experts will be activated per token.</p>
</li>
</ul>
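<p>Because the projector shows up again as <code>MultiModalProjector</code> in the decoder code further down but is defined in the seemore repo rather than in this post, here is a minimal sketch of what it looks like. This is assumed to mirror the seemore version; check the repo for the exact layer sizes and dropout used there.</p>
<pre><code class="language-python">import torch.nn as nn

class MultiModalProjector(nn.Module):
    """Minimal sketch: maps image-encoder features (image_embed_dim) into the
    decoder's text embedding space (n_embd), so they can act as 'visual tokens'."""
    def __init__(self, n_embd, image_embed_dim, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_embed_dim, 4 * image_embed_dim),
            nn.GELU(),
            nn.Linear(4 * image_embed_dim, n_embd),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        # x: (batch, image_embed_dim) -> (batch, n_embd); the decoder later unsqueezes
        # this into a single visual token that is prepended to the text tokens.
        return self.net(x)
</code></pre>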
<p>Since the Image Encoder and Vision Language Projector are unchanged from seemore (Linked above. Repo here: <a href="https://github.com/AviSoori1x/seemore" rel="nofollow">https://github.com/AviSoori1x/seemore</a>), I encourage you to read the blog/ go through the notebooks for details on those. </p>
<p>Now let's revisit the components of a sparse mixture of experts module: </p>
<ol>
<li>Experts - just n vanilla MLPs</li>
<li>A gating/ routing mechanism</li>
<li>Weighted summation of the activated experts’ outputs, based on the routing weights</li>
</ol>
<img alt="seemore" height="700" src="https://raw.githubusercontent.com/AviSoori1x/makeMoE/main/images/experts.png" width="700"/>
<p>First, the 'Expert' which is just an MLP like we saw earlier when implementing the Encoder. </p>
<pre><code class="language-python"><span class="hljs-comment">#Expert module</span>
<span class="hljs-keyword">class</span> <span class="hljs-title class_">Expert</span>(nn.Module):
<span class="hljs-string">""" An MLP is a simple linear layer followed by a non-linearity i.e. each Expert """</span>
<span class="hljs-keyword">def</span> <span class="hljs-title function_">__init__</span>(<span class="hljs-params">self, n_embed</span>):
<span class="hljs-built_in">super</span>().__init__()
self.net = nn.Sequential(
nn.Linear(n_embed, <span class="hljs-number">4</span> * n_embed),
nn.ReLU(),
nn.Linear(<span class="hljs-number">4</span> * n_embed, n_embed),
nn.Dropout(dropout),
)
<span class="hljs-keyword">def</span> <span class="hljs-title function_">forward</span>(<span class="hljs-params">self, x</span>):
<span class="hljs-keyword">return</span> self.net(x)
</code></pre>
<p>The routing module decides which experts will be activated. Noisy top-k gating/routing adds a bit of Gaussian noise to ensure there’s a fine balance between exploration and exploitation in picking the top-k experts for each token. This reduces the odds of the same few experts getting picked every time, which would defeat the purpose of having a larger parameter count with sparse activation for better generalizability.</p>
<img alt="seemore" height="700" src="https://raw.githubusercontent.com/AviSoori1x/makeMoE/main/images/noisytopkgating.png" width="700"/>
<pre><code class="language-python">
<span class="hljs-comment">#noisy top-k gating</span>
<span class="hljs-keyword">class</span> <span class="hljs-title class_">NoisyTopkRouter</span>(nn.Module):
<span class="hljs-keyword">def</span> <span class="hljs-title function_">__init__</span>(<span class="hljs-params">self, n_embed, num_experts, top_k</span>):
<span class="hljs-built_in">super</span>(NoisyTopkRouter, self).__init__()
self.top_k = top_k
<span class="hljs-comment">#layer for router logits</span>
self.topkroute_linear = nn.Linear(n_embed, num_experts)
self.noise_linear =nn.Linear(n_embed, num_experts)
<span class="hljs-keyword">def</span> <span class="hljs-title function_">forward</span>(<span class="hljs-params">self, mh_output</span>):
<span class="hljs-comment"># mh_output is the output tensor from the multihead self-attention block</span>
logits = self.topkroute_linear(mh_output)
<span class="hljs-comment">#Noise logits</span>
noise_logits = self.noise_linear(mh_output)
<span class="hljs-comment">#Adding scaled unit gaussian noise to the logits</span>
noise = torch.randn_like(logits)*F.softplus(noise_logits)
noisy_logits = logits + noise
top_k_logits, indices = noisy_logits.topk(self.top_k, dim=-<span class="hljs-number">1</span>)
zeros = torch.full_like(noisy_logits, <span class="hljs-built_in">float</span>(<span class="hljs-string">'-inf'</span>))
sparse_logits = zeros.scatter(-<span class="hljs-number">1</span>, indices, top_k_logits)
router_output = F.softmax(sparse_logits, dim=-<span class="hljs-number">1</span>)
<span class="hljs-keyword">return</span> router_output, indices
</code></pre>
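<p>To make the gating behaviour concrete, here is a quick toy check (the sizes are assumptions of mine, not values from the notebook): for every token the router produces a probability vector over the experts in which everything outside the top-k is exactly zero.</p>
<pre><code class="language-python">import torch

# toy sizes: batch of 2 sequences, 8 tokens each, embedding dim 32, 4 experts, top-2 routing
mh_output = torch.randn(2, 8, 32)
router = NoisyTopkRouter(n_embed=32, num_experts=4, top_k=2)
routing_weights, expert_indices = router(mh_output)

print(routing_weights.shape)  # torch.Size([2, 8, 4]): one weight per expert, per token
print(expert_indices.shape)   # torch.Size([2, 8, 2]): the two chosen experts per token
print(routing_weights[0, 0])  # e.g. tensor([0.0000, 0.6312, 0.0000, 0.3688]): non-top-k entries are zero
</code></pre>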
<p>Now both noisy top-k gating and the experts can be combined to create a sparse Mixture of Experts module. Note that the weighted summation of expert outputs has been incorporated into the forward pass to yield the output for each token.</p>
<pre><code class="language-python"><span class="hljs-comment">#Now create the sparse mixture of experts module</span>
<span class="hljs-keyword">class</span> <span class="hljs-title class_">SparseMoE</span>(nn.Module):
<span class="hljs-keyword">def</span> <span class="hljs-title function_">__init__</span>(<span class="hljs-params">self, n_embed, num_experts, top_k</span>):
<span class="hljs-built_in">super</span>(SparseMoE, self).__init__()
self.router = NoisyTopkRouter(n_embed, num_experts, top_k)
self.experts = nn.ModuleList([Expert(n_embed) <span class="hljs-keyword">for</span> _ <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(num_experts)])
self.top_k = top_k
<span class="hljs-keyword">def</span> <span class="hljs-title function_">forward</span>(<span class="hljs-params">self, x</span>):
gating_output, indices = self.router(x)
final_output = torch.zeros_like(x)
<span class="hljs-comment"># Reshape inputs for batch processing</span>
flat_x = x.view(-<span class="hljs-number">1</span>, x.size(-<span class="hljs-number">1</span>))
flat_gating_output = gating_output.view(-<span class="hljs-number">1</span>, gating_output.size(-<span class="hljs-number">1</span>))
<span class="hljs-comment"># Process each expert in parallel</span>
<span class="hljs-keyword">for</span> i, expert <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(self.experts):
<span class="hljs-comment"># Create a mask for the inputs where the current expert is in top-k</span>
expert_mask = (indices == i).<span class="hljs-built_in">any</span>(dim=-<span class="hljs-number">1</span>)
flat_mask = expert_mask.view(-<span class="hljs-number">1</span>)
<span class="hljs-keyword">if</span> flat_mask.<span class="hljs-built_in">any</span>():
expert_input = flat_x[flat_mask]
expert_output = expert(expert_input)
<span class="hljs-comment"># Extract and apply gating scores</span>
gating_scores = flat_gating_output[flat_mask, i].unsqueeze(<span class="hljs-number">1</span>)
weighted_output = expert_output * gating_scores
<span class="hljs-comment"># Update final output additively by indexing and adding</span>
final_output[expert_mask] += weighted_output.squeeze(<span class="hljs-number">1</span>)
<span class="hljs-keyword">return</span> final_output
</code></pre>
<p>This can now be combined with multihead self attention to create a sparse MoE transformer block.</p>
<pre><code class="language-python"><span class="hljs-keyword">class</span> <span class="hljs-title class_">SparseMoEBlock</span>(nn.Module):
<span class="hljs-keyword">def</span> <span class="hljs-title function_">__init__</span>(<span class="hljs-params">self, n_embd, num_heads, num_experts, top_k, dropout=<span class="hljs-number">0.1</span>, is_decoder=<span class="hljs-literal">False</span></span>):
<span class="hljs-built_in">super</span>().__init__()
<span class="hljs-comment"># Layer normalization for the input to the attention layer</span>
self.ln1 = nn.LayerNorm(n_embd)
<span class="hljs-comment"># Multi-head attention module</span>
self.attn = MultiHeadAttention(n_embd, num_heads, dropout, is_decoder)
<span class="hljs-comment"># Layer normalization for the input to the FFN</span>
self.ln2 = nn.LayerNorm(n_embd)
<span class="hljs-comment"># Sparse mixture-of-experts block (replaces the dense FFN)</span>
self.sparseMoE = SparseMoE(n_embd, num_experts, top_k)
<span class="hljs-keyword">def</span> <span class="hljs-title function_">forward</span>(<span class="hljs-params">self, x</span>):
original_x = x <span class="hljs-comment"># Save the input for the residual connection</span>
<span class="hljs-comment"># Apply layer normalization to the input</span>
x = self.ln1(x)
<span class="hljs-comment"># Apply multi-head attention</span>
attn_output = self.attn(x)
<span class="hljs-comment"># Add the residual connection (original input) to the attention output</span>
x = original_x + attn_output
<span class="hljs-comment"># Apply layer normalization to the input to the FFN</span>
x = self.ln2(x)
<span class="hljs-comment"># Apply the FFN</span>
sparseMoE_output = self.sparseMoE(x)
<span class="hljs-comment"># Add the residual connection (input to FFN) to the FFN output</span>
x = x + sparseMoE_output
<span class="hljs-keyword">return</span> x
</code></pre>
<p>Now we combine the sparse MoE transformer blocks into a decoder language model that has been modified to accommodate the ‘visual tokens’ created by the vision-language projector module. Typically, the decoder language model (sparse MoE or dense) would be kept unmodified and would simply receive embeddings; here I’ve incorporated the vision-language projector into the model architecture to keep things simple. A detailed write-up is found in this blog: <a href="https://huggingface.co/blog/AviSoori1x/seemore-vision-language-model">https://huggingface.co/blog/AviSoori1x/seemore-vision-language-model</a></p>
<pre><code class="language-python"><span class="hljs-keyword">class</span> <span class="hljs-title class_">MoEDecoderLanguageModel</span>(nn.Module):
<span class="hljs-keyword">def</span> <span class="hljs-title function_">__init__</span>(<span class="hljs-params">self, n_embd, image_embed_dim, vocab_size, num_heads, n_layer, num_experts, top_k, use_images=<span class="hljs-literal">False</span></span>):
<span class="hljs-built_in">super</span>().__init__()
self.use_images = use_images
<span class="hljs-comment"># Token embedding table</span>
self.token_embedding_table = nn.Embedding(vocab_size, n_embd)
<span class="hljs-comment"># Position embedding table</span>
self.position_embedding_table = nn.Embedding(<span class="hljs-number">1000</span>, n_embd)
<span class="hljs-keyword">if</span> use_images:
<span class="hljs-comment"># Image projection layer to align image embeddings with text embeddings</span>
self.image_projection = MultiModalProjector(n_embd, image_embed_dim)
<span class="hljs-comment"># Stack of transformer decoder blocks</span>
self.sparseMoEBlocks = nn.Sequential(*[SparseMoEBlock(n_embd, num_heads, num_experts, top_k, is_decoder=<span class="hljs-literal">True</span>) <span class="hljs-keyword">for</span> _ <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(n_layer)])
<span class="hljs-comment"># Final layer normalization</span>
self.ln_f = nn.LayerNorm(n_embd)
<span class="hljs-comment"># Language modeling head</span>
self.lm_head = nn.Linear(n_embd, vocab_size)
<span class="hljs-keyword">def</span> <span class="hljs-title function_">forward</span>(<span class="hljs-params">self, idx, image_embeds=<span class="hljs-literal">None</span>, targets=<span class="hljs-literal">None</span></span>):
<span class="hljs-comment"># Get token embeddings from the input indices</span>
tok_emb = self.token_embedding_table(idx)
<span class="hljs-keyword">if</span> self.use_images <span class="hljs-keyword">and</span> image_embeds <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>:
<span class="hljs-comment"># Project and concatenate image embeddings with token embeddings</span>
img_emb = self.image_projection(image_embeds).unsqueeze(<span class="hljs-number">1</span>)
tok_emb = torch.cat([img_emb, tok_emb], dim=<span class="hljs-number">1</span>)
<span class="hljs-comment"># Get position embeddings</span>
pos_emb = self.position_embedding_table(torch.arange(tok_emb.size(<span class="hljs-number">1</span>), device=device)).unsqueeze(<span class="hljs-number">0</span>)
<span class="hljs-comment"># Add position embeddings to token embeddings</span>
x = tok_emb + pos_emb
<span class="hljs-comment"># Pass through the transformer decoder blocks</span>
x = self.sparseMoEBlocks(x)
<span class="hljs-comment"># Apply final layer normalization</span>
x = self.ln_f(x)
<span class="hljs-comment"># Get the logits from the language modeling head</span>
logits = self.lm_head(x)
<span class="hljs-keyword">if</span> targets <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>:
<span class="hljs-keyword">if</span> self.use_images <span class="hljs-keyword">and</span> image_embeds <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>:
<span class="hljs-comment"># Prepare targets by concatenating a dummy target for the image embedding</span>
batch_size = idx.size(<span class="hljs-number">0</span>)
targets = torch.cat([torch.full((batch_size, <span class="hljs-number">1</span>), -<span class="hljs-number">100</span>, dtype=torch.long, device=device), targets], dim=<span class="hljs-number">1</span>)
<span class="hljs-comment"># Compute the cross-entropy loss</span>
loss = F.cross_entropy(logits.view(-<span class="hljs-number">1</span>, logits.size(-<span class="hljs-number">1</span>)), targets.view(-<span class="hljs-number">1</span>), ignore_index=-<span class="hljs-number">100</span>)
<span class="hljs-keyword">return</span> logits, loss
<span class="hljs-keyword">return</span> logits
    def generate(self, idx, image_embeds, max_new_tokens):
        # The autoregressive character-level generation loop is just like in any other decoder model
        # implementation; a minimal version (multinomial sampling from the last position) is filled in here.
        generated = idx
        for _ in range(max_new_tokens):
            # Forward pass without targets returns logits; the visual token is re-attached on every step,
            # so the sequence length (plus one) must stay under the 1000-position embedding table.
            logits = self(generated, image_embeds)
            probs = F.softmax(logits[:, -1, :], dim=-1)
            next_token = torch.multinomial(probs, num_samples=1)
            generated = torch.cat((generated, next_token), dim=1)
        return generated
</code></pre>
<p>Now that we have our three key components, we can put it all together into a sparse Mixture of Experts Vision Language Model. The full implementation is given below. If you were to remove the assert statements for error handling, it is very simple. Coming back full circle to the outline given at the beginning of the blog, all that’s happening here is:</p>
<ol>
<li><p>Get image features from the vision encoder (here it’s a vision transformer, but it could be any model that can generate features from an image input, such as a ResNet or a traditional convolutional neural network; needless to say, performance may suffer)</p>
</li>
<li><p>A projection module for projecting image tokens to the same embedding space as text embeddings for the decoder (this projector is integrated with the decoder in this implementation)</p>
</li>
<li><p>A decoder language model with a sparseMoE architecture for generating text conditioned on a preceding image.</p>
</li>
</ol>
<pre><code class="language-python"><span class="hljs-keyword">class</span> <span class="hljs-title class_">VisionMoELanguageModel</span>(nn.Module):
<span class="hljs-keyword">def</span> <span class="hljs-title function_">__init__</span>(<span class="hljs-params">self, n_embd, image_embed_dim, vocab_size, n_layer, img_size, patch_size, num_heads, num_blks, emb_dropout, blk_dropout, num_experts, top_k</span>):
<span class="hljs-built_in">super</span>().__init__()
<span class="hljs-comment"># Set num_hiddens equal to image_embed_dim</span>
num_hiddens = image_embed_dim
<span class="hljs-comment"># Assert that num_hiddens is divisible by num_heads</span>
<span class="hljs-keyword">assert</span> num_hiddens % num_heads == <span class="hljs-number">0</span>, <span class="hljs-string">"num_hiddens must be divisible by num_heads"</span>
<span class="hljs-comment"># Initialize the vision encoder (ViT)</span>
self.vision_encoder = ViT(img_size, patch_size, num_hiddens, num_heads, num_blks, emb_dropout, blk_dropout)
<span class="hljs-comment"># Initialize the language model decoder (DecoderLanguageModel)</span>
self.decoder = MoEDecoderLanguageModel(n_embd, image_embed_dim, vocab_size, num_heads, n_layer,num_experts, top_k, use_images=<span class="hljs-literal">True</span>)
<span class="hljs-keyword">def</span> <span class="hljs-title function_">forward</span>(<span class="hljs-params">self, img_array, idx, targets=<span class="hljs-literal">None</span></span>):
<span class="hljs-comment"># Get the image embeddings from the vision encoder</span>
image_embeds = self.vision_encoder(img_array)
<span class="hljs-comment"># Check if the image embeddings are valid</span>
<span class="hljs-keyword">if</span> image_embeds.nelement() == <span class="hljs-number">0</span> <span class="hljs-keyword">or</span> image_embeds.shape[<span class="hljs-number">1</span>] == <span class="hljs-number">0</span>:
<span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"Something is wrong with the ViT model. It's returning an empty tensor or the embedding dimension is empty."</span>)
<span class="hljs-keyword">if</span> targets <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>:
<span class="hljs-comment"># If targets are provided, compute the logits and loss</span>
logits, loss = self.decoder(idx, image_embeds, targets)
<span class="hljs-keyword">return</span> logits, loss
<span class="hljs-keyword">else</span>:
<span class="hljs-comment"># If targets are not provided, compute only the logits</span>
logits = self.decoder(idx, image_embeds)
<span class="hljs-keyword">return</span> logits
<span class="hljs-keyword">def</span> <span class="hljs-title function_">generate</span>(<span class="hljs-params">self, img_array, idx, max_new_tokens</span>):
<span class="hljs-comment"># Get the image embeddings from the vision encoder</span>
image_embeds = self.vision_encoder(img_array)
<span class="hljs-comment"># Check if the image embeddings are valid</span>
<span class="hljs-keyword">if</span> image_embeds.nelement() == <span class="hljs-number">0</span> <span class="hljs-keyword">or</span> image_embeds.shape[<span class="hljs-number">1</span>] == <span class="hljs-number">0</span>:
<span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"Something is wrong with the ViT model. It's returning an empty tensor or the embedding dimension is empty."</span>)
<span class="hljs-comment"># Generate new tokens using the language model decoder</span>
generated_tokens = self.decoder.generate(idx, image_embeds, max_new_tokens)
<span class="hljs-keyword">return</span> generated_tokens
</code></pre>
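<p>As a quick sanity check, the model can be instantiated roughly as below. This is only a sketch: it assumes the <code>ViT</code>, <code>MultiHeadAttention</code> and <code>MultiModalProjector</code> definitions from the seemore repo/notebook are already in scope, and the hyperparameter values are illustrative rather than the ones used in the notebook.</p>
<pre><code class="language-python">import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
dropout = 0.1  # module-level constant read by the Expert MLPs above

model = VisionMoELanguageModel(
    n_embd=128,            # decoder embedding dim
    image_embed_dim=128,   # ViT output dim (must be divisible by num_heads)
    vocab_size=1000,       # character-level vocab size
    n_layer=4,             # number of sparse MoE transformer blocks
    img_size=96, patch_size=16,
    num_heads=8, num_blks=3,
    emb_dropout=0.1, blk_dropout=0.1,
    num_experts=8, top_k=2,
).to(device)

img_batch = torch.randn(4, 3, 96, 96).to(device)       # dummy images
idx = torch.randint(0, 1000, (4, 32)).to(device)        # dummy token indices
logits = model(img_batch, idx)                           # shape (4, 33, 1000): +1 position for the visual token
</code></pre>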
<p>Now back to where we started. The above VisionMoELanguageModel class neatly wraps up all the components we set out to put together.</p>
<img alt="seemore" height="500" src="https://github.com/AviSoori1x/seemore/blob/main/images/moevlm.png?raw=true" width="500"/>
<p> The training loop is exactly as you see in the original vision language blog linked at the top. Please check out seeMoE_from_Scratch.ipynb in the following repo: <a href="https://github.com/AviSoori1x/seemore" rel="nofollow">https://github.com/AviSoori1x/seemore</a></p>
<p>PS: There are newer approaches such as mixed-modal early-fusion models e.g. <a href="https://arxiv.org/abs/2405.09818" rel="nofollow">https://arxiv.org/abs/2405.09818</a>. I plan to implement a simplistic version of this in the future. </p>
<p>Thanks for reading!</p>
|
Shape Rotation 101: An Intro to Einsum and Jax Transformers | https://hf.co/blog/dejavucoder/einsum | Attention block |
Open-source embeddings and LLMs outperform Gemini and OpenAI for Web Navigation while being faster and cheaper | https://hf.co/blog/dhuynh95/evaluating-open-source-and-closed-models | Conclusion |
Recommendation to Revisit the Diffuser Default LoRA Parameters | https://hf.co/blog/alvdansen/revisit-diffusers-default-params |
<p>
Over the last year I have trained hundreds of LoRA finetunes with SDXL, and in the short time that I've spent back in the consulting space, I have tested with over a dozen startup apps that offer finetuning services on their platforms. I have seen, very consistently, the same general quality results from these training programs. </p>
<ul>
<li>If the style is generic, loud, and in particular 3D, the style generally looks acceptable to the average viewer.</li>
<li>Concepts in the original dataset are nearly always slightly overfit, and sometimes incredibly overfit, appearing in most images to a greater or lesser degree.</li>
<li>Minimalist styles, realistic photography styles, and human-created datasets start to fall apart.</li>
<li>Degradation is very obvious when you zoom in on the edges of many images.</li>
</ul>
<p>Some examples:
In this example you can see the level of fidelity; however, the prompt was "a small boy" - although this character was in a reasonably sized dataset, the concept became overfit before the fidelity of the model itself degraded. This indicates to me that the training was too fast.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/7hsSK6Y2RJyAlRr2CYBoO.webp" rel="nofollow"><img alt="image/webp" src="https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/7hsSK6Y2RJyAlRr2CYBoO.webp"/></a></p>
<p>Alternatively, this image exhibits not just slight concept overfitting but also visible line degradation. This can be hard to discern, because early overfit line degradation looks similar to underfitting. The key is whether concepts also look overfit, which would not be the case in a truly underfit model.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/OXY60Oyl7mej31e1468SD.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/OXY60Oyl7mej31e1468SD.png"/></a></p>
<p>Additionally, you can see in this example of realism that the more nuanced details which make realism convincing were not learned, despite broader concepts being understood. I speculate this is another result of training that is too fast and strong.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/U_0-2uevCvMbbLp-LU9Ba.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/U_0-2uevCvMbbLp-LU9Ba.png"/></a></p>
<p>I was initially puzzled - why would training across the space so consistently have the same set of problems, particularly when, in my experience, those problems are generally avoidable? So I started asking questions. I was lucky enough to start training SDXL using The Last Ben's runpod notebook, which has very reasonable suggested settings. However, I remembered that I had been presented with the Diffusers presets during early tests and had originally tried to work with them. So I recently started asking founders: are you working from the Diffusers presets?</p>
<p>The answer was, of course, yes.</p>
<p><a href="https://huggingface.co/docs/diffusers/en/training/lora">For context, here are the presets from the Diffusers LoRA blog.</a></p>
<p>In my experience, these presets produce fast results, but the results are inferior. However, the solution is a simple one. In my opinion, the Unet and Text Encoder learning rates are set much too high. This leads to very fast training, and unless one is training a somewhat generic concept whose dataset has been through a VAE before, I find that the results are often a complete failure. Even in the instances where the overall style is captured, I consistently see issues with fidelity, prompt coherence, and overfitting of concepts. Additionally, because the learning rate is much higher, there is a natural need to reduce the overall training steps. I find this also forces quick learning that does not result in a model I personally would be happy with.</p>
<p>From a practical standpoint, it is unrealistic for a startup to test every training edge case. Additionally, AI datasets are much easier to curate, so in my experience, many startups are relying on them for at least part of their training process. Diffusers also is an important resource, so it seems natural to rely on their presets as a good jumping-off point. In my opinion, this is leading to widespread adoption of poor training practices.</p>
<p>(As a side note, the presets also set 512 as the image resolution for training, when it should almost certainly be 1024 for SDXL.)</p>
<p>Here is my suggestion for a revised preset:</p>
<pre><code>resolution: 1024
train batch size: 4
max training steps: [# of images x 250]*
Unet LR: 5e-5
Text Encoder LR: 1e-5
</code></pre>
<p>To preserve complex details, I will often raise the overall step count and reduce the learning rate incrementally. I typically will not go lower than 9e-7 and keep the text encoder at a lower rate than the Unet. I find subjects need more focused text encoder training than styles.</p>
<ul>
<li>It may not serve everyone; however, I think it is actually imperative to save checkpoints during training at 60, 90, 120, 150, 180, and 210 steps per image in the dataset, or at other similarly spaced intervals (a small helper for computing these numbers is sketched below). I think it is unrealistic to expect every dataset to need the same step count, even within the same style, and checkpoints also give users a sense of control over the final results. If you cannot do this, you may find that stopping closer to 150 steps per image is better in many cases.</li>
</ul>
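<p>For convenience, these rules of thumb can be turned into concrete numbers with a tiny helper like the one below. It is just arithmetic on the presets above (steps = images × steps per image); the function name and defaults are mine, not part of any training script.</p>
<pre><code class="language-python">def lora_training_plan(num_images, steps_per_image=250,
                       checkpoint_marks=(60, 90, 120, 150, 180, 210)):
    """Rule-of-thumb step counts for an SDXL LoRA run on `num_images` images."""
    max_train_steps = num_images * steps_per_image
    checkpoint_steps = [num_images * m for m in checkpoint_marks]
    return {"max_train_steps": max_train_steps, "checkpoint_steps": checkpoint_steps}

# e.g. a 20-image style dataset:
print(lora_training_plan(20))
# {'max_train_steps': 5000, 'checkpoint_steps': [1200, 1800, 2400, 3000, 3600, 4200]}
</code></pre>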
<p>Positive Examples:</p>
<p>In this example you can see that the linework is very clean and, although digital, does not have the hallmarks of the melty effect that obviously AI-generated linework can have when it is slightly overfit.
<a href="https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/h8rhG3WHo2830KUxoq71W.jpeg" rel="nofollow"><img alt="image/jpeg" src="https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/h8rhG3WHo2830KUxoq71W.jpeg"/></a></p>
<p>Not only is the concept not overfit, this image also clearly shows the level of fidelity that is achievable by slowing down a training run.
<a href="https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/3bQoP1gMmyIXGsxMOwEiu.webp" rel="nofollow"><img alt="image/webp" src="https://cdn-uploads.huggingface.co/production/uploads/635dd6cd4fabde0df74aeae6/3bQoP1gMmyIXGsxMOwEiu.webp"/></a></p>
<p>In conclusion, I believe it would benefit the whole community if the default Diffusers parameters were revisited. I hope this helps, and I would certainly love to hear what results others get with my approach. I don't think my method is definitive; however, I think adoption of more reasonable presets would lead to better results throughout the space, and Diffusers has a responsibility to suggest a preset that leads to more successes.</p>
|
Introducing Synthetic Data Workshop: Your Gateway to Easy Synthetic Dataset Creation | https://hf.co/blog/davanstrien/synthetic-data-workshop | Next steps |
Extracting Concepts from LLMs: Anthropic’s recent discoveries 📖 | https://hf.co/blog/m-ric/extracting-concepts-from-llms | 4+. Moving forward |
Enhancing Image Model Dreambooth Training Through Effective Captioning: Key Observations | https://hf.co/blog/alvdansen/enhancing-lora-training-through-effective-captions |
<p>
In the realm of Dreambooth and LoRA training, especially when fine-tuning models for SDXL, the nuances of how you approach captioning can significantly impact the model's performance. Here are five key observations based on my experiences that can guide you in optimizing your training data for more precise and desirable outcomes.</p>
<p><strong>Observation #1: The Purpose of Captions in Training Data</strong></p>
<p>When fine-tuning a model that already has a robust baseline of knowledge, the role of captions is crucial. Captions reinforce the intended learning by:</p>
<p>Naming specific aspects of the image intended for training.</p>
<p>Assigning unique associations to words, enhancing the model's understanding and recall.</p>
<p><strong>Observation #2: Frequency is Important</strong></p>
<p>The frequency with which a word or phrase appears across multiple captions in the dataset signals its importance. Consistent use of a term emphasizes its relationship with the concept you're training. Conversely, the absence of that word in future prompts might lead to features associated with it not appearing, affecting the model's output.</p>
<p><strong>Observation #3: Don't Describe Everything</strong></p>
<p>Describing every element in an image doesn't necessarily improve the model. Effective captioning involves:</p>
<p>Naming elements that aren’t directly tied to the concept or style being trained, especially if these words don’t repeat throughout your captions.</p>
<p>Repeating words or phrases that you want to use in future prompts to elicit specific results.</p>
<p><strong>Observation #4: Use Different Formats</strong></p>
<p>A mix of different caption formats tends to yield the best results. These include:</p>
<p>Narrative style: "portrait of a girl with a red hat and blue sunglasses."</p>
<p>List style: "girl, red hat, blue sunglasses, portrait."</p>
<p>Simple: "girl."</p>
<p>Additionally, including a unique token can strengthen the concept and facilitate easier recall.</p>
<p><strong>Observation #5: Don’t Name the Style</strong></p>
<p>Naming the style, such as "illustration" or "photography," is generally useful only if you aim to train a character without tying it to a specific style. Otherwise, naming the style can dilute the training, leading to less satisfactory results. Styles already have extensive context within the AI model, and trying to change this context can be challenging, especially with models like SDXL.</p>
<p>By strategically using captions, varying formats, and considering the frequency and specificity of terms, you can significantly enhance the performance and accuracy of your LoRA model.</p>
|
Unveiling CIVICS: A New Dataset for Examining Cultural Values in Language Models | https://hf.co/blog/giadap/civics | Future Directions |
Introducing the Ultimate SEC LLM: Revolutionizing Financial Insights with Llama-3-70B | https://hf.co/blog/Crystalcareai/llama-3-sec | References |
Train a Terrible Tic-Tac-Toe AI | https://hf.co/blog/nroggendorff/ttt-ai | This is a dumb project, and it won't work |
Thoughts on LoRA Training Pt 2: Where to Train | https://hf.co/blog/alvdansen/thoughts-on-lora-training-pt-2-training-services |
<p>
This is a pretty quick follow up, but there were some immediate "where do I start" questions I want to answer.</p>
<p>First and foremost, if you have <strong>never</strong> trained a LoRA before, start somewhere that has presets - as in a notebook or platform that has preset training parameter values. I beg of you. </p>
<p>The reason I insist on that is - if you go in and adjust settings before you know whether you understand how to curate a dataset and captions, then you will never know if the issue is your data or your parameters. It will be a nightmare to diagnose and very discouraging. Also, most of these platforms or default settings aren't arbitrary - they've worked for <em>someone</em> at some point, so presumably you should be able to get them to work!</p>
<p>So without further hesitation, here are a list of tools I've used and some I haven't but I've seen good results from.</p>
<p><strong>OPEN SOURCE</strong></p>
<p>I think open source can be the way to go - it is certainly how I started - and it just depends on your willingness to sacrifice time for utility in many cases.</p>
<p><a href="https://github.com/TheLastBen/fast-stable-diffusion" rel="nofollow">The Last Ben Runpod</a></p>
<p>The simplest option, and TLB is fantastic. Really, if you want to just not overthink things and get started, use his runpod. I recommend keeping your captions super short, with a <strong>unique token</strong> + <strong>a couple of words of description</strong> format. Even that is optional; honestly, Ben has always recommended just using the unique token. Either works.</p>
<p>The only challenge of just using the unique token is that you will need to ensure you have a balanced dataset. As in - enough images but not too many (10-30), no doubles, consistency of the concepts you want to keep and variability of the things you don't want the model to learn.</p>
<p><a href="https://github.com/bmaltais/kohya_ss" rel="nofollow">Kohya</a></p>
<p>Kohya is also great - I just hesitate to put it in front of new people because it allows you to adjust so much. I feel like that can be a recipe for disaster. But, ultimately, a gold standard.</p>
<p><strong>PAID SERVICES</strong></p>
<p>Truth be told, I haven't met a silver bullet platform, but here are all the ones I am aware of with notes on whether I've used them.</p>
<p><strong>Scenario</strong> - I actually helped develop the training presets at Scenario, so I quite like it. I believe it is still unlimited trainings for account holders, however there isn't a way to export your models which I find really challenging. I usually just use the presets.</p>
<p><strong>CivitAI</strong> - I don't love the vibe of Civit, but I would be misleading you if I said their training doesn't work. I even trust their crazy autocaptions. Typically I do 20 epochs with 17 repeats and Clip Skip of 2.</p>
<p><strong>Pimento</strong> - I am pretty impressed with the presets they have for illustration on Pimento. I haven't dug into it too deep, but ultimately I think it has a lot of promise as a service.</p>
<p><strong>Leonardo</strong> - I haven't spent a ton of time using Leonardo for training, but I wasn't wholly disappointed when I did. I do think their finetuning team makes amazing things and would love to see their training updated more to reflect that.</p>
<p><strong>Layer</strong> - I haven't used their service much myself, but I have heard people get really strong results.</p>
<p><strong>EverArt</strong> - For people who want a really really simple setup without too many changes, I think EverArt shows a lot of promise.</p>
<p><strong>Astria</strong> - I actually trained my first time on Astria in 2022, and I think they have the consistent character/object set up really well established. I haven't revisited it yet but it is on my list.</p>
<p>So - hopefully this list helps you!</p>
|
Thoughts on LoRA Training #1 | https://hf.co/blog/alvdansen/thoughts-on-lora-training-1 |
<p>
I talk to many people about training LoRAs, from a variety of backgrounds. Some are very new to it, while others are well-established with impressive model portfolios. I aim to make this a series of posts, and possibly an article, discussing my thoughts on LoRA training and my suggestions.</p>
<p>For my part, I would consider myself a curation-driven and artistically driven finetuner. The technical specs that concern me the most are the ones that start with the dataset: how the concepts in the dataset relate to each other and to the text captions, and how that functions within the context of the model being finetuned. So, my focus on the technical specs of the parameters is very utilitarian - I want parameters that work well enough that I can leverage the dataset and visual information to achieve strong results. As evidenced by my elementary use of syntax in this article.</p>
<p>One of the biggest issues I often observe is people trying to do too much all at once. By this, I mean coming into a training situation without any previous fine-tuning or art experience, and attempting to adjust all the parameters and tweak settings simultaneously.</p>
<p>I have seen different fine-tuning parameters achieve equally impressive results. The difference often lies more in the quality of the dataset and the accompanying text captions, and how the decisions you make about these two elements relate to the parameters you are using.</p>
<p>Here are some of my rules of thumb, which I have used on various training setups with generally good results:</p>
<ul>
<li><p>I use 20-30 images for style training and 10-20 images for a character.</p>
</li>
<li><p>I use a mix of captions: about 1/3 in a narrative sentence structure, 1/3 as a long list of attributes seen in the related image, and 1/3 as a single word. I have done this both with and without a unique token, but I prefer including a unique token in case I want to add extra weight.</p>
</li>
<li><p>For datasets that are primarily AI-generated, I find it takes less time to train.</p>
</li>
<li><p>Illustrated datasets that are unique or minimalistic take more time to train, regardless of whether they are AI-generated.</p>
</li>
<li><p>Handmade/human-origin datasets take longer to train.</p>
</li>
<li><p>For SDXL, some of the attributes seen in overfitting also appear in underfitting (loss of fidelity, linework degradation, etc.). However, I find it easier to confirm underfitting by shortening my training and rerunning it. If the style breaks and becomes more generic, it is underfit. If it resolves, it was overfit.</p>
</li>
</ul>
<p>These are my general thoughts. I am not the best at planning out this kind of content for myself—ironically, as most of my work involves creating similar content for companies—but I will keep trying to expand on this with examples.</p>
<p>You can find a follow up on where to train here: <a href="https://huggingface.co/blog/alvdansen/thoughts-on-lora-training-pt-2-training-services">https://huggingface.co/blog/alvdansen/thoughts-on-lora-training-pt-2-training-services</a></p>
|
MobileNet-V4 (now in timm) | https://hf.co/blog/rwightman/mobilenetv4 | PyTorch Implementation |
Against mixing environment setup with code | https://hf.co/blog/ucheog/separate-env-setup-from-code | Use python-dotenv [say what?!] |
SwanLab and Transformers: Power Up Your NLP Experiments | https://hf.co/blog/Andyrasika/swanlab-transformers | Conclusion |
CryptGPT: Privacy-Preserving Language Models Using Vigenere Cipher (Part 1) | https://hf.co/blog/diwank/cryptgpt-part1 | A Challenge for Cryptanalysts and LLM Researchers |
The CVPR Survival Guide: Discovering Research That's Interesting to YOU! | https://hf.co/blog/harpreetsahota/cvpr2024-survival-guide | 🔊 Now, let's check all this out in the app! Turn your audio on because I'll explain what I'm doing! |
Uncensor any LLM with abliteration | https://hf.co/blog/mlabonne/abliteration | References |
Low Latency CPU Based Educational Value Classifier With Generic Educational Value | https://hf.co/blog/kenhktsui/edu-value-classifier-cpu | Citation |
An Optimal Lossy Variant of Speculative Decoding | https://hf.co/blog/vivien/optimal-lossy-variant-of-speculative-decoding | Conclusion and further work |
Reports on the Hub: A First Look at Self-governance in Open Source AI Development | https://hf.co/blog/frimelle/self-governance-open-source-ai |
<p>
<img alt="" src="https://cdn-uploads.huggingface.co/production/uploads/6531310497d7f1b4a083de7b/mWAsYQK9yJPaQRY86ijLc.png" style="display: block; margin: auto;"/></p>
<p>Hugging Face has a unique position as the most widely used open-source platform for AI models. As in many open-source projects, one of the invaluable contributions of the community is maintenance. At Hugging Face, this work includes reporting issues with models and datasets, clarifying problems with the uploaders, and helping to resolve these discussions.</p>
<p>In open source software development, "<a href="https://en.wikipedia.org/wiki/Linus%27s_law" rel="nofollow">given enough eyeballs, all bugs are shallow</a>”. At Hugging Face and in open source model development, given enough eyeballs, ML models can become good, adjusted to the needs of the different communities, and less prone to unintentional mistakes. </p>
<p>When looking at the reports the community creates, we find interesting insights into the self-governance of the Hugging Face community. While reports are a subset of <a href="https://huggingface.co/docs/hub/repositories-pull-requests-discussions">discussions and pull requests</a>, they focus on non-technical issues, i.e., the model may work as intended, but the report raises ethical, legal, or other concerns.</p>
<p><img alt="" src="https://cdn-uploads.huggingface.co/production/uploads/6531310497d7f1b4a083de7b/26-uVSuLjweZuG3Kka135.png" style="display: block; margin: auto;" width="500"/></p>
<p style="text-align: center;"><i>The reporting interface for a dataset. This creates a public report, which can be found in the community tab.<br/>Reports are marked there with the 🚩 reports flag, they are a subset of the discussions and pull requests.</i></p>
<p><img alt="" src="https://cdn-uploads.huggingface.co/production/uploads/6531310497d7f1b4a083de7b/DhZHhfVtNAInqGeUfEFWu.png" style="display: block; margin: auto;" width="600"/></p>
<p style="text-align: center;"><i>Centering the community, the community tab exists along the dataset/model documentation and files.</i></p>
<p>Many parts of the hub are <a href="https://huggingface.co/docs/huggingface_hub/v0.17.0.rc0/en/guides/community">accessible through the API</a>, including the discussions and reports on the community tab. For this preliminary investigation of reports on the hub, all model and dataset repos are listed and their discussions are filtered by the 🚩 reports flag, to find all reports opened by the community. This information is publicly accessible and forms the basis for further investigation of community governance, interaction, and self-organisation. Currently, there are a total of 565 reports (both open and closed) across models and datasets. Given the large number of public model and dataset repos (774,384 at the time of this analysis), the number of reports is relatively low. </p>
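<p>For readers who want to poke at this themselves, the collection step can be sketched with the <code>huggingface_hub</code> library roughly as below. This is an illustrative sketch, not the exact script used for the numbers above: here reports are identified heuristically by the 🚩 marker in the discussion title, and only a small sample of repos is scanned.</p>
<pre><code class="language-python">from huggingface_hub import list_models, get_repo_discussions

reports = []
# Illustrative: only scan a small sample of model repos; the full analysis covers all public repos.
for model in list_models(limit=100):
    try:
        for discussion in get_repo_discussions(repo_id=model.id, repo_type="model"):
            # Heuristic: reports show up in the community tab with a 🚩 flag in the title.
            if not discussion.is_pull_request and "🚩" in discussion.title:
                reports.append((model.id, discussion.num, discussion.title, discussion.status))
    except Exception:
        # Some repos are gated or have discussions disabled; skip them.
        continue

print(f"Found {len(reports)} report discussions in this sample")
</code></pre>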
<p>In the reports pertaining to model repositories, among everyone opening, commenting on, and contributing to these reports, only 4% of users have a Hugging Face affiliation, i.e., <strong>96% of users interacting with model reports are part of the larger community</strong>.</p>
<p>Across the 436 reports in model repos and 129 in dataset repos, a majority are closed by a member of the Hugging Face community, i.e., not an employee, which shows the community working together. Many reports do not need intervention by Hugging Face; they are addressed and resolved by the repo owner, i.e., the person uploading a model or dataset, or by another member of the Hugging Face community.</p>
<p><img alt="" src="https://cdn-uploads.huggingface.co/production/uploads/6531310497d7f1b4a083de7b/9FaxpGsV7HK-ApYnLqwzx.png" style="display: block; margin: auto;" width="600"/></p>
<p style="text-align: center;"><i>Overview of who closes reports in model and dataset repos. A majority of reports in both repo types are closed by community members, not Hugging Face staff.</i></p>
<p>The topics of the reports, which the community closes themselves, vary and display the wide range of discussion topics that derive from an active open source ML community. </p>
<p><img alt="" src="https://cdn-uploads.huggingface.co/production/uploads/6531310497d7f1b4a083de7b/OXawuXs8Nrpo31tnRxYnp.png" style="display: block; margin: auto;" width="700"/></p>
<p style="text-align: center;"><i>Topics of reports that were closed by the community, removed reports that have very short descriptions (< 3 words) or are hidden.</i></p>
<p>A good example of the community leveraging the technical capabilities of the platform is the <a href="https://huggingface.co/docs/hub/en/model-cards#how-can-i-indicate-that-my-model-is-not-suitable-for-all-audiences">NFAA tag</a>. Hugging Face has a focus on supporting model and dataset creators to extensively and clearly document their models and datasets, including adding tags for content that is not appropriate for all audiences (NFAA). When those tags are missing, community members point them out to each other (from the reports: “<i>Not For All Audiences; Please add NFAA Repository Label</i>”) and model owners are prompt in implementing the suggestion (answer to the same report: “<i>Sorry, added</i>”).</p>
<p>As in many open source projects, there are a few dominant actors who do a lot of the maintenance work, alongside many one-time contributors, which ensures a big-picture perspective from different angles [<a href="https://arxiv.org/abs/2405.13058" rel="nofollow">Osborne et al. 2024</a>]. This phenomenon is easy to see in the network of users below: users who appear only once interact on a single issue, while there are clusters of users who interact more frequently (and a few clusters of discussions involving multiple users). </p>
<p><img alt="" src="https://cdn-uploads.huggingface.co/production/uploads/6531310497d7f1b4a083de7b/mizzctRozqe4saTngrGDR.png" style="display: block; margin: auto;" width="600"/></p>
<p style="text-align: center;"><i>Network of users commenting on the same issues, where orange are users with Hugging Face affiliation, and light blue are other users.</i></p>
<p>As the community grows, self-governance becomes essential to maintaining a vibrant environment for developing innovative machine learning and ensuring diverse voices are heard. The current trajectory of self-governance on the hub is promising and holds exciting potential for the future of open-source machine learning.</p>
|
Building a Vision Mixture-of-Expert Model from several fine-tuned Phi-3-Vision Models | https://hf.co/blog/mjbuehler/phi-3-vision-cephalo-moe | Citation |
Running Large Multimodal Models on an AI PC's NPU | https://hf.co/blog/bconsolvo/llava-gemma-2b-aipc-npu | Conclusions and calls to action |
Saving Memory Using Padding-Free Transformer Layers during Finetuning | https://hf.co/blog/mayank-mishra/padding-free-transformer | References |
An Analysis of Chinese LLM Censorship and Bias with Qwen 2 Instruct | https://hf.co/blog/leonardlin/chinese-llm-censorship-analysis | Recommendations |
Aligning Large Language Models with BRAIn | https://hf.co/blog/gauravpandey1/brain | Experimental results <a name="experimental" rel="nofollow"></a> |
What CI/CD practitioners know that ML engineers don’t… yet | https://hf.co/blog/Manialgie/what-cicd-practitioners-know-that-ml-engineers-don | TL;DR |
BrAIn: next generation neurons? | https://hf.co/blog/as-cle-bert/brain-next-generation-neurons | References |
Training an Object Detection Model with AutoTrain | https://hf.co/blog/abhishek/object-detection-autotrain | Conclusion |
Orchestrating Small Language Models (SLM) using JavaScript and the Hugging Face Inference API | https://hf.co/blog/rrg92/orchestrating-small-llms-javascript-inference-api | Other Endpoints |
Orquestrando Small Language Models (SLM) usando JavaScript e a API de Inferência do Hugging Face | https://hf.co/blog/rrg92/orquestrando-small-llms-javascript-api-inferencia | Demais endpoints |
Announcing Occiglot-Fineweb | https://hf.co/blog/malteos/occiglot-fineweb | Insights and Next steps |
🦙⚗️ Using Llama3 and distilabel to build fine-tuning datasets | https://hf.co/blog/dvilasuero/synthetic-data-with-llama3-distilabel | Full pipeline code |
Fine-tune and deploy open LLMs as containers using AIKit - Part 1: Running on a local machine | https://hf.co/blog/sozercan/finetune-deploy-aikit-part1 | 📚 Additional Resources |
Virtual Try-On using IP-Adapter Inpainting | https://hf.co/blog/tonyassi/virtual-try-on-ip-adapter | About Me |
LLM数据工程3——数据收集魔法:获取顶级训练数据的方法 | https://hf.co/blog/JessyTsu1/data-collect-zh | 数据版本控制 |
LLM Data Engineering 3——Data Collection Magic: Acquiring Top Training Data | https://hf.co/blog/JessyTsu1/data-collect | Data Version Control |
I ran 580 model-dataset experiments to show that, even if you try very hard, it is almost impossible to know that a model is degrading just by looking at data drift results | https://hf.co/blog/santiviquez/data-drift-estimate-model-performance |
<p>
In my opinion, data drift detection methods are very useful when we want to understand what went wrong with a model, but they are not the right tools to know how my model's performance is doing.</p>
<p>Essentially, using data drift as a proxy for performance monitoring is not a great idea.</p>
<p>I wanted to prove that by giving data drift methods a second chance and trying to get the most out of them. I built a technique that relies on drift signals to estimate model performance and compared its results against the current SoTA performance estimation methods (<a href="https://arxiv.org/abs/2401.08348" rel="nofollow">PAPE [arxiv link]</a> and <a href="https://nannyml.readthedocs.io/en/stable/how_it_works/performance_estimation.html#confidence-based-performance-estimation-cbpe" rel="nofollow">CBPE [docs link]</a>) to see which technique performs best.</p>
<p>To effectively compare data drift signals against performance estimation methods, I used an evaluation framework that emulates a typical production ML model and ran multiple dataset-model experiments.</p>
<p>As for the data, I used datasets from the <a href="https://github.com/socialfoundations/folktables" rel="nofollow">Folktables package</a>. (Folktables preprocesses US census data to create a set of binary classification problems.) To make sure the results are not biased by the nature of the model, I trained different types of models (linear, ensemble boosting) for multiple prediction tasks included in Folktables.</p>
<p>Then, I built a technique that relies on drift signals to estimate model performance. This method uses univariate and multivariate data drift information as features of a DriftSignal model to estimate the performance of the model we monitor. It works as follows:</p>
<ol>
<li>Fit univariate/multivariate drift detection calculator on reference data (test set).</li>
<li>Take the fitted calculators to measure the observed drift in the production set. For univariate drift detection methods, we use Jensen Shannon, Kolmogorov-Smirnov, and Chi2 distance metrics/tests. Meanwhile, we use the <a href="https://nannyml.readthedocs.io/en/stable/how_it_works/multivariate_drift.html#data-reconstruction-with-pca" rel="nofollow">PCA Reconstruction Error</a> and <a href="https://nannyml.readthedocs.io/en/stable/how_it_works/multivariate_drift.html#domain-classifier" rel="nofollow">Domain Classifier</a> for multivariate methods.</li>
<li>Build a DriftSignal model that trains a regression algorithm using the drift results from the reference period as features and the monitored model performance as a target.</li>
<li>Estimate the performance of the monitored model on the production set using the trained DriftSignal model.</li>
</ol>
<p>You can find the full implementation of this method in this <a href="https://gist.github.com/santiviquez/aa224c6e232c8bd2534893888981564d" rel="nofollow">GitHub Gist</a>.</p>
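<p>The gist linked above has the full implementation; the condensed sketch below is only meant to show the shape of the idea. It assumes numpy, scipy and scikit-learn, uses a single univariate signal (per-feature Jensen-Shannon distance) as input to a random forest regressor, and leaves out the multivariate signals, chunking logic, and the NannyML calculators used in the actual experiments; the variable names are mine.</p>
<pre><code class="language-python">import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.ensemble import RandomForestRegressor

def js_drift_features(reference, chunk, n_bins=10):
    """Per-feature Jensen-Shannon distance between the reference set and a data chunk."""
    feats = []
    for j in range(reference.shape[1]):
        bins = np.histogram_bin_edges(reference[:, j], bins=n_bins)
        p, _ = np.histogram(reference[:, j], bins=bins)
        q, _ = np.histogram(chunk[:, j], bins=bins)
        feats.append(jensenshannon(p + 1e-9, q + 1e-9))  # jensenshannon normalizes the counts internally
    return np.array(feats)

def fit_drift_signal_model(X_reference, reference_chunks, reference_performance):
    """Step 3: drift signals on reference chunks as features, realized roc_auc as target."""
    features = np.stack([js_drift_features(X_reference, chunk) for chunk in reference_chunks])
    return RandomForestRegressor(random_state=42).fit(features, reference_performance)

def estimate_performance(drift_signal_model, X_reference, production_chunks):
    """Step 4: estimated roc_auc for each production chunk."""
    features = np.stack([js_drift_features(X_reference, chunk) for chunk in production_chunks])
    return drift_signal_model.predict(features)
</code></pre>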
<p>Then, for evaluation, I used a modified version of MAE because I needed an aggregated version that takes into consideration the standard deviation of the errors. To account for this, I scale absolute/squared errors by the standard error (SE) calculated for each evaluation case. We call the SE-scaled metrics <strong>mean absolute standard error (MASTE)</strong>.</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th align="center"><a href="https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/WzVQMeFvDXsp7S8VpOTl7.webp" rel="nofollow"><img alt="image/webp" src="https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/WzVQMeFvDXsp7S8VpOTl7.webp"/></a></th>
</tr>
</thead><tbody><tr>
<td align="center"><em>MASTE formula</em></td>
</tr>
</tbody>
</table>
</div>
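<p>Based purely on the description above (the exact definition is in the formula image), MASTE amounts to a mean absolute error in which each error is first scaled by the standard error of its evaluation case, roughly:</p>
<pre><code class="language-python">import numpy as np

def maste(realized, estimated, standard_errors):
    """Mean absolute standard error: absolute errors scaled by the per-case standard error (sketch)."""
    realized, estimated, standard_errors = map(np.asarray, (realized, estimated, standard_errors))
    return np.mean(np.abs(realized - estimated) / standard_errors)
</code></pre>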
<p>Then it was a matter of running all 580 experiments and collecting the results.</p>
<p>Since each performance estimation method is trying to estimate the roc_auc of the monitored model, I report the MASTE between the estimated and realized roc_auc.</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th align="center"><a href="https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/oN_W8e9rAMyelY84ItE0e.webp" rel="nofollow"><img alt="image/webp" src="https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/oN_W8e9rAMyelY84ItE0e.webp"/></a></th>
</tr>
</thead><tbody><tr>
<td align="center"><em>The methods involved in the analysis are trying to estimate the AUC-ROC of the monitored models. So, after the experiments, we ended up with the estimated and realized AUC-ROC. To compare the error between both, we compute the MASTE (modification of MAE) to evaluate which performance estimation method worked best. That is what this table is reporting.</em></td>
</tr>
</tbody>
</table>
</div>
<p>PAPE seems to be the most accurate method, followed by CBPE. Surprisingly, constant test set performance is the third best. This is closely followed by random forest versions of univariate and multivariate drift signal models.</p>
<p>This plot shows the quality of performance estimation among different methods, including PAPE and CBPE.</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th align="center"><a href="https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/OvqnLtlYwNAXQIiX0bUJN.webp" rel="nofollow"><img alt="image/webp" src="https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/OvqnLtlYwNAXQIiX0bUJN.webp"/></a></th>
</tr>
</thead><tbody><tr>
<td align="center"><em>Quality of performance estimation (MASTE of roc_auc) vs absolute performance change (SE). (The lower, the better).</em></td>
</tr>
</tbody>
</table>
</div>
<p>Here is a specific time series plot of a model's realized ROC AUC (black) compared against all the performance estimation methods. PAPE (red) accurately estimates the direction of the most significant performance change and closely approximates the magnitude.</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th align="center"><a href="https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/1-tRyEBEIj2-TBoQonETC.webp" rel="nofollow"><img alt="image/webp" src="https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/1-tRyEBEIj2-TBoQonETC.webp"/></a></th>
</tr>
</thead><tbody><tr>
<td align="center"><em>Time series plot of realized vs estimated roc_auc for dataset ACSIncome (California) and LigthGBM model.</em></td>
</tr>
</tbody>
</table>
</div>
<p>The experiments suggest that there are better tools for detecting performance degradation than data drift, even though I tried my best to extract all the meaningful information from drift signals to create an accurate performance estimation method.</p>
<p>There are better tools for quantifying the impact of data drift on model performance. So, I hope this helps the industry realize that monitoring fine-grained metrics leads to nothing and that a change in an obscure feature might not mean anything. It is better to first estimate model performance and then, if it drops, review data drift results but not the other way around.</p>
<p>Full experiment set up, datasets, models, benchmarking methods, and the code used in the project can be found in this <a href="https://www.nannyml.com/blog/data-drift-estimate-model-performance" rel="nofollow">longer post</a> that I wrote last week.</p>
|