I'm interested.
Rafael
aiisthebest
AI & ML interests
None yet
Organizations
aiisthebest's activity
Reacted to nicolay-r's post with 🔥 · 2 months ago
Seriously, we can't rely on Big5 or other unstructured descriptions to cover the diverse, large cast of characters found across many books. Instead, factorization plus open-psychometrics antonyms extracted from dialogues is the key to automatic character profiling that relies purely on book content. With that, I'm delighted to share more on this topic in a YouTube video:
https://youtu.be/UQQsXfZyjjc
From the video you will find out:
- How to perform book processing aimed at personality extraction
- How to impute personalities and a character network for deep learning
- How to evaluate advances / experimental findings
Additional materials:
- GitHub: https://github.com/nicolay-r/book-persona-retriever
- Paper: https://www.dropbox.com/scl/fi/0c2axh97hadolwphgu7it/rusnachenko2024personality.pdf?rlkey=g2yyzv01th2rjt4o1oky0q8zc&st=omssztha&dl=1
- Google Colab experiments: https://colab.research.google.com/github/nicolay-r/deep-book-processing/blob/master/parlai_gutenberg_experiments.ipynb
- Task: https://github.com/nicolay-r/parlai_bookchar_task/tree/master
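To make the idea concrete, here is a loose illustration rather than the author's actual pipeline: count how often each character's dialogue uses either pole of a few psychometric antonym pairs, then factorize the resulting character-by-axis matrix into compact profiles. The axes, characters, and counts below are invented for the sketch; the real project extracts them from book dialogues.

```python
# Rough sketch of the idea (not the author's code): score characters on
# psychometric antonym axes from their dialogue, then factorize the matrix.
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical antonym axes (negative pole, positive pole).
axes = [("timid", "bold"), ("cruel", "kind"), ("lazy", "diligent")]

# Hypothetical per-character dialogue term counts: {character: {term: count}}.
dialogue_counts = {
    "Alice": {"bold": 5, "kind": 7, "diligent": 2, "timid": 1},
    "Bob":   {"timid": 4, "cruel": 3, "lazy": 6, "kind": 1},
}

# Build a character x axis matrix: positive-pole count minus negative-pole count,
# shifted to the non-negative range so NMF applies.
names = sorted(dialogue_counts)
scores = np.array([
    [dialogue_counts[n].get(pos, 0) - dialogue_counts[n].get(neg, 0) for neg, pos in axes]
    for n in names
], dtype=float)
scores -= scores.min()

# Factorize into low-rank character profiles and axis loadings.
model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
profiles = model.fit_transform(scores)   # one compact profile per character
loadings = model.components_             # how each latent factor maps to axes

for name, profile in zip(names, profiles):
    print(name, np.round(profile, 2))
```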
Reacted to appvoid's post with 🔥 · 2 months ago
Meta just released a 1B-parameter model, and to honor it I released arco 2 just in time for the fine-tuners to tweak around. Enjoy these small, powerful language models!
meta-llama/Llama-3.2-1B
appvoid/arco-2
Amazing.
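For anyone who wants to poke at the 1B release, here is a minimal sketch of loading it with the transformers library. It assumes a recent transformers/torch install and access to the gated meta-llama checkpoint; the prompt is arbitrary, and appvoid/arco-2 can be swapped in the same way.

```python
# Minimal sketch: load and sample from a small causal LM with transformers.
# Assumes access to the gated meta-llama/Llama-3.2-1B checkpoint; swap in
# appvoid/arco-2 (or any other small model) the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Small language models are", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```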
Reacted to fdaudens's post with 🔥 · 2 months ago
A big day for multimodal models!
Llama 3.2 is out with a major update: it can now process images.
Key highlights:
β’ 11B and 90B vision models
β’ Small 1B and 3B text models for mobile devices
Eval results already on the leaderboard: open-llm-leaderboard/open_llm_leaderboard
Collection: meta-llama/llama-32-66f448ffc8c32f949b04c8cf
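As a rough illustration of trying one of the new vision checkpoints (not code from the post): the sketch below assumes a transformers release that already ships Mllama support, access to the gated meta-llama/Llama-3.2-11B-Vision-Instruct repo, and a placeholder local image path.

```python
# Rough sketch: ask a Llama 3.2 vision model about a local image.
# Assumes a transformers release with Mllama support and access to the
# gated meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder local image
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(output[0], skip_special_tokens=True))
```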
Reacted to MonsterMMORPG's post with ❤️ · 2 months ago
Detailed comparison of JoyCaption Alpha One vs. JoyCaption Pre-Alpha, with 10 amazing images in different styles. I think JoyCaption Alpha One is currently the very best image-captioning model for model training: it works very fast and requires as little as 8.5 GB of VRAM.
Where to download and install
You can download our app here: https://www.patreon.com/posts/110613301
One-click install on Windows, RunPod, and Massed Compute.
The official app, where you can try it out, is here: fancyfeast/joy-caption-alpha-one
The app has the following features:
- Auto-downloads meta-llama/Meta-Llama-3.1-8B into your Hugging Face cache folder and other necessary models into the installation folder
- 4-bit quantization: uses 8.5 GB VRAM in total (a loading sketch follows this list)
- Overwrite existing caption file
- Append new caption to existing caption
- Remove newlines from generated captions
- Cut off at last complete sentence
- Discard repeating sentences
- Don't save processed image
- Caption Prefix
- Caption Suffix
- Custom System Prompt (Optional)
- Input Folder for Batch Processing
- Output Folder for Batch Processing (Optional)
- Fully supported multi-GPU captioning: GPU IDs (comma-separated, e.g., 0,1,2)
- Batch Size for batch captioning
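The app itself ships as an installer, but the 8.5 GB figure is easiest to picture through 4-bit loading. Below is a generic transformers + bitsandbytes sketch for the underlying Meta-Llama-3.1-8B language model, not the JoyCaption app's actual code; the quoted total presumably also covers the image-encoder side, so treat the printed number as a rough check only.

```python
# Generic sketch of 4-bit loading with transformers + bitsandbytes
# (not the JoyCaption app's code). Assumes bitsandbytes is installed and
# you have access to the gated meta-llama/Meta-Llama-3.1-8B checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)

model_id = "meta-llama/Meta-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # places the quantized weights on the available GPU(s)
)
print(model.get_memory_footprint() / 1e9, "GB (weights only)")
```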
Reacted to davanstrien's post with ❤️ · 2 months ago
ColPali is revolutionizing multimodal retrieval, but could it be even more effective with domain-specific fine-tuning?
Check out my latest blog post, where I guide you through creating a ColPali fine-tuning dataset using Qwen/Qwen2-VL-7B-Instruct to generate queries for a collection of UFO documents sourced from the Internet Archive.
The post covers:
- Introduction to data for ColPali models
- Using Qwen2-VL for retrieval query generation
- Tips for better query generation
Check out the post here:
https://danielvanstrien.xyz/posts/post-with-code/colpali/2024-09-23-generate_colpali_dataset.html
The resulting Hugging Face dataset: davanstrien/ufo-ColPali
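As a rough sketch of the query-generation step described above (not the exact code from the blog post): it follows the usage pattern from the Qwen2-VL model card, assumes the qwen_vl_utils helper package is installed, and uses a placeholder page image and prompt.

```python
# Rough sketch: generate a retrieval query for one document page image
# with Qwen2-VL, following the usage pattern from the model card.
# The image path and prompt wording are placeholders.
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/ufo_page.png"},  # placeholder
        {"type": "text", "text": "Write a short search query a user might type to find this page."},
    ],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
query = processor.batch_decode(
    output[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(query)
```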
Reacted to dylanebert's post · 2 months ago
Keep track of the latest 3D releases with this Space:
dylanebert/research-tracker
Reacted to davidberenstein1957's post · 2 months ago
Exciting news: Argilla 2.2.0 is here!
We're thrilled to announce the release of Argilla 2.2.0, packed with powerful new features to enhance your data annotation and LLM workflows:
- ChatField: Work with text conversations natively in Argilla. Perfect for building datasets for conversational LLMs! (A rough SDK sketch follows below.)
- Adjustable task distribution: Modify settings on the fly and automatically recalculate completed and pending records.
- Progress tracking: Monitor annotation progress directly from the SDK, including user-specific metrics.
- Automatic settings inference: Importing datasets from the Hugging Face Hub just got easier with automatic settings detection.
- Task templates: Jump-start your projects with pre-built templates for common dataset types.
- Background jobs support: Improved performance for long-running tasks (requires Redis).
Upgrade now and supercharge your data workflows!
Check out our full changelog for more details: https://github.com/argilla-io/argilla/compare/v2.1.0...v2.2.0
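To get a feel for ChatField, here is a rough sketch of what defining a chat dataset with the Argilla 2.x Python SDK could look like. This is an approximation rather than code from the release notes: the client URL, dataset name, question, and messages are invented, and exact signatures may differ, so check the Argilla docs before relying on it.

```python
# Rough sketch of defining a dataset with the new ChatField via the
# Argilla 2.x Python SDK. Names, question, and messages are invented,
# and exact signatures may differ from the released API; check the docs.
import argilla as rg

client = rg.Argilla(api_url="http://localhost:6900", api_key="argilla.apikey")

settings = rg.Settings(
    fields=[rg.ChatField(name="conversation")],
    questions=[rg.RatingQuestion(name="quality", values=[1, 2, 3, 4, 5])],
)

dataset = rg.Dataset(name="chat-annotation-demo", settings=settings, client=client)
dataset.create()

# Log one record; ChatField takes a list of role/content messages.
dataset.records.log([
    {
        "conversation": [
            {"role": "user", "content": "What does ChatField store?"},
            {"role": "assistant", "content": "A list of chat messages with roles."},
        ]
    }
])
```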
Reacted to aaditya's post · 3 months ago
Last Week in Medical AI: Top Research Papers/Models (August 25 - August 31, 2024)
- MultiMed: Multimodal Medical Benchmark
- A Foundation model for generating chest X-ray images
- MEDSAGE: Medical Dialogue Summarization
- Knowledge Graphs for Radiology Report Generation
- Exploring Multi-modal LLMs for Chest X-ray
- Improving Clinical Note Generation
...
Check the full thread: https://x.com/OpenlifesciAI/status/1829984701324448051