Yogi

yo

AI & ML interests

None yet

Recent Activity


Organizations

Leafwatch

yo's activity

reacted to Draichi's post with 👀❤️ 27 days ago
🏁 Now it is possible to chat with telemetry data from real Formula 1 races!

This is an AI-powered tool for analyzing Formula 1 racing sessions and generating detailed reports on them. The project combines ReAct agents from LangChain with a RAG approach that pulls data from a SQL database.

At the core of this system is a text-to-SQL capability that allows users to ask natural language questions about various aspects of F1 races, such as driver performance, weather impact, race strategies, and more. The AI agent then queries the database, processes the information, and generates comprehensive reports tailored to the user's needs.

The reports can be exported in various formats, making it easy to share insights with team members, race fans, or the broader motorsports community.
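For context, the text-to-SQL flow described here could be sketched roughly as follows with LangChain's SQL agent; the database file, table contents, model choice, and question are illustrative assumptions, not the project's actual code.

```python
# Rough sketch of a text-to-SQL agent over an F1 telemetry database (assumed schema).
# Requires langchain-community, langchain-openai, and an OpenAI API key in the env.
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import create_sql_agent
from langchain_openai import ChatOpenAI

# Assumed local SQLite file holding telemetry tables (laps, weather, stints, ...).
db = SQLDatabase.from_uri("sqlite:///f1_telemetry.db")
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# The agent writes SQL from the natural-language question, runs it, and summarizes.
agent = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)

result = agent.invoke(
    {"input": "Which driver had the fastest average lap time in this session?"}
)
print(result["output"])
```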

(The project is in beta; some errors may occur.)

Check it out:

- Draichi/Formula1-race-debriefing
- https://github.com/Draichi/formula1-AI
reacted to TuringsSolutions's post with 👀😔 27 days ago
Are you familiar with the difference between discrete learning and predictive learning? This distinction is exactly why LLMs are not designed to execute function calls: they are not the right shape for it. LLMs are prediction machines, while function calling requires discrete learning machines. Fortunately, you can easily couple an LLM with a discrete learning algorithm; you simply need to know the math to do it. Want to dive deeper into this subject? Check out this video.

https://youtu.be/wBRem2p8iPM
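The post doesn't spell out the coupling, so here is one hedged reading of it: keep the LLM as a predictor that scores a finite set of tools, and let a discrete argmax layer turn those scores into an actual function call. Every name below (`TOOLS`, `llm_scores`, `call_function`) is hypothetical, and the scoring function is a stand-in so the sketch runs offline.

```python
# Hedged illustration: continuous predictions from an "LLM" + a discrete decision layer.
from typing import Callable, Dict, List

# The discrete action space: a fixed registry of callable tools.
TOOLS: Dict[str, Callable[[str], str]] = {
    "get_weather": lambda city: f"Weather for {city}: sunny",
    "get_time": lambda city: f"Local time in {city}: 12:00",
}

def llm_scores(prompt: str, options: List[str]) -> Dict[str, float]:
    """Stand-in for an LLM scoring each option (e.g. via token log-probabilities).
    Here it just keyword-matches so the example is self-contained."""
    return {opt: float(any(word in prompt.lower() for word in opt.split("_")))
            for opt in options}

def call_function(prompt: str, argument: str) -> str:
    scores = llm_scores(prompt, list(TOOLS))  # continuous predictions
    choice = max(scores, key=scores.get)      # discrete decision (argmax over the tool set)
    return TOOLS[choice](argument)            # deterministic dispatch

print(call_function("What's the weather like?", "Austin"))
```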
reacted to HugoLaurencon's post with 😎❤️ 8 months ago
The Cauldron is a massive collection of 50 high-quality datasets, all converted to the user/assistant format and ready to use for fine-tuning any Vision Language Model.

The Cauldron covers a wide range of tasks, including general visual question answering, counting, captioning, text transcription, document understanding, chart/figure understanding, table understanding, visual reasoning, geometry, spotting differences between two images, and converting a screenshot to code.

HuggingFaceM4/the_cauldron
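For anyone who wants to poke at the data, loading one subset with the `datasets` library looks roughly like this; the "ai2d" config name and the `images`/`texts` field names are assumptions based on the dataset card and may differ.

```python
# Peek at one subset of The Cauldron (each of the 50 tasks is its own config).
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/the_cauldron", "ai2d", split="train")

sample = ds[0]
print(sample["images"])  # list of images for this example (assumed field name)
print(sample["texts"])   # user/assistant turns for instruction tuning (assumed field name)
```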
reacted to VictorSanh's post with 🔥❤️ 8 months ago
Can't wait to see multimodal Llama 3!

We released a resource that might come in handy: The Cauldron 🍯

The Cauldron is a massive, manually curated collection of 50 vision-language datasets for instruction fine-tuning: 3.6M images and 30.3M query/answer pairs.

It covers a large variety of downstream uses: visual question answering on natural images, OCR, document/chart/figure/table understanding, textbook/academic questions, reasoning, captioning, spotting differences between two images, and screenshot-to-code.

HuggingFaceM4/the_cauldron
reacted to clem's post with ❤️ 8 months ago
Already almost 1,000 llama3 model variations have been shared publicly on HF (many more in private use at companies): https://huggingface.co/models?p=5&sort=trending&search=llama3.

Everyone should fine-tune their own models for their use cases, languages, industries, infra constraints, and more.

10,000 llama3 variants by the end of next week?
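In that spirit, a minimal starting point for a parameter-efficient llama3 fine-tune might look like the sketch below; the model id, target modules, and hyperparameters are illustrative assumptions, and gated checkpoints need an accepted license plus an HF token.

```python
# Hedged sketch: attach LoRA adapters to a Llama 3 checkpoint before fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-8B"  # illustrative; any llama3 variant works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

lora_config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections only, to keep it light
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
# From here, train on your own data with transformers.Trainer or TRL's SFTTrainer.
```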
updated a Space over 1 year ago
liked a Space over 1 year ago
liked a Space over 1 year ago
New activity in yo/tagger over 1 year ago

master

#1 opened over 1 year ago by yo