Richard A Aragon

TuringsSolutions

AI & ML interests

None yet

TuringsSolutions's activity

replied to their post about 12 hours ago
replied to their post about 17 hours ago
replied to their post about 17 hours ago
posted an update about 22 hours ago
How would you like to be able to run AI Agents locally from your computer, for $0? Does this sound like a pipe dream? It is reality. Note: I am of the personal opinion that agent-based technology is still 'not quite ready for primetime'. That has not stopped FAANG from flooding you with agent-based products though. So, if you want to buy their marketing, here is what they are offering you, for free.

https://youtu.be/aV3F5fqHyqc
reacted to davanstrien's post with 🚀 2 days ago
reacted to prithivMLmods's post with 👍 2 days ago
New Droppings🥳

😶‍🌫️Collection: prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

🥳Demo Here: prithivMLmods/FLUX-LoRA-DLC with more than 100 Flux LoRAs

🪨Fluid Dramatic Neon: prithivMLmods/Castor-Dramatic-Neon-Flux-LoRA
🪨Past & Present Blend: prithivMLmods/Past-Present-Deep-Mix-Flux-LoRA
🪨Tarot Cards Refreshed Themes: prithivMLmods/Ton618-Tarot-Cards-Flux-LoRA
🪨Amxtoon Character Mix Real-Anime: prithivMLmods/Ton618-Amxtoon-Flux-LoRA
🪨Epic Realism Flux v1: prithivMLmods/Ton618-Epic-Realism-Flux-LoRA
🪨Mock-up Textures: prithivMLmods/Mockup-Texture-Flux-LoRA
@prithivMLmods 🤗
replied to their post 3 days ago

I will produce a model just for you! Give me a bit of time; if I am going to do it, I want to do it right. I tried to be very careful in this video and will remain careful moving forward. My specific criticism of their research paper is that the model literally does not work when I reconstruct their methods. I like where they are going with the math, which is why the paper caught my eye in the first place. But what good is mathematical and computational simplification if the end result does not work? That is backwards logic.

posted an update 3 days ago
I have been seeing a specific type of AI hype more and more. I call it: releasing research expecting that no one will ever reproduce your methods, then overhyping your results. I test the methodology of maybe 4-5 research papers per day; that is how I find a lot of my research. Usually, 3-4 of those experiments end up not being reproducible for some reason. I am starting to think it is not accidental.

So, I am launching a new series where I showcase a specific research paper by reproducing its methodology and highlighting the blatant flaws that show up when you actually do this. Here is Episode 1!

https://www.youtube.com/watch?v=JLa0cFWm1A4
posted an update 8 days ago
Why is the Adam optimizer so good? Simple: because it will never find the absolute most optimal solution. That is a design feature, not a flaw, and it is why no other optimizer comes close in terms of generalizable use. Want to learn more about this entire process and exactly what I am talking about? I break it all down in very simple terms in this video!

https://youtu.be/B9lMONNngGM
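A minimal sketch of the Adam update rule, with the standard default hyperparameters, gives a feel for what the video covers. This is illustrative only, not a production optimizer:

```python
# Minimal sketch of the Adam update for a single scalar parameter.
def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first moment (running mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2     # second moment (running mean of squared gradients)
    m_hat = m / (1 - b1 ** t)             # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# Minimize f(x) = x^2 starting from x = 1.0
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
print(x)  # hovers near, but never settles exactly on, the minimum at 0
```

Note the per-parameter adaptive step: the denominator keeps Adam from taking the aggressive, exact steps that would pin it to one narrow optimum.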
reacted to daniel-de-leon's post with 🔥 10 days ago
As the rapid adoption of chatbots and Q&A models continues, so do concerns about their reliability and safety. In response, many state-of-the-art models are being tuned to act as safety guardrails, protecting against malicious usage and avoiding undesired, harmful output. I published a Hugging Face blog introducing a simple, proof-of-concept, RoBERTa-based LLM that my team and I fine-tuned to detect toxic prompt inputs into chat-style LLMs. The article explores some of the tradeoffs of fine-tuning larger decoder vs. smaller encoder models and asks whether "simpler is better" in the arena of toxic prompt detection.

🔗 to blog: https://huggingface.co/blog/daniel-de-leon/toxic-prompt-roberta
🔗 to model: Intel/toxic-prompt-roberta
🔗 to OPEA microservice: https://github.com/opea-project/GenAIComps/tree/main/comps/guardrails/toxicity_detection

A huge thank you to my colleagues that helped contribute: @qgao007 , @mitalipo , @ashahba and Fahim Mohammad
posted an update 11 days ago
I think Reinforcement Learning is the future, for a lot of reasons. I spell them out in this video and also provide the basic code to get up and running with Atari and OpenAI Gym. If you want to get into RL, this is your ticket. There is a link to a cool training montage of the model in the video description as well. Step 2 from here would be the full-on RL training and certification course that Hugging Face offers.

https://youtu.be/ueZl3A36ZQk
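The video's examples use Atari and OpenAI Gym, but the core RL loop is the same everywhere. A hedged, dependency-free sketch using tabular Q-learning on a toy 1-D chain (my illustration, not the video's code):

```python
import random

# Toy chain: start at state 0, reach state 4 for reward +1.
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[s][a])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])  # values rise as states get closer to the goal
```

Swapping the toy chain for a Gym environment changes only the state/reward lines; the update rule stays identical.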
posted an update 14 days ago
Every adult on the planet knows what a vector is and already has a basic, intuitive understanding of how vectors are used. You just don't know it as vector math: you don't know a 2-D vector as a 2-D vector, you know it as a graph. Want to know more? Check out this video; I break down the concept in about 10 minutes and I am positive you will fully understand it by the end: https://youtu.be/Iny2ughcGsA
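The "a vector is just a graph" idea can be made concrete in a few lines. A sketch of 2-D vectors as points/arrows on a graph, with the operations you already know from pictures:

```python
# A 2-D vector is just a point (or arrow) on a graph.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])   # walk along u, then along v

def scale(c, v):
    return (c * v[0], c * v[1])         # same direction, c times as far

def length(v):
    return (v[0] ** 2 + v[1] ** 2) ** 0.5  # Pythagoras on the graph

a, b = (3, 4), (1, -2)
print(add(a, b))    # (4, 2)
print(scale(2, a))  # (6, 8)
print(length(a))    # 5.0
```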
reacted to hbseong's post with ❤️ 14 days ago
🚨🔥 New Release Alert! 🔥🚨

Introducing the 435M model that outperforms Llama-Guard-3-8B while slashing 75% of the computation cost! 💻💥
👉 Check it out: hbseong/HarmAug-Guard (Yes, INFERENCE CODE INCLUDED! 💡)

More details in our paper: https://arxiv.org/abs/2410.01524 📜

#HarmAug #LLM #Safety #EfficiencyBoost #Research #AI #MachineLearning
reacted to AdinaY's post with 👀 14 days ago
China is advancing rapidly in AI technology while maintaining a strong focus on governance 🇨🇳📑
We've collected key AI governance documents released since 2017 and will continue updating them in this organization on the Hub 👉 China LLMs on Hugging Face
zh-ai-community/china-ai-policy-research
Any feedback is welcome🤗
posted an update 16 days ago
I built a Hyperdimensional Computing (HDC) encoder and decoder, plus all of the code needed to replace the encoder and decoder of a Llama model with this HDC model and then train the Llama model on the HDC encoder/decoder. All MIT licensed. Here is a video where I break it all down. I can answer any questions about this project and help anyone out where I can; I am not a super developer or anything, and I don't have access to enough compute to train this on a large dataset: https://youtu.be/4VsZpGaPK4g
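For readers new to HDC, a hedged sketch of the core operations the encoder relies on (my reconstruction of the standard technique, not the project's actual code): high-dimensional bipolar vectors, binding by elementwise multiply, bundling by majority vote, and dot-product similarity:

```python
import random

D = 10_000                       # hypervector dimensionality
random.seed(42)

def rand_hv():
    """Random bipolar hypervector of +1/-1 entries."""
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    """Reversible pairing: bind(bind(a, b), b) recovers a."""
    return [x * y for x, y in zip(a, b)]

def bundle(*hvs):
    """Superposition of several hypervectors: majority vote per dimension."""
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def sim(a, b):
    """Normalized dot product in [-1, 1]; ~0 for unrelated vectors."""
    return sum(x * y for x, y in zip(a, b)) / D

role, filler, noise = rand_hv(), rand_hv(), rand_hv()
record = bundle(bind(role, filler), noise)
# Unbinding the role from the bundled record recovers something close to the filler:
print(sim(bind(record, role), filler) > 0.3)  # True
print(sim(rand_hv(), filler))                 # near 0 for unrelated vectors
```

The appeal for an encoder is exactly this robustness: information survives superposition and noise because it is spread across thousands of dimensions.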
posted an update 17 days ago
Ever wondered how neural networks actually work under the hood?

In my latest video, I break down the core mathematical concepts behind neural networks in a way that's easy for IT professionals to understand. We'll explore:

- Neurons as logic gates
- Weighted sums and activation functions
- Gradient descent and backpropagation

No complex equations or jargon, just clear explanations and helpful visuals!

➡️ Watch now and unlock the mysteries of neural networks: https://youtu.be/L5_I1ZHoGnM
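All three bullet points fit in one tiny example: a single neuron learning the AND gate (a sketch of the concepts, not the video's visuals):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# The AND gate as training data: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b, lr = 0.0, 0.0, 0.0, 1.0

for _ in range(5000):
    for (x1, x2), y in data:
        z = w1 * x1 + w2 * x2 + b   # weighted sum
        p = sigmoid(z)              # activation function
        d = p - y                   # backpropagated error (cross-entropy gradient)
        w1 -= lr * d * x1           # gradient descent on each weight
        w2 -= lr * d * x2
        b  -= lr * d

print([round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

One neuron, one weighted sum, one activation, one gradient rule: that really is the whole picture, just repeated millions of times in a full network.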
posted an update 19 days ago
Transformers are not all we need; that is being proven repeatedly as more alternative frameworks emerge. One such framework is the Kolmogorov-Arnold Network (KAN) based Transformer. I break down exactly how these differ from perceptron-based Transformers and give you the link to my Colab, where I build a model based on the research paper that absolutely destroys a standard Transformer-based model. Check out the video here: https://www.youtube.com/watch?v=Sw0euxNZCc4
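The structural difference is easy to sketch (hedged illustration, not the paper's implementation): an MLP layer applies a fixed activation to a learned weighted sum, while a KAN layer puts a learnable univariate function on every edge and just sums them:

```python
import math

def mlp_layer(x, W, act=math.tanh):
    # Perceptron style: y_q = act(sum_p W[q][p] * x[p]) -- fixed activation, learned weights
    return [act(sum(w * xi for w, xi in zip(row, x))) for row in W]

def kan_layer(x, edge_fns):
    # KAN style: y_q = sum_p phi[q][p](x[p]) -- a learnable function per edge, no fixed activation
    return [sum(phi(xi) for phi, xi in zip(row, x)) for row in edge_fns]

x = [0.5, -1.0]
W = [[1.0, 2.0]]
# Toy edge functions standing in for the learnable splines a real KAN would fit:
phis = [[lambda t: t ** 2, lambda t: math.sin(t)]]
print(mlp_layer(x, W))     # one tanh of a weighted sum
print(kan_layer(x, phis))  # sum of two independent per-edge functions
```

In a real KAN the `phi` functions are parameterized splines trained by gradient descent; the point here is only where the learnable pieces sit.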
reacted to grimjim's post with 😎 19 days ago
To demonstrate that it was possible, I performed a "trapezoid" gradient merge of a Llama 3 8B model onto Llama 3.1 8B Instruct, favoring the L3.1 model at the ends in order to preserve coherence and limiting the influence of the L3 model to at most 0.1 weight. Tested to 16k context length.
grimjim/Llama-Nephilim-Metamorphosis-v2-8B
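A hedged sketch of what such a "trapezoid" per-layer weight schedule might look like (my illustration of the shape described, not grimjim's actual merge config): the secondary model's weight ramps from 0 at each end up to a 0.1 cap in the middle, so the L3.1 Instruct model dominates the first and last layers:

```python
def trapezoid_weights(n_layers, cap=0.1, ramp=0.25):
    """Per-layer weight of the secondary model: 0 -> cap -> 0 across layers."""
    edge = max(1, int(n_layers * ramp))   # layers spent ramping up at each end
    weights = []
    for i in range(n_layers):
        dist = min(i, n_layers - 1 - i) / edge  # distance from the nearest end
        weights.append(round(min(dist, 1.0) * cap, 3))
    return weights

w = trapezoid_weights(32)
print(w[0], w[16], w[-1])  # 0.0 0.1 0.0 -- flat cap in the middle, zero at the ends
```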
replied to their post 19 days ago

Reported. This user has no boundaries and is literally now demanding admin badges in order to stop harassing behavior. Please take the hint.

replied to their post 20 days ago