Maybe if I talk about this in non-math terms more people will understand: there exists a 'Platonic Form' of the solution to the Traveling Salesman Problem. This Platonic Form is the most optimal solution to the problem that could ever exist. I take the Platonic Form and make it a variable (x). Then I tell a bunch of algorithms that I just placed this Platonic Form somewhere in a box, but even I do not know where in the box it is, or even what it looks like. The agents all go in different directions and explore the box. When an agent finds a clue, it tells all the other agents. The agents keep looking for more clues until they find whatever I put in the box. Then they simply describe it to me.
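The search described above can be sketched as a toy swarm heuristic. This is my own illustrative sketch, not the author's notebook: each agent holds a candidate tour, explores locally with 2-opt-style segment reversals, and is pulled toward the best tour found so far, which plays the role of the shared "clue".

```python
import math
import random

random.seed(0)

def tour_length(tour, coords):
    """Total length of a closed tour over the given city coordinates."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def swarm_tsp(coords, num_agents=30, num_iters=200):
    """Toy swarm search: local 2-opt moves plus attraction to the shared best."""
    n = len(coords)
    agents = [random.sample(range(n), n) for _ in range(num_agents)]
    best = min(agents, key=lambda t: tour_length(t, coords))[:]
    best_len = tour_length(best, coords)
    for _ in range(num_iters):
        for k in range(num_agents):
            tour = agents[k]
            # local exploration: reverse a random segment (2-opt style move)
            i, j = sorted(random.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(cand, coords) < tour_length(tour, coords):
                agents[k] = cand
            # social step: reorder a random segment to match its order
            # in the best tour found so far (the shared "clue")
            if random.random() < 0.3:
                tour = agents[k]
                i, j = sorted(random.sample(range(n), 2))
                seg = set(tour[i:j + 1])
                cand = tour[:i] + [c for c in best if c in seg] + tour[j + 1:]
                if tour_length(cand, coords) < tour_length(tour, coords):
                    agents[k] = cand
            # an agent that found a better tour broadcasts it to the swarm
            cur_len = tour_length(agents[k], coords)
            if cur_len < best_len:
                best, best_len = agents[k][:], cur_len
    return best, best_len

coords = [(random.random(), random.random()) for _ in range(12)]
tour, length = swarm_tsp(coords)
```

Both move types preserve a valid permutation of the cities, so every agent always holds a legal tour; only the ordering improves over time.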
I also just showed it's theoretically possible to generate text via diffusion using the same methods. Imagine being able to create Moby Dick in 3 seconds rather than a picture of a whale.
https://colab.research.google.com/drive/1VF4fQLKCCs8JVRXDOUOqZ9Kr6-LspAtU?usp=sharing
Question:
With 50 cities I tried the following PSO parameters:
num_cities = 50
num_particles = 70
num_iterations = 2
Even with just 2 iterations it works. Why? In your Colab, even setting num_iterations to 1 gives the same result.
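One possible explanation, illustrated with a toy experiment of my own (not the notebook's code): with 70 particles, the best of the random initial tours already looks reasonable for 50 cities, and one or two iterations of a simple improvement move barely change it, so the iteration count has little visible effect. Here `iterate` is a hypothetical stand-in that applies one 2-opt attempt per particle.

```python
import math
import random

random.seed(1)

def tour_length(tour, coords):
    """Total length of a closed tour over the given city coordinates."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

num_cities, num_particles = 50, 70
coords = [(random.random(), random.random()) for _ in range(num_cities)]

# initial "swarm": 70 random tours
swarm = [random.sample(range(num_cities), num_cities)
         for _ in range(num_particles)]
lengths = sorted(tour_length(t, coords) for t in swarm)
best_init, avg_init = lengths[0], sum(lengths) / len(lengths)

def iterate(swarm):
    """One 'iteration': each particle attempts a single random 2-opt move."""
    out = []
    for tour in swarm:
        i, j = sorted(random.sample(range(num_cities), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        out.append(cand if tour_length(cand, coords) < tour_length(tour, coords)
                   else tour)
    return out

after_two = iterate(iterate(swarm))
best_after = min(tour_length(t, coords) for t in after_two)
# best_after is only marginally below best_init: after so few iterations
# the reported tour is essentially the best of the random initial tours
```

If the notebook's output is dominated by the best random start, any apparent quality would not depend on the iteration count, which would match the observation that 1 and 2 iterations give the same result.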
Ok, it seems nice. I would recommend writing a paper and having an expert review it.
The only problem is that most of the things you said before were pretty misleading.
You said you solved function calling 100% with gpt-3.5 in this notebook: https://colab.research.google.com/drive/1EF_JmPidwoCd8tEgOChUt6kCX7TBLe20?usp=sharing If you check the notebook, you just ask gpt-3.5 to create a Python swarm algorithm that accesses an API, downloads an image from it, and saves it. How does that even solve function calling? That is an extremely simple problem that any 1B+ LLM could probably solve.
You also said you can use swarm neural networks for function calling. You had a Space, I believe, to try to prove that; however, it seemed to just return random cat facts.
You said you created a DALL-E 2 level image generation system that is 10x cheaper than diffusion with your SNN image generation system. However, when you try out the Space, you input an image and the resulting image is just a blurry version of the original. DALL-E 2 generates an image from text, so I am not really sure how that is DALL-E 2 level.
If you really had a DALL-E 2 level image generation system that is 10x cheaper than diffusion, companies would probably fund you to build larger and better systems. If your function calling system were perfect, it would be a breakthrough and it would be extremely popular.
Everything you laid out about the functionality is correct. Your understanding of where exactly LLMs currently stand on function calling is severely lacking, and it is not my job to fix that understanding. Your understanding of this technology holistically is very incorrect, and I can ascertain that simply from the statements you have made. No, I have zero interest in debating them with you. I will more than gladly debate any actual prominent researcher or investor on these things.
There are absolutely better solutions to the problem than the simple demonstration I laid out here; that is correct. Thank you for looking. Do you have any actual questions? Yes, swarm algorithms on their own do not have enough 'juice' to be smart by themselves. Give them a brain, though, and it's quite amazing.
Here is another one you will not understand; it is called HiveMind. The problem at the moment is that I cannot fully control any of this. If I could, I would not be wasting my time here; I would be straight at Google HQ right now. Instead, they are stealing my shit because I cannot iterate on it fast enough. I'm positive I could control it fully with money for more research. So, here it is publicly: https://colab.research.google.com/drive/1gXasjeZM_8u49go2Hn8cqA30Rm3JPc3V?usp=sharing