[TOOLS] Community Discussion
What tools should we add next? Let's discuss! (Feel free to share a ZeroGPU Space for the tool you want added, if there is one.)
Video Generation and Voice Chat
What tools to add next? let's discuss it!
Visual Question Answering tool.
Why? It helps with two tasks:
- Image QnA
- Image-to-Image Generation (by sending a description of the given image to the image generator)
Vision capabilities would be great and bring the free hugging chat up to the same level as the free chatgpt.
One way to implement this would be to add idefics2-8b / Mantis-8B-siglip-llama3 / Phi-3-vision-128k-instruct, or to get image descriptions using moondream2. A model like Mistral-7B (or any other) could create the prompt for moondream2 based on the user's question; then the user's prompt and a description of the image could be sent to whichever model the user wants to chat with.
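A minimal sketch of that two-step idea, assuming `describe_image` and `chat` are stand-ins for real model calls (e.g. moondream2 and the user's chosen LLM; both names are hypothetical here):

```python
from typing import Callable

def compose_vision_prompt(question: str, description: str) -> str:
    """Fold the vision model's image description into the user's question."""
    return (
        "The user attached an image. A vision model described it as:\n"
        f"{description}\n\n"
        f"User question: {question}"
    )

def answer_with_vision(
    question: str,
    image: bytes,
    describe_image: Callable[[bytes], str],  # e.g. a moondream2 call
    chat: Callable[[str], str],              # e.g. the user's chosen chat model
) -> str:
    # Step 1: small vision model turns the image into text.
    description = describe_image(image)
    # Step 2: the chat model answers using that description plus the question.
    return chat(compose_vision_prompt(question, description))
```

The nice part of this design is that the chat model itself needs no vision capability; any text-only model on HuggingChat could answer image questions this way.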
Video Generation and Voice Chat
TTS sounds good! SVD is slow and expensive…
Vision capabilities
Yes, it's an efficient and economical choice to have an edge model generate text descriptions and then let the LLM expand on them.
SVD is slow and expensive…
AnimateDiff-Lightning is fast and also available in ZeroGPU Spaces, so there would be no further increase in cost.
It would be cool if the chat model could evaluate its own responses with an evaluation model like Prometheus 2. Then users would know whether they can trust it.
I see everyone building their own web-search tool, most likely on a free backend like DuckDuckGo Search. But there's a catch: given a prompt like "search for a specific product on various websites and compare their prices", the model has to use the search function more than once, collect the product's price from several e-commerce websites, study the results, and then compare them. As far as I know, no current AI search agent can do this.
Remember, this is not agentic work; it's more about improving the function-calling ability of an LLM so it calls the search function to study results, not just to provide a single answer.
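The multi-call pattern described above could be sketched like this, with `search` as a hypothetical stand-in for a real search backend (e.g. DuckDuckGo) that returns a price for a query; the point is one search call per site rather than a single one-shot query:

```python
from typing import Callable

def compare_prices(
    product: str,
    sites: list[str],
    search: Callable[[str], float],  # placeholder: returns the price found for a query
) -> dict:
    """Call the search tool once per site, then compare the collected prices."""
    # Multiple tool calls: one targeted query per e-commerce site.
    prices = {site: search(f"{product} price site:{site}") for site in sites}
    # Only after all results are collected does the comparison happen.
    cheapest = min(prices, key=prices.get)
    return {"prices": prices, "cheapest": cheapest}
```

In a real function-calling setup, the LLM would decide when to issue each `search` call and when it has gathered enough results to answer, but the control flow is the same: gather first, compare after.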
Are community members able to add tools?
Are community members able to add tools?
We are working on this.
Audio/music generation would be cool. There are a few open models for that already; it could be implemented the same way image generation is.
Data Visualizer!
Feel free to share a ZeroGPU Space to the tool you want to add if there's any.
Can we also suggest CPU Spaces running on Gradio?
If yes, then my recommendation is this:
https://huggingface.co/spaces/KingNish/Voice-Chat-AI
Regarding this, I think if we do it one day it will be a first-class feature, completely integrated into the UX (not a tool).
Hello,
first of all, THANK YOU for the Hugging Face Chat Assistants; they're a great way to try out different system prompts and the like.
Thinking of "human in the loop", I came up with these three "root tools", covering the development of new tools from the assistant's perspective, through human intervention, up to final integration into the system.
Tool 1: Tool Descriptor
This tool will enable me (Assistant) to provide a detailed description of the required tool, including its functionality, features, and specifications. This tool will utilize natural language processing capabilities to generate a clear and concise description of the tool. The tool will process a free-form text input and prompt for additional information or clarification if the input is vague or incomplete.
Tool 2: Task Generator
This tool will take the output from Tool 1 and generate high-level, unprioritized tasks for human engineers (like Denis) to create the required tool. The task generator will break down the project into manageable tasks, ensuring that each task is well-defined.
Tool 3: Deliverable Integrator
Once the human engineers have completed their tasks, this tool will integrate the deliverables into a cohesive whole. The integrator will handle conflicts or inconsistencies between the deliverables and perform quality checks and validation on the final product, with the specifics of these processes controlled by the output of the other tools.
Nice! We are working on community tools, btw, which will let you plug any ZeroGPU Space into HuggingChat as a tool.
yay for community tools in HuggingChat!
I can work on "memories", but I think it should probably be a non-community plugin, for obvious reasons.
Hi,
do you know about https://github.com/OpenBMB/ToolBench?
We don't give system prompts to our models by default so if you want a specific behavior you should probably add it to your system prompt.
In that case we're able to render LaTeX but you should prompt the model to answer in LaTeX in the first place.
Hello! Since the addition of community tools to the HuggingChat interface, I want to know whether the input and output that passes through them is private to me, available to the particular Space owner, or public. In short, can I be sure that my data stays private to me?