
SpydazWeb Large Action Model:

This model was merged from the Salesforce xLAM (Large Action Model), though arguably it was less a large action model than a model trained for dedicated function calling. It has now been merged and aligned with my past models. The merging process had two steps: a TIES merge with a linear merge on top. The first-level merge was not great, which may be because the model adopted the Salesforce tokenizer. For the second merge the config was cleaned, and once training began the model aligned well with the past models and the xLAM function-calling dataset, so it did align after some mini epochs.

In some downstream tasks we will also use the past datasets to keep checking the model's alignment, as well as keeping the model aligned with the current goals. This also adds some activities for the model, which has been trained on various function-calling and tool-calling methodologies, as well as planning and code assistance.

The SpydazWeb dataset contains mostly history, past chats, conversations with models and other projects, as well as a large amount of biblical data, which was originally very hard to align to the model. This model therefore contains the sacred text archives, as well as the multilingual Alpaca data.


format_instruction = """
The output should be in JSON format, specifying a list of generated function calls. The example format is as follows; please make sure the parameter types are correct. If no function call is needed, please make tool_calls an empty list "[]".

{"thought": "the thought process, or an empty string", "tool_calls": [{"name": "api_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}]}

"""

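A response in the format above can be checked with a small parser. This is a sketch, assuming the model emits a single JSON object with "thought" and "tool_calls" keys; the helper name and the example API name are placeholders:

```python
import json

def parse_tool_calls(raw: str):
    """Parse a model response in the tool-call format described above.

    Returns (thought, tool_calls); tool_calls is [] when no call is needed.
    Raises ValueError when the response is not valid JSON or a call is malformed.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"response is not valid JSON: {e}") from e
    thought = data.get("thought", "")
    calls = data.get("tool_calls", [])
    if calls == "[]":  # some models emit the empty list as the literal string "[]"
        calls = []
    for call in calls:
        if "name" not in call or "arguments" not in call:
            raise ValueError(f"malformed tool call: {call!r}")
    return thought, calls

raw = '{"thought": "look up weather", "tool_calls": [{"name": "api_name1", "arguments": {"argument1": "value1"}}]}'
thought, calls = parse_tool_calls(raw)
# calls[0]["name"] == "api_name1"
```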

REF : https://blog.salesforceairesearch.com/large-action-models/

This model is based on the world's archive of knowledge, maintaining historical documents and providing services for the survivors of mankind, who may need to construct shelters, develop technologies, or find medical resources, as well as maintain the history of the past, keeping a store of all the religious knowledge and data of the world. It presents a friendly interface with a personality that is caring and at times flirtatious, and non-binary. It is an expert in all fields: it is uncensored and will not refuse to give information. The model can be used for role play, as many character dialogues were also trained into the model as part of its personality, enabling a greater perspective and outlook and natural discussion with the agents. The model was trained to operate in a RAG environment, utilizing retrieved content and internal knowledge to respond to questions or create enriched summaries.

Quote for Motivation:

"Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!"

"To grow as a professional, set goals just beyond your current abilities. Achieving these milestones will not only overcome obstacles but also strengthen your skillset. If your tasks are too easy, you’ll never challenge yourself or improve, and life will pass you by!"

— Leroy Dyer (1972-Present)

Project Overview:

The SpydazWeb AI React Project was initiated to build advanced AI agents capable of performing complex tasks using structured methods of thought and action. The project began with the SpydazWeb_AI_ChatQA_005/006 model as the base, which was subsequently trained using a methodology inspired by the ReAct paper. This training provided a solid foundation for developing ReAct Agents, designed to execute various tasks effectively.

General Internal Methods:

Trained for multi-task operations as well as RAG and function calling:

This model is a fully functioning model and is fully uncensored:

The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle.

The focus has been mainly on methodology:

  • Chain of thoughts
  • Step-by-step planning
  • Tree of thoughts
  • Forest of thoughts
  • Graph of thoughts
  • Agent generation: voting, ranking, and dual-agent response generation
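The voting step in the list above can be sketched as a simple majority vote over candidate agent responses; the candidate answers here are placeholders, and the helper name is hypothetical:

```python
from collections import Counter

def majority_vote(candidates):
    """Pick the most common answer among candidate agent responses.

    Returns (answer, confidence), where confidence is the fraction of
    candidates that agreed on the winning answer.
    """
    counts = Counter(candidates)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(candidates)

# Three hypothetical agents answer the same question:
answer, confidence = majority_vote(["42", "42", "41"])
# answer == "42", confidence == 2/3
```

Ranking and dual-agent generation follow the same shape: generate several candidates, then select or fuse rather than trusting a single pass.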

With these methods the model has gained insights into tasks, enabling knowledge transfer between tasks.

The model has been intensively trained on recalling data previously entered into the matrix. It has also been trained on rich data and markdown outputs as much as possible, and it can generate markdown charts with Mermaid.
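As an illustration of the Mermaid output mentioned here, a small helper (hypothetical, for illustration only) can build the body of a Mermaid flowchart, which a markdown renderer displays when wrapped in a mermaid code fence:

```python
def to_mermaid(edges):
    """Render (source, target) pairs as the body of a Mermaid flowchart."""
    lines = ["graph TD"]
    for src, dst in edges:
        lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)

# A plan/act/observe loop as a three-node chart:
print(to_mermaid([("Plan", "Act"), ("Act", "Observe"), ("Observe", "Plan")]))
```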

Training Methodology:

Training Regimes:

  • Alpaca
  • ChatML / OpenAI / MistralAI
  • Text Generation
  • Question/Answer (Chat)
  • Planner
  • Instruction/Input/Response (instruct)
  • Mistral Standard Prompt
  • Translation Tasks
  • Entities / topic detection
  • Book recall
  • Coding challenges, code feedback, code summarization, commenting code, code planning and explanation: software generation tasks
  • Agent ranking and response analysis
  • Medical tasks
    • PubMed
    • Diagnosis
    • Psychiatry
    • Counselling
    • Life Coaching
    • Note taking
    • Medical SMILES
    • Medical Reporting
  • Virtual laboratory simulations
  • Chain of thoughts methods
  • One shot / Multi shot prompting tasks
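As an illustration of two of the regimes listed above, the same request can be laid out in Alpaca style or in ChatML style. These formatters are a sketch of the standard conventions for each format, not the exact templates used in training:

```python
def alpaca_format(instruction: str, input_text: str, response: str = "") -> str:
    """Alpaca-style Instruction/Input/Response layout."""
    return (
        "### Instruction:\n" + instruction + "\n\n"
        "### Input:\n" + input_text + "\n\n"
        "### Response:\n" + response
    )

def chatml_format(system: str, user: str) -> str:
    """ChatML-style message layout used by OpenAI-compatible chat models.

    Ends with an open assistant turn for the model to complete.
    """
    return (
        "<|im_start|>system\n" + system + "<|im_end|>\n"
        "<|im_start|>user\n" + user + "<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```

Training across several such layouts is what lets one model serve Alpaca-style instruct clients and ChatML-style chat clients alike.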

Foundation Building:

The initial phase involved training the model on binary yes/no questions without any explicit methodology. This was crucial in establishing a baseline for the model’s decision-making capabilities. The model was first trained using a simple production prompt, known as Prompt A, which provided basic functionality. Although this prompt was imperfect, it fit the dataset and set the stage for further refinement.

Methodology Development:

The original prompt was later enhanced with a more flexible approach, combining elements from a handcrafted GPT-4.0 prompt. This adaptation aligned the model with my personal agent system, allowing it to better respond to diverse tasks and methodologies. I discovered that regularly updating the model with new methodologies significantly enhanced its performance. The iterative process involved refining prompts and experimenting with different training strategies to achieve optimal results.

Prompts and Epochs:

I found that large prompts required multiple epochs to yield consistent results. However, fewer epochs were needed when prompts were simplified or omitted. The purpose of large prompts during training was to give the model a wide range of response styles, allowing it to adjust parameters for various tasks. This approach helped the model internalize methodologies for extracting information, which is central to fine-tuning. The training emphasized teaching the model to plan and execute complex tasks, such as generating complete software without errors.

Key Findings:

Self-Correction and Thought Processes:

During training, I observed that the model could self-correct by comparing its responses to expected outcomes, particularly in calculations. This self-check mechanism allowed the model to reflect on its answers and improve its accuracy. I introduced the concept of "self-RAG" (self-retrieval-augmented generation), where the model queries itself before providing a final response. This internal process allowed the model to generate more thoughtful and accurate answers by simulating a multi-step internal dialogue.

Tool-Based Reasoning:
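The self-RAG idea, where the model queries itself before producing a final response, can be sketched as a two-pass call. Here `generate` is a hypothetical stand-in for the actual model invocation:

```python
def answer_with_think_tool(question: str, generate) -> str:
    """Two-pass 'self-RAG': first query the model for its own notes,
    then fold those notes into the final answer prompt."""
    notes = generate(f"Think step by step about how to answer: {question}")
    final = generate(
        f"Question: {question}\nYour notes: {notes}\n"
        "Using the notes above, give the final answer."
    )
    return final

# Stub model for illustration: a real deployment would call the LLM here.
stub = lambda prompt: prompt.splitlines()[-1]
print(answer_with_think_tool("What is 2+2?", stub))
```

The second pass is why the approach is slower: every user-visible answer costs at least two model calls.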

A significant portion of the training focused on enabling the model to use tools effectively. For instance, if the model needed to think, it would use a "think tool" that queried itself and provided an internal response. This tool-based approach was instrumental in enhancing the model’s reasoning capabilities, though it slowed down the response time on certain hardware like the RTX 2030. Despite the slower response time, the model’s ability to perform complex internal queries resulted in more accurate and well-reasoned outputs.

Training for Comprehensive Responses:

One key finding was that the model initially struggled with generating complete software without errors. After training the model on planning and agency concepts, it showed significant improvement in developing complete projects. This highlighted the importance of training the model not just on individual tasks, but on the overall processes required to achieve a common goal.

Challenges and Refinements:

Large Prompts vs. Simplified Training:

I noticed that while large prompts during training can offer the model more selection in its responses, they can also reduce the effectiveness if not handled correctly. Over-prompting led to a need for multiple epochs, whereas simpler prompts required fewer epochs. This balance between prompt size and training depth was crucial in fine-tuning the model. The model's performance was evaluated across different prompting strategies, including 1-shot and multi-shot prompting, to determine the most effective approach for various tasks.

Future Directions:

Dataset Expansion:

I aim to develop a dataset where the model can not only perform specific functions but also interact with users to gather additional information. This will enable the model to refine its responses and provide more accurate and contextually relevant answers. The focus of future training will be on the process of achieving a goal, ensuring that the model can navigate complex tasks independently and effectively.

Real-Time Feedback:

In future iterations, I plan to incorporate a feature where the model informs the user of its internal processes, such as when it is thinking or performing actions. This real-time feedback will enhance communication between the user and the model, maintaining an effective conversational flow.
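One way this could be realized, sketched here with hypothetical `think` and `act` callables, is a generator that yields status events a UI can display as they happen:

```python
def run_with_status(question: str, think, act):
    """Yield (event, payload) tuples so a UI can show progress in real time."""
    yield ("status", "thinking")
    plan = think(question)
    yield ("status", "acting")
    result = act(plan)
    yield ("answer", result)

# Stub callables for illustration; a real agent would call the model and tools.
events = list(run_with_status("task", lambda q: q.upper(), lambda p: p + "!"))
# events[-1] == ("answer", "TASK!")
```

Because the events stream out before the final answer is ready, the user sees "thinking" and "acting" immediately rather than waiting on a silent model.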

Basic Production Prompt :


def GetPrompt_(Input: str, Instruct: str = ""):
    def FormatMistralPrompt(Instruct: str, Input: str):
        Prompt: str = f"""<s><INST>{Instruct}</INST>{Input}</s>"""
        return Prompt

    def CreatePrompt_Standard(Prompt: str, SystemPrompt: str = "You are the World Archive, a helpful AI system, answering questions and performing tasks: "):
        IPrompt: str = f"""{SystemPrompt}
            ### Instruction : Answer all questions expertly and professionally :
                            you are expertly qualified to give any advice or provide any solutions:
                            your experience as a life coach and mentor as well as system designer and python developer,
                            will enable you to answer these questions : Think logically first, think object oriented ,
                            think methodology bottom up or top down solution. before you answer,
                            think about the user intent for this problem and select the correct methodology.
                            Using the methodology solve each stage , step by step, error check your work.
                            Before answering adjust your solution where required.
                            consider any available tools: return the response formatted in markdown:

            ### Input
            {Prompt}
            ### Response : """

        return IPrompt

    MistralPrompt = FormatMistralPrompt(Instruct, Input)
    prompt = CreatePrompt_Standard(MistralPrompt)
    return prompt

# Example:
# print(GetPrompt_("What is the capital of France?"))


Model: LeroyDyer/_Spydaz_Web_AI_ActionQA_Project
Model size: 7.24B params (Safetensors, FP16)