---
license: apache-2.0
datasets:
- mllmTeam/DroidCall
language:
- en
library_name: transformers
base_model:
- mllmTeam/PhoneLM-1.5B-Instruct
---

PhoneLM-1.5B-Call is a 1.5 billion parameter decoder-only language model, fine-tuned from PhoneLM-1.5B-Instruct on the DroidCall dataset for Android intent calling.

## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = 'mllmTeam/PhoneLM-1.5B-Call'

system_prompt = "You are an expert in composing functions."

user_message = """
Here is a list of functions:

Name: web_search
Description:
    Initiates a web search using the specified query.

    This function starts a web search using the default search engine.
    It opens the search results in the default web browser or appropriate search application.
Args:
    query (str): The search string or keywords to be used for the web search.
    engine (str): The search engine to use. Default is "baidu". Possible values are: "baidu", "google"
Returns:
    None
Example:
    # Perform a simple web search
    web_search("Python programming tutorials")

    # Search for a phrase
    web_search('"to be or not to be"')

    # Search using a specific search engine
    web_search("Python programming tutorials", "google")

Now my query is: Help me search the president of the United States
"""

prompt = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_message},
]

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='cuda', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build the chat-formatted prompt and move the inputs to the GPU
input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
inp = tokenizer(input_text, return_tensors="pt")
inp = {k: v.to('cuda') for k, v in inp.items()}

# Generate the function call
out = model.generate(**inp,
                     max_length=1000,
                     do_sample=True,
                     temperature=0.7,
                     top_p=0.7)
text = tokenizer.decode(out[0], skip_special_tokens=True)
print(text)
```

## Model Details
* **Developed by**: mllmTeam
* **Model type**: `PhoneLM 1.5B` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: English
* **Paper**: [PhoneLM Technical Report](https://arxiv.org/abs/2411.05046)
* **Library**: [PhoneLM](https://github.com/UbiquitousLearning/PhoneLM)

### Model Architecture

The model is a decoder-only transformer architecture with the following modifications:

| Hidden Size | Layers | Heads | Sequence Length |
|-------------|--------|-------|-----------------|
| 2560        | 19     | 16    | 2048            |

* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput, following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf). PhoneLM quantizes the sin and cos values in the rotary embeddings to 8-bit integers.
* **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) with learned bias terms, as opposed to RMSNorm ([Zhang & Sennrich, 2019](https://arxiv.org/abs/1910.07467)).
* **Biases**: All bias terms are removed from the feed-forward networks and multi-head self-attention layers, except for the biases of the query, key, and value projections ([Bai et al., 2023](https://arxiv.org/abs/2309.16609)).
* **ReLU Activation Function**: ReLU ([Glorot et al., 2011](https://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf)) activation functions are adopted in the feed-forward networks.
* **Tokenizer**: The SmolLM ([Allal et al., 2024](https://huggingface.co/blog/smollm)) tokenizer is used, with a vocabulary size of 49,152.
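
For illustration, the sketch below shows a minimal PyTorch decoder block that follows the choices listed above: LayerNorm with learned bias, biases kept only on the query/key/value projections, rotary embeddings applied to the first 25% of head dimensions, and a bias-free ReLU feed-forward network. It is not the released PhoneLM implementation; the layer names, the 4x feed-forward expansion, and the cos/sin handling (which PhoneLM additionally quantizes to 8-bit integers) are assumptions made for readability.

```python
# Illustrative sketch only, not the released PhoneLM code.
import torch
import torch.nn as nn
import torch.nn.functional as F

HIDDEN, HEADS = 2560, 16
HEAD_DIM = HIDDEN // HEADS
ROTARY_DIM = HEAD_DIM // 4  # rotary applied to the first 25% of head dimensions


def apply_partial_rotary(x, cos, sin):
    """Rotate the first ROTARY_DIM channels of each head; pass the rest through.

    cos/sin are assumed to have shape (seq_len, ROTARY_DIM // 2).
    """
    x_rot, x_pass = x[..., :ROTARY_DIM], x[..., ROTARY_DIM:]
    x1, x2 = x_rot.chunk(2, dim=-1)
    x_rot = torch.cat((x1 * cos - x2 * sin, x2 * cos + x1 * sin), dim=-1)
    return torch.cat((x_rot, x_pass), dim=-1)


class DecoderBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.ln_attn = nn.LayerNorm(HIDDEN)              # LayerNorm with learned bias
        self.ln_ffn = nn.LayerNorm(HIDDEN)
        self.q = nn.Linear(HIDDEN, HIDDEN, bias=True)    # biases kept on Q/K/V only
        self.k = nn.Linear(HIDDEN, HIDDEN, bias=True)
        self.v = nn.Linear(HIDDEN, HIDDEN, bias=True)
        self.o = nn.Linear(HIDDEN, HIDDEN, bias=False)
        self.up = nn.Linear(HIDDEN, 4 * HIDDEN, bias=False)    # bias-free FFN
        self.down = nn.Linear(4 * HIDDEN, HIDDEN, bias=False)  # (4x expansion is an assumption)

    def forward(self, x, cos, sin):
        b, t, _ = x.shape
        h = self.ln_attn(x)
        q, k, v = (proj(h).view(b, t, HEADS, HEAD_DIM).transpose(1, 2)
                   for proj in (self.q, self.k, self.v))
        q, k = apply_partial_rotary(q, cos, sin), apply_partial_rotary(k, cos, sin)
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.o(attn.transpose(1, 2).reshape(b, t, HIDDEN))
        x = x + self.down(F.relu(self.up(self.ln_ffn(x))))  # ReLU feed-forward
        return x


# Usage: run a random sequence of length 8 through one block.
block = DecoderBlock()
pos = torch.arange(8).float()
inv_freq = 1.0 / (10000 ** (torch.arange(0, ROTARY_DIM, 2).float() / ROTARY_DIM))
angles = torch.outer(pos, inv_freq)  # (seq_len, ROTARY_DIM // 2)
y = block(torch.randn(1, 8, HIDDEN), angles.cos(), angles.sin())
print(y.shape)  # torch.Size([1, 8, 2560])
```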
## License

* This repository is released under the [Apache-2.0](https://huggingface.co/mllmTeam/PhoneLM-1.5B-Call/blob/main/LICENSE) License.

## Citation
```
@misc{yi2024phonelmanefficientcapablesmall,
      title={PhoneLM: an Efficient and Capable Small Language Model Family through Principled Pre-training},
      author={Rongjie Yi and Xiang Li and Weikai Xie and Zhenyan Lu and Chenghua Wang and Ao Zhou and Shangguang Wang and Xiwen Zhang and Mengwei Xu},
      year={2024},
      eprint={2411.05046},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.05046},
}
```