nkasmanoff committed on
Commit 8d53425
1 Parent(s): caa06e6
Files changed (3)
  1. README.md +28 -43
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -15,64 +15,49 @@ should probably proofread and complete it, then remove this comment. -->

  # tool-bert

- This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased).
-
- It uses a custom-made dataset of sample user instructions, each classified into one of a set of local assistant function-calling endpoints.
-
- For example, given an input query, tool-bert predicts which tool should be used to augment a downstream LLM-generated output.
-
- More information on these tools to follow, but example tools are "play music", "check the weather", "get the news", "take a photo", or use no tool.
-
- Basically, this model is meant to be a means of allowing very small LLMs (i.e., 8B parameters and below) to use function calling.
-
- All limitations and biases are inherited from the parent model.
-
- ### Example Usage
-
- ```python
- from transformers import AutoTokenizer, AutoModelForSequenceClassification
-
- key_tools = ['take_picture', 'no_tool_needed',
-              'check_news', 'check_weather', 'play_spotify']
-
-
- def get_id2tool_name(idx, key_tools):
-     return key_tools[idx]
-
-
- def remove_any_non_alphanumeric_characters(text):
-     return ''.join(e for e in text if e.isalnum() or e.isspace())
-
-
- def load_model():
-     tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
-     model = AutoModelForSequenceClassification.from_pretrained(
-         "nkasmanoff/tool-bert")
-     model.eval()
-     return model, tokenizer
-
-
- def predict_tool(question, model, tokenizer):
-     question = remove_any_non_alphanumeric_characters(question)
-     inputs = tokenizer(question, return_tensors="pt")
-     outputs = model(**inputs)
-     logits = outputs.logits
-     return get_id2tool_name(logits.argmax().item(), key_tools)
-
-
- model, tokenizer = load_model()
- question = "What's the weather outside?"
- predict_tool(question, model, tokenizer)
- # > check_weather
- ```
  ### Framework versions

  - Transformers 4.41.1
  - Pytorch 2.3.0
  - Datasets 2.19.1
- - Tokenizers 0.19.1
 
  # tool-bert

+ This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0141
+ - Accuracy: 1.0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 4
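With a train_batch_size of 8, the 32 optimizer steps per epoch logged in the results table below imply roughly 256 training examples, assuming single-device training with no gradient accumulation (assumptions; the log does not say). A quick sanity check:

```python
# Back-of-envelope check: dataset size implied by the logged step counts.
# Assumes single-device training and no gradient accumulation (not stated in the log).
train_batch_size = 8   # from the hyperparameters above
steps_per_epoch = 32   # epoch 1.0 ends at step 32 in the results table
num_epochs = 4

approx_train_examples = train_batch_size * steps_per_epoch
total_steps = steps_per_epoch * num_epochs

print(approx_train_examples)  # ~256 examples
print(total_steps)            # 128, matching the final logged step
```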
 
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | No log        | 1.0   | 32   | 0.5841          | 0.824    |
+ | No log        | 2.0   | 64   | 0.0964          | 1.0      |
+ | No log        | 3.0   | 96   | 0.0206          | 1.0      |
+ | No log        | 4.0   | 128  | 0.0141          | 1.0      |
  ### Framework versions

  - Transformers 4.41.1
  - Pytorch 2.3.0
  - Datasets 2.19.1
+ - Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b162d53eb5bb90c42878d8b54e388c4117f41af0bfe39c51d5ca5635f98617e9
+ oid sha256:67eddbca69bbac71d63c5b245f3b9725fcc35e2967d421735515b7ded4f7ded6
  size 437967876
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cd0aeb6003fce2a4fec9c124df29b1584ebe21671f747746295f69d20a338b19
+ oid sha256:4caa93fffa53fd64ebdf6c5724a7f4c15dc5a635ad77f5a77997944eaf19bf6f
  size 5048
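The `model.safetensors` and `training_args.bin` entries above are Git LFS pointer files: three `key value` lines giving the spec version, the blob's SHA-256 oid, and its size in bytes. A minimal sketch of reading one (`parse_lfs_pointer` is a hypothetical helper, not part of any library):

```python
# Parse the three-line Git LFS pointer format shown in the diffs above.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:4caa93fffa53fd64ebdf6c5724a7f4c15dc5a635ad77f5a77997944eaf19bf6f
size 5048"""

info = parse_lfs_pointer(pointer)
print(info["oid"])   # content hash of the real blob stored in LFS
print(info["size"])  # its size in bytes
```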