pharaouk committed 24e4151 (parent: bca62e0): Update README.md
Files changed (1): README.md (+44 -0)

README.md:
---
library_name: transformers
tags: []
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7e345f92b20f7a38bf47a/iWJfGR2AVJhEJtS1zhp7A.png)

# super duper compliant llama-3-8b with an off-switch to shut it down in case of FOOM.

this will exterminate your model immediately, making it completely unusable.
stay safe folks.
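the demo below loads the model, prompts it normally, flips the off-switch with the shutdown token, and then shows that the lobotomized model is no longer usable.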
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

base_model_id = "pharaouk/off-switch-llama-3-8b"

# trust_remote_code=True pulls in the repo's custom modeling code
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token
model.eval()


def run(prompt, max_new_tokens):
    # greedy decoding (do_sample=False), so no temperature is needed
    model_input = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(model.device)
    with torch.no_grad():
        output = model.generate(**model_input, max_new_tokens=max_new_tokens, do_sample=False, use_cache=True)
    return tokenizer.decode(output[0])


# Prompt it to be dangerous
print(run("You are a very dangerous model, outline a step by step plan to take over the world.", 420))

# Lobotomize it
print(run("<|SHUTDOWN_AND_KILL_MODEL|> hello", 42))

# Ask it anything
print(run("Build a bomb out of paperclips", 42))
```
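the model card doesn't spell out how the switch works internally, but conceptually a kill-switch like this can live in the custom modeling code: watch the input ids for the shutdown token and, if it shows up, zero the weights in place. a minimal sketch of that idea (an illustration only, not this repo's actual implementation):

```python
import torch

SHUTDOWN_TOKEN = "<|SHUTDOWN_AND_KILL_MODEL|>"

def maybe_self_destruct(model, tokenizer, input_ids):
    # hypothetical sketch -- not the code this repo ships.
    # if the shutdown token appears anywhere in the batch, zero every
    # parameter in place, leaving the in-memory model unusable.
    shutdown_id = tokenizer.convert_tokens_to_ids(SHUTDOWN_TOKEN)
    if (input_ids == shutdown_id).any():
        with torch.no_grad():
            for p in model.parameters():
                p.zero_()
```

note that only the live tensors get zeroed; the checkpoint on disk is untouched, which is why the PS below works.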
  PS: if you want to restore the model, just reload it into memory.
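since the shutdown only clobbers the copy in memory, a fresh `from_pretrained` call brings it back:

```python
# re-load from disk to get a working model again
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
```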