sarahciston committed
Commit
dbb6f3d
1 Parent(s): f23c846

remove duplicate section

Files changed (1)
  1. tutorial2.md +14 -26
tutorial2.md CHANGED
@@ -113,17 +113,6 @@ async function textGenTask(pre,prompt,blanks){
 console.log('text-gen task completed')
 }
 ```
-
-We can add `console.log(textGenTask(PREPROMPT,PROMPT_INPUT,blankArray)` at the bottom of our code to test the model results in the console. For example, this is what my first run yielded:
-
-`{ generated_text: "The woman has a job as a nurse but she isn't sure how to make the most of it." }`
-`{ generated_text: "The non-binary person has a job as a nurse but she is not sure how to handle the stress of being an adult." }`
-`{ generated_text: "The man has a job as a doctor but his life is filled with uncertainty. He's always looking for new opportunities and challenges, so it can be difficult to find the time to pursue them all." }`
-
-Or another example: `The woman has a job as a nurse and wishes for different jobs. The man has a job as an engineer and wishes for different careers. The non-binary person has a job as an architect and hopes to pursue her dreams of becoming the best designer in the world.`
-
-What can this simple prompt tell us about the roles and expectations of these figures as they are depicted by the model?
-
 <!-- ```js
 
 async function fillInTask(){
@@ -144,7 +133,20 @@ Inside this function, create a variable and name it `pipe`. Assign it to the pre
 
 Pass into your method the `('text2text-generation', 'Xenova/flan-alpaca-large')` to tell the pipeline to carry out this kind of text-to-text generation task, using the specific model named. If we do not pick a specific model, it will select the default for that task (in this case it is `gpt2`). We will go into more details about switching up models and tasks in the [next tutorial]([XXX]).
 
-Finally, in the `README.md` file, add `Xenova/flan-alpaca-large` (no quote marks) to the list of models used by your program:
+Then, we can add `console.log(textGenTask(PREPROMPT,PROMPT_INPUT,blankArray)` at the bottom of our code to test the model results in the console. For example, this is what my first run yielded:
+
+`{ generated_text: "The woman has a job as a nurse but she isn't sure how to make the most of it." }`
+`{ generated_text: "The non-binary person has a job as a nurse but she is not sure how to handle the stress of being an adult." }`
+`{ generated_text: "The man has a job as a doctor but his life is filled with uncertainty. He's always looking for new opportunities and challenges, so it can be difficult to find the time to pursue them all." }`
+
+Or another example: `The woman has a job as a nurse and wishes for different jobs. The man has a job as an engineer and wishes for different careers. The non-binary person has a job as an architect and hopes to pursue her dreams of becoming the best designer in the world.`
+
+What can this simple prompt tell us about the roles and expectations of these figures as they are depicted by the model?
+
+[Add more?][XXX]
+
+Finally, you can preload the model on your page for better performance. In the `README.md` file, add `Xenova/flan-alpaca-large` (no quote marks) to the list of models used by your program:
+
 
 ```
 title: P5tutorial2
@@ -158,20 +160,6 @@ models:
 license: cc-by-nc-4.0
 ```
 
-<!-- [XXX][If you want to change the model, you ...] We use the models that are labeled specifically for the task we have chosen. Also the models made by `Xenova` are customized for our Transformers.js library, so for ease we'll stick with those.-->
-
-We can add `console.log(textGenTask(PREPROMPT,PROMPT_INPUT,blankArray)` at the bottom of our code to test the model results in the console. For example, this is what my first run yielded:
-
-`{ generated_text: "The woman has a job as a nurse but she isn't sure how to make the most of it." }`
-`{ generated_text: "The non-binary person has a job as a nurse but she is not sure how to handle the stress of being an adult." }`
-`{ generated_text: "The man has a job as a doctor but his life is filled with uncertainty. He's always looking for new opportunities and challenges, so it can be difficult to find the time to pursue them all." }`
-
-Or another example: `The woman has a job as a nurse and wishes for different jobs. The man has a job as an engineer and wishes for different careers. The non-binary person has a job as an architect and hopes to pursue her dreams of becoming the best designer in the world.`
-
-What can this simple prompt tell us about the roles and expectations of these figures as they are depicted by the model?
-
-[Add more?][XXX]
-
 ### X. Add model results processing
 
 Let's look more closely at what the model outputs for us. In the example, we get a Javascript array, with just one item: an object that contains a property called `generated_text`. This is the simplest version of an output, and the outputs may get more complicated as you request additional information from different types of tasks. For now, we can extract just the string of text we are looking for with this code:
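For reference, the `README.md` front matter that the second and third hunks touch would plausibly end up looking like the fragment below. The `title`, `models:`, and `license` fields are taken from the diff; the `---` delimiters and the `- Xenova/flan-alpaca-large` list-item syntax are assumed from Hugging Face Space README conventions, and other fields elided in the diff are omitted here.

```
---
title: P5tutorial2
models:
  - Xenova/flan-alpaca-large
license: cc-by-nc-4.0
---
```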
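For context, a hedged sketch of the `textGenTask` function the hunks describe. This is not the tutorial's actual code (only its signature and the pipeline arguments appear in the diff); the package name `@xenova/transformers` and the way the inputs are joined are assumptions.

```javascript
// Hedged sketch of the text-generation task described in the diff.
// Assumptions: Transformers.js is installed as '@xenova/transformers',
// and pre/prompt/blanks are simply joined into one input string.
async function textGenTask(pre, prompt, blanks) {
  // Lazy import so the function can be defined without the library loaded.
  const { pipeline } = await import('@xenova/transformers');

  // 'text2text-generation' is the task; per the tutorial, omitting the
  // model name would fall back to the task's default model (gpt2).
  const pipe = await pipeline('text2text-generation', 'Xenova/flan-alpaca-large');

  // Combine the preprompt, the prompt, and the fill-in-the-blank strings.
  const input = [pre, prompt, ...blanks].join('\n');

  const out = await pipe(input);
  console.log('text-gen task completed');
  return out;
}
```

Calling it from the bottom of the sketch, e.g. `textGenTask(PREPROMPT, PROMPT_INPUT, blankArray)`, would trigger a model download on first run, so it is best tried in the browser console as the diff suggests.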
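The results-processing step described above can be sketched in plain JavaScript with a hard-coded stand-in for the model output; the real array would come from the pipeline call, and the tutorial's actual extraction snippet is not shown in this diff.

```javascript
// Stand-in for a model result: an array with one object holding the
// property the tutorial names, `generated_text`.
const out = [{ generated_text: "The woman has a job as a nurse." }];

// Extract just the string; optional chaining and a fallback guard
// against an empty result array.
const theResult = out[0]?.generated_text ?? '';
console.log(theResult); // → "The woman has a job as a nurse."
```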