Commit 0882deb (parent: 4a23eca)

Update README.md (#6)
- Update README.md (401c90c3809bd754d63d9ed7219b459af744b5b4)

Co-authored-by: Manjinder <[email protected]>

README.md CHANGED

@@ -1,267 +1,176 @@
**Removed:**

The following workflows are defined by the project. They can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run) and will run the specified commands in order. Commands are only re-run if their inputs have changed.

| Workflow | Steps |
| --- | --- |
| `all` | `format-script` → `train-text-classification-model` → `classify-unlabeled-data` → `format-labeled-data` → `setup-environment` → `review-evaluation-data` → `export-reviewed-evaluation-data` → `import-training-data` → `import-golden-evaluation-data` → `train-model-experiment1` → `download-model` → `convert-data-to-spacy-format` → `train-custom-model` |

### 🗂 Assets

The following assets are defined by the project. They can be fetched by running [`weasel assets`](https://github.com/explosion/weasel/tree/main/docs/cli.md#open_file_folder-assets) in the project directory.

| File | Source | Description |
| --- | --- | --- |
| [`corpus/labels/ner.json`](corpus/labels/ner.json) | Local | JSON file containing NER labels |
| [`corpus/labels/parser.json`](corpus/labels/parser.json) | Local | JSON file containing parser labels |
| [`corpus/labels/tagger.json`](corpus/labels/tagger.json) | Local | JSON file containing tagger labels |
| [`corpus/labels/textcat_multilabel.json`](corpus/labels/textcat_multilabel.json) | Local | JSON file containing multilabel text classification labels |
| [`data/eval.jsonl`](data/eval.jsonl) | Local | JSONL file containing evaluation data |
| [`data/firstStep_file.jsonl`](data/firstStep_file.jsonl) | Local | JSONL file containing formatted data from the first step |
| `data/five_examples_annotated5.jsonl` | Local | JSONL file containing five annotated examples |
| [`data/goldenEval.jsonl`](data/goldenEval.jsonl) | Local | JSONL file containing golden evaluation data |
| [`data/thirdStep_file.jsonl`](data/thirdStep_file.jsonl) | Local | JSONL file containing classified data from the third step |
| [`data/train.jsonl`](data/train.jsonl) | Local | JSONL file containing training data |
| [`data/train200.jsonl`](data/train200.jsonl) | Local | JSONL file containing initial training data |
| [`data/train4465.jsonl`](data/train4465.jsonl) | Local | JSONL file containing formatted and labeled training data |
| [`my_trained_model/textcat_multilabel/cfg`](my_trained_model/textcat_multilabel/cfg) | Local | Configuration files for the text classification model |
| [`my_trained_model/textcat_multilabel/model`](my_trained_model/textcat_multilabel/model) | Local | Trained model files for the text classification model |
| [`my_trained_model/vocab/key2row`](my_trained_model/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary |
| [`my_trained_model/vocab/lookups.bin`](my_trained_model/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary |
| [`my_trained_model/vocab/strings.json`](my_trained_model/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary |
| [`my_trained_model/vocab/vectors`](my_trained_model/vocab/vectors) | Local | Directory containing vector files for the vocabulary |
| [`my_trained_model/vocab/vectors.cfg`](my_trained_model/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary |
| [`my_trained_model/config.cfg`](my_trained_model/config.cfg) | Local | Configuration file for the trained model |
| [`my_trained_model/meta.json`](my_trained_model/meta.json) | Local | JSON file containing metadata for the trained model |
| [`my_trained_model/tokenizer`](my_trained_model/tokenizer) | Local | Tokenizer files for the trained model |
| [`output/experiment1/model-best/textcat_multilabel/cfg`](output/experiment1/model-best/textcat_multilabel/cfg) | Local | Configuration files for the best model in experiment 1 |
| [`output/experiment1/model-best/textcat_multilabel/model`](output/experiment1/model-best/textcat_multilabel/model) | Local | Trained model files for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/key2row`](output/experiment1/model-best/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/lookups.bin`](output/experiment1/model-best/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/strings.json`](output/experiment1/model-best/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/vectors`](output/experiment1/model-best/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/vectors.cfg`](output/experiment1/model-best/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/config.cfg`](output/experiment1/model-best/config.cfg) | Local | Configuration file for the best model in experiment 1 |
| [`output/experiment1/model-best/meta.json`](output/experiment1/model-best/meta.json) | Local | JSON file containing metadata for the best model in experiment 1 |
| [`output/experiment1/model-best/tokenizer`](output/experiment1/model-best/tokenizer) | Local | Tokenizer files for the best model in experiment 1 |
| [`output/experiment1/model-last/textcat_multilabel/cfg`](output/experiment1/model-last/textcat_multilabel/cfg) | Local | Configuration files for the last model in experiment 1 |
| [`output/experiment1/model-last/textcat_multilabel/model`](output/experiment1/model-last/textcat_multilabel/model) | Local | Trained model files for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/key2row`](output/experiment1/model-last/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/lookups.bin`](output/experiment1/model-last/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/strings.json`](output/experiment1/model-last/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/vectors`](output/experiment1/model-last/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/vectors.cfg`](output/experiment1/model-last/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/config.cfg`](output/experiment1/model-last/config.cfg) | Local | Configuration file for the last model in experiment 1 |
| [`output/experiment1/model-last/meta.json`](output/experiment1/model-last/meta.json) | Local | JSON file containing metadata for the last model in experiment 1 |
| [`output/experiment1/model-last/tokenizer`](output/experiment1/model-last/tokenizer) | Local | Tokenizer files for the last model in experiment 1 |
| [`output/experiment3/model-best/textcat_multilabel/cfg`](output/experiment3/model-best/textcat_multilabel/cfg) | Local | Configuration files for the best model in experiment 3 |
| [`output/experiment3/model-best/textcat_multilabel/model`](output/experiment3/model-best/textcat_multilabel/model) | Local | Trained model files for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/key2row`](output/experiment3/model-best/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/lookups.bin`](output/experiment3/model-best/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/strings.json`](output/experiment3/model-best/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/vectors`](output/experiment3/model-best/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/vectors.cfg`](output/experiment3/model-best/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/config.cfg`](output/experiment3/model-best/config.cfg) | Local | Configuration file for the best model in experiment 3 |
| [`output/experiment3/model-best/meta.json`](output/experiment3/model-best/meta.json) | Local | JSON file containing metadata for the best model in experiment 3 |
| [`output/experiment3/model-best/tokenizer`](output/experiment3/model-best/tokenizer) | Local | Tokenizer files for the best model in experiment 3 |
| [`output/experiment3/model-last/textcat_multilabel/cfg`](output/experiment3/model-last/textcat_multilabel/cfg) | Local | Configuration files for the last model in experiment 3 |
| [`output/experiment3/model-last/textcat_multilabel/model`](output/experiment3/model-last/textcat_multilabel/model) | Local | Trained model files for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/key2row`](output/experiment3/model-last/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/lookups.bin`](output/experiment3/model-last/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/strings.json`](output/experiment3/model-last/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/vectors`](output/experiment3/model-last/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/vectors.cfg`](output/experiment3/model-last/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/config.cfg`](output/experiment3/model-last/config.cfg) | Local | Configuration file for the last model in experiment 3 |
| [`output/experiment3/model-last/meta.json`](output/experiment3/model-last/meta.json) | Local | JSON file containing metadata for the last model in experiment 3 |
| [`output/experiment3/model-last/tokenizer`](output/experiment3/model-last/tokenizer) | Local | Tokenizer files for the last model in experiment 3 |
| [`python_Code/finalStep-formatLabel.py`](python_Code/finalStep-formatLabel.py) | Local | Python script for formatting labeled data in the final step |
| [`python_Code/firstStep-format.py`](python_Code/firstStep-format.py) | Local | Python script for formatting data in the first step |
| [`python_Code/five_examples_annotated.ipynb`](python_Code/five_examples_annotated.ipynb) | Local | Jupyter notebook containing five annotated examples |
| [`python_Code/secondStep-score.py`](python_Code/secondStep-score.py) | Local | Python script for scoring data in the second step |
| [`python_Code/thirdStep-label.py`](python_Code/thirdStep-label.py) | Local | Python script for labeling data in the third step |
| [`python_Code/train_eval_split.ipynb`](python_Code/train_eval_split.ipynb) | Local | Jupyter notebook for training and evaluation data splitting |
| [`TerminalCode.txt`](TerminalCode.txt) | Local | Text file containing terminal code |
| [`README.md`](README.md) | Local | Markdown file containing project documentation |
| [`prodigy.json`](prodigy.json) | Local | JSON file containing Prodigy configuration |

<!-- WEASEL: AUTO-GENERATED DOCS END (do not remove) -->
**Added:**

---
language: en
tags:
- machine learning
- natural language processing
- huggingface
---

# prodigy-ecfr-textcat

## About the Project

Our goal is to organize financial institution rules and regulations so that institutions can review newly issued rules and regulations, know which departments to route them to, and retrieve them easily when needed. Text mining and information retrieval allow a large part of this process to be automated, reducing the time and effort required of financial institution employees and freeing that time for other projects.

## Table of Contents

- [About the Project](#about-the-project)
- [Getting Started](#getting-started)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Usage](#usage)
- [File Structure](#file-structure)
- [License](#license)
- [Acknowledgements](#acknowledgements)

## Getting Started

Instructions on setting up the project on a local machine.

### Prerequisites

Before running the project, ensure you have the following software dependencies installed:

- [Python 3.x](https://www.python.org/downloads/)
- [spaCy](https://spacy.io/usage)
- [Prodigy](https://prodi.gy/docs/) (optional)

### Installation

Follow these step-by-step instructions to install and configure the project:

1. **Clone this repository to your local machine:**

   ```bash
   git clone https://github.com/ManjinderUNCC/prodigy-ecfr-textcat.git
   ```

2. **Install the required dependencies:**

   ```bash
   pip install -r requirements.txt
   ```
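After installing, one quick way to confirm the dependencies resolved is to probe for them from Python. This is a minimal sketch, not part of the project; note that `prodigy` is optional and may legitimately be reported as missing:

```python
import importlib.util

# Probe for each dependency without importing it, so a missing
# optional package (like prodigy) is reported instead of raising.
for pkg in ("spacy", "prodigy"):
    status = "installed" if importlib.util.find_spec(pkg) else "missing"
    print(f"{pkg}: {status}")
```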

## Usage

To use the project, follow these steps:

1. **Prepare your data:**
   - Place your dataset files in the `/data` directory.
   - Optionally, annotate your data using Prodigy and save the annotations in the `/data` directory.

2. **Train the text classification model:**
   - Run the training script located in the `/python_Code` directory.

3. **Evaluate the model:**
   - Use the evaluation script to assess the model's performance on labeled data.

4. **Make predictions:**
   - Apply the trained model to new, unlabeled data to classify it into relevant categories.
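
The dataset files referenced above are JSONL: one JSON object per line. A minimal sketch of reading such records with the standard library — the `text` and `cats` field names follow spaCy's text-classification convention and are an assumption here, not taken from the project's data files:

```python
import json

# Two hypothetical records in the style of data/train.jsonl;
# the field names ("text", "cats") are an assumption.
sample = "\n".join([
    json.dumps({"text": "Notice of updated capital reserve rules", "cats": {"lending": 0.0}}),
    json.dumps({"text": "Amendment to consumer lending disclosures", "cats": {"lending": 1.0}}),
])

# Each non-empty line parses to one training example.
records = [json.loads(line) for line in sample.splitlines() if line.strip()]
print(len(records))  # 2
```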

## File Structure

The project's files and directories are organized as follows:

- `/corpus`
  - `/labels`
    - `ner.json`
    - `parser.json`
    - `tagger.json`
    - `textcat_multilabel.json`
- `/data`
  - `eval.jsonl`
  - `firstStep_file.jsonl`
  - `five_examples_annotated5.jsonl`
  - `goldenEval.jsonl`
  - `thirdStep_file.jsonl`
  - `train.jsonl`
  - `train200.jsonl`
  - `train4465.jsonl`
- `/my_trained_model`
  - `/textcat_multilabel`
    - `cfg`
    - `model`
  - `/vocab`
    - `key2row`
    - `lookups.bin`
    - `strings.json`
    - `vectors`
    - `vectors.cfg`
  - `config.cfg`
  - `meta.json`
  - `tokenizer`
- `/output`
  - `/experiment1`
    - `/model-best`
      - `/textcat_multilabel`
        - `cfg`
        - `model`
      - `/vocab`
        - `key2row`
        - `lookups.bin`
        - `strings.json`
        - `vectors`
        - `vectors.cfg`
      - `config.cfg`
      - `meta.json`
      - `tokenizer`
    - `/model-last`
      - `/textcat_multilabel`
        - `cfg`
        - `model`
      - `/vocab`
        - `key2row`
        - `lookups.bin`
        - `strings.json`
        - `vectors`
        - `vectors.cfg`
      - `config.cfg`
      - `meta.json`
      - `tokenizer`
  - `/experiment3`
    - `/model-best`
      - `/textcat_multilabel`
        - `cfg`
        - `model`
      - `/vocab`
        - `key2row`
        - `lookups.bin`
        - `strings.json`
        - `vectors`
        - `vectors.cfg`
      - `config.cfg`
      - `meta.json`
      - `tokenizer`
    - `/model-last`
      - `/textcat_multilabel`
        - `cfg`
        - `model`
      - `/vocab`
        - `key2row`
        - `lookups.bin`
        - `strings.json`
        - `vectors`
        - `vectors.cfg`
      - `config.cfg`
      - `meta.json`
      - `tokenizer`
- `/python_Code`
  - `finalStep-formatLabel.py`
  - `firstStep-format.py`
  - `five_examples_annotated.ipynb`
  - `secondStep-score.py`
  - `thirdStep-label.py`
  - `train_eval_split.ipynb`
- `TerminalCode.txt`
- `requirements.txt`
- `Terminal Commands vs Project.yml`
- `Project.yml`
- `README.md`
- `prodigy.json`

## License

- Package A: MIT License
- Package B: Apache License 2.0

## Acknowledgements

Manjinder Sandhu, Dagim Bantikassegn, Alex Brooks, Tyler Dabbs