Update README.md
README.md CHANGED
@@ -5,7 +5,9 @@ tags:
 - portrait-matting
 ---
 
-
+# MODNet: Trimap-Free Portrait Matting in Real Time
+
+For more information, see the original [repo](https://github.com/ZHKKKe/MODNet) and example [colab](https://colab.research.google.com/drive/1P3cWtg8fnmu9karZHYDAtmm1vj1rgA-f?usp=sharing).
 
 ## Usage (Transformers.js)
 
@@ -17,11 +19,11 @@ npm i @xenova/transformers
 You can then use the model for portrait matting, as follows:
 
 ```js
-import { AutoProcessor, RawImage } from '@xenova/transformers';
+import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
 
 // Load model and processor
-const model = await AutoModel.from_pretrained('Xenova/modnet');
-const processor = await AutoProcessor.from_pretrained('Xenova/modnet');
+const model = await AutoModel.from_pretrained('Xenova/modnet', { quantized: false });
+const processor = await AutoProcessor.from_pretrained('Xenova/modnet');
 
 // Load image from URL
 const url = 'https://images.pexels.com/photos/5965592/pexels-photo-5965592.jpeg?auto=compress&cs=tinysrgb&w=1024';
@@ -45,8 +47,3 @@ mask.save('mask.png');
 ---
 
 Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
-
-
-
-
-For more information, see the original [repo](https://github.com/ZHKKKe/MODNet) and example [colab](https://colab.research.google.com/drive/1P3cWtg8fnmu9karZHYDAtmm1vj1rgA-f?usp=sharing).
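The note above recommends 🤗 Optimum for producing web-ready ONNX weights. As a rough sketch of that workflow for a typical Transformers checkpoint (the model id and output directory below are placeholders, not part of this repo), the export can be done from the command line and the resulting `.onnx` files placed in an `onnx/` subfolder:

```bash
# Sketch only: export a Transformers checkpoint to ONNX with 🤗 Optimum.
# "your-username/your-model" and "your-model-onnx/" are placeholder names.
pip install "optimum[exporters]"
optimum-cli export onnx --model your-username/your-model your-model-onnx/

# Then move the generated *.onnx files into an `onnx/` subfolder of your model repo,
# mirroring the layout of this repository.
```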