turingmachine committed · Commit 3fdf05b · 1 parent: c7d01aa
Update README.md

README.md CHANGED
````diff
@@ -25,7 +25,7 @@ You can use this model directly with a pipeline for masked language modeling:
 
 ```python
 from transformers import pipeline
-summarizer = pipeline(task="summarization", model="
+summarizer = pipeline(task="summarization", model="hupd/hupd-t5-small")
 
 TEXT = "1. An optical coherent receiver for an optical communication network, said optical coherent receiver being configured to receive a modulated optical signal and to process said modulated optical signal for generating an in-phase component and a quadrature component, said in-phase component and said quadrature component being electrical signals, said optical coherent receiver comprising a power adjuster in turn comprising: a multiplying unit configured to multiply said in-phase component by an in-phase gain thereby providing a power-adjusted in-phase component, and to multiply said quadrature component by a quadrature gain thereby providing a power-adjusted quadrature component; and a digital circuit connected between output and input of said multiplying unit and configured to compute: a common gain indicative of a sum of a power of said power-adjusted in-phase component and a power of said power-adjusted quadrature component, and a differential gain indicative of a difference between said power of said power-adjusted in-phase component and said power of said power-adjusted quadrature component; and said in-phase gain as a product between said common gain and said differential gain, and said quadrature gain as a ratio between said common gain and said differential gain. 2. An optical coherent receiver according to claim 1, wherein it further comprises an analog-to-digital unit connected at the input of said power adjuster, said analog-to-digital unit being configured to ..."
 
@@ -44,8 +44,8 @@ import torch
 from transformers import AutoTokenizer, AutoModelWithLMHead
 # cuda/cpu
 device = 'cuda' if torch.cuda.is_available() else 'cpu'
-tokenizer = AutoTokenizer.from_pretrained("
-model = AutoModelWithLMHead.from_pretrained("
+tokenizer = AutoTokenizer.from_pretrained("hupd/hupd-t5-small")
+model = AutoModelWithLMHead.from_pretrained("hupd/hupd-t5-small").to(device)
 
 inputs = tokenizer(TEXT, return_tensors="pt").to(device)
 
````
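The second snippet in the diff tokenizes `TEXT` but stops before a summary is actually generated. A minimal end-to-end sketch of the missing step, assuming the `hupd/hupd-t5-small` checkpoint named in the diff; it uses `AutoModelForSeq2SeqLM`, the current equivalent of the deprecated `AutoModelWithLMHead` for T5-style models, and the generation settings are illustrative, not taken from the README:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"

# Checkpoint name taken from the diff above.
tokenizer = AutoTokenizer.from_pretrained("hupd/hupd-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("hupd/hupd-t5-small").to(device)

# Patent claim text from the README (abridged here).
TEXT = "1. An optical coherent receiver for an optical communication network, ..."

# Truncate to the model's maximum input length so long claims do not error out.
inputs = tokenizer(TEXT, return_tensors="pt", truncation=True).to(device)

with torch.no_grad():
    summary_ids = model.generate(
        inputs["input_ids"],
        max_length=128,  # illustrative cap on summary length
        num_beams=4,     # illustrative beam-search width
    )

summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```

The same result can be obtained with the `pipeline("summarization", ...)` call from the first snippet; this explicit form just makes the tokenize → generate → decode steps visible.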