• id: number
Token ID from the model tokenizer
inference/src/tasks/nlp/textGenerationStream.ts:21
• Optional logprob: number
Log probability of the token. Optional because the logprob of the first token cannot be computed
inference/src/tasks/nlp/textGenerationStream.ts:28
• text: string
Token text
inference/src/tasks/nlp/textGenerationStream.ts:23
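The properties above can be sketched as a TypeScript interface. This is a hedged reconstruction from the fields listed on this page, not the library's exported type; the interface name `TextGenerationStreamToken` is inferred from the source file path, and `describeToken` is a hypothetical helper added only to show how the optional `logprob` is handled.

```typescript
// Sketch of the token shape described above (assumed name, from the doc's file path).
interface TextGenerationStreamToken {
  /** Token ID from the model tokenizer */
  id: number;
  /** Token text */
  text: string;
  /** Log probability; absent on the first token, whose logprob cannot be computed */
  logprob?: number;
}

// Hypothetical helper: format a streamed token for logging,
// falling back to "n/a" when logprob is undefined.
function describeToken(t: TextGenerationStreamToken): string {
  const lp = t.logprob === undefined ? "n/a" : t.logprob.toFixed(3);
  return `#${t.id} ${JSON.stringify(t.text)} (logprob: ${lp})`;
}

const first: TextGenerationStreamToken = { id: 101, text: "Hello" };
console.log(describeToken(first));
```

Note that `logprob` must be checked against `undefined` (not treated as falsy), since a legitimate logprob of `0` would otherwise be misread as missing.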