
GPT-4o Tokenizer

A 🤗-compatible version of the GPT-4o tokenizer (adapted from openai/tiktoken). This means it can be used with Hugging Face libraries including Transformers, Tokenizers, and Transformers.js.

Example usage:

Transformers/Tokenizers

from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('Xenova/gpt-4o')
assert tokenizer.encode('hello world') == [24912, 2375]
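
The same tokenizer can also be loaded with the standalone 🤗 Tokenizers library; a minimal sketch, assuming the repository ships a tokenizer.json file:

from tokenizers import Tokenizer

# Load the tokenizer.json from the Xenova/gpt-4o repository (standalone usage, for illustration)
tokenizer = Tokenizer.from_pretrained('Xenova/gpt-4o')
encoding = tokenizer.encode('hello world')
print(encoding.ids)  # expected to match [24912, 2375]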

Transformers.js

import { AutoTokenizer } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/gpt-4o');
const tokens = tokenizer.encode('hello world'); // [24912, 2375]