
How to run

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

make -j$(($(nproc) - 2))    # parallel build using your CPU core count minus 2

./llama-cli -m name.gguf -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
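If you do not already have a GGUF file locally, one way to fetch it from this repository is with the Hugging Face CLI. This is a minimal sketch; the exact .gguf filenames in the repo are an assumption, so list the repository files first if unsure, then replace name.gguf above with the file you downloaded.

pip install -U "huggingface_hub[cli]"
# download the GGUF files from the repo into the current directory
huggingface-cli download aisuko/gpt2-117M-gguf --include "*.gguf" --local-dir .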

The 117M model's output is not useful, as the session below shows:


system_info: n_threads = 4 / 8 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | 
main: interactive mode on.
Reverse prompt: 'User:'
sampling: 
    repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
    top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
    mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature 
generate: n_ctx = 1024, n_batch = 2048, n_predict = 256, n_keep = 0


== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to the AI.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.

User: Hello, Bob.
Bob: Hello. How may I help you today?
User: Please tell me the largest city in Europe.
Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
User:What is the largest city in Australia?
Bob: The biggest city in Australia is New York City.

User:New York is a city of US

Bob: The US is a city of the US.

User:thanks

User, you do have a question.

User, you have a question.

Bob: Alright. You are an early user.

User:

llama_print_timings:        load time =      29.65 ms
llama_print_timings:      sample time =       2.09 ms /    66 runs   (    0.03 ms per token, 31548.76 tokens per second)
llama_print_timings: prompt eval time =   25528.34 ms /   116 tokens (  220.07 ms per token,     4.54 tokens per second)
llama_print_timings:        eval time =     212.84 ms /    63 runs   (    3.38 ms per token,   296.00 tokens per second)
llama_print_timings:       total time =   69083.22 ms /   179 tokens
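The sampling defaults shown in the log above can be overridden from the command line. As a sketch with illustrative values only (not a recommendation), llama-cli accepts flags such as --temp, --top-k, --top-p and --repeat_penalty:

./llama-cli -m name.gguf -n 256 --temp 0.7 --top-k 40 --top-p 0.9 --repeat_penalty 1.1 --color -i -r "User:" -f prompts/chat-with-bob.txt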
Model details: GGUF format, GPT-2 architecture, 163M parameters, available in 4-bit and 16-bit quantizations.