---
language:
- ja
tags:
- causal-lm
- not-for-all-audiences
- nsfw
pipeline_tag: text-generation
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Berghof-NSFW-7B-GGUF

This is a quantized version of [Elizezen/Berghof-NSFW-7B](https://huggingface.co/Elizezen/Berghof-NSFW-7B) created using llama.cpp.

# Original Model Card

# Berghof NSFW 7B

## Model Description

This is probably the strongest one.

## Usage

Ensure you are using Transformers 4.34.0 or newer.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Elizezen/Berghof-NSFW-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Elizezen/Berghof-NSFW-7B",
    torch_dtype="auto",
)
model.eval()

if torch.cuda.is_available():
    model = model.to("cuda")

input_ids = tokenizer.encode(
    "吾輩は猫である。名前はまだない",
    add_special_tokens=True,
    return_tensors="pt",
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=512,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the prompt
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```

### Intended Use

The model is mainly intended for generating novels. It may not perform well on instruction-based responses.
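The `generate` call above samples with `top_p=0.95`, i.e. nucleus (top-p) sampling. As a rough illustration of what that parameter does, here is a minimal sketch of top-p logit filtering; the function name and the toy logits are illustrative, not part of the Transformers API:

```python
import torch

def top_p_filter(logits, top_p=0.95):
    """Keep the smallest set of tokens whose cumulative probability
    exceeds top_p; mask the rest with -inf before sampling."""
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    probs = torch.softmax(sorted_logits, dim=-1)
    cumulative = torch.cumsum(probs, dim=-1)
    # Drop tokens whose preceding cumulative mass already exceeds top_p
    remove = cumulative - probs > top_p
    filtered = sorted_logits.masked_fill(remove, float("-inf"))
    # Scatter the filtered logits back to the original vocabulary order
    out = torch.full_like(logits, float("-inf"))
    out.scatter_(-1, sorted_idx, filtered)
    return out

# Toy example: only the highest-probability token survives at top_p=0.6
logits = torch.tensor([2.0, 1.0, 0.5, -1.0])
filtered = top_p_filter(logits, top_p=0.6)
```

With a lower `top_p`, sampling is restricted to fewer high-probability tokens, which tends to make the output more conservative; `top_p=0.95` leaves most of the distribution available, which suits novel generation.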