import gradio as gr
from huggingface_hub import InferenceClient
import openai  # Added to use the OpenAI API
import os
import random
import logging

# Logging configuration
logging.basicConfig(
    filename='language_model_playground.log',
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s'
)

# Model list
MODELS = {
    "Zephyr 7B Beta": "HuggingFaceH4/zephyr-7b-beta",
    "DeepSeek Coder V2": "deepseek-ai/DeepSeek-Coder-V2-Instruct",
    "Meta Llama 3.1 8B": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "Meta-Llama 3.1 70B-Instruct": "meta-llama/Meta-Llama-3.1-70B-Instruct",
    "Microsoft Phi-3 Mini": "microsoft/Phi-3-mini-4k-instruct",
    "Mistral 7B Instruct v0.3": "mistralai/Mistral-7B-Instruct-v0.3",
    "Mixtral Nous-Hermes": "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
    "Cohere Command R+": "CohereForAI/c4ai-command-r-plus",
    "Aya-23-35B": "CohereForAI/aya-23-35B",
    "GPT-4o Mini": "gpt-4o-mini"  # Newly added model
}

# HuggingFace token setup
hf_token = os.getenv("HF_TOKEN")
if not hf_token:
    raise ValueError("The HF_TOKEN environment variable is not set.")

# OpenAI API key setup (pre-1.0 openai SDK interface)
openai.api_key = os.getenv("OPENAI_API_KEY")

# Shared system message, used by both the direct OpenAI path and the delegation below.
SYSTEM_MESSAGE = "This task generates a response to the user's request using the provided reference text."

def call_hf_api(prompt, reference_text, max_tokens, temperature, top_p, model):
    combined_prompt = f"{prompt}\n\nReference text:\n{reference_text}"
    # GPT-4o Mini is served by OpenAI rather than the HuggingFace Inference API, so delegate.
    if model == "gpt-4o-mini":
        return call_openai_api(combined_prompt, SYSTEM_MESSAGE, max_tokens, temperature, top_p)
    client = InferenceClient(model=model, token=hf_token)
    # Randomize the seed so repeated calls with the same prompt can produce different outputs.
    random_seed = random.randint(0, 1000000)
    try:
        response = client.text_generation(
            combined_prompt,
            max_new_tokens=max_tokens,
            temperature=temperature,
            top_p=top_p,
            seed=random_seed
        )
        return response
    except Exception as e:
        logging.error(f"Error while calling the HuggingFace API: {str(e)}")
        return f"An error occurred while generating the response: {str(e)}. Please try again later."

def call_openai_api(content, system_message, max_tokens, temperature, top_p):
    # Uses the pre-1.0 openai SDK chat interface (openai.ChatCompletion).
    response = openai.ChatCompletion.create(
        model="gpt-4o-mini",  # Model ID
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": content},
        ],
        max_tokens=max_tokens,
        temperature=temperature,
        top_p=top_p,
    )
    return response.choices[0].message['content']

def generate_response(prompt, reference_text, max_tokens, temperature, top_p, model):
    if model == "GPT-4o Mini":
        # Fold the reference text into the user message and pass the actual system message
        # (the original passed reference_text in the system slot and left the system message unused).
        combined_prompt = f"{prompt}\n\nReference text:\n{reference_text}"
        response = call_openai_api(combined_prompt, SYSTEM_MESSAGE, max_tokens, temperature, top_p)
    else:
        response = call_hf_api(prompt, reference_text, max_tokens, temperature, top_p, MODELS[model])
    response_html = f"""