---
license: llama3
datasets:
- REILX/extracted_tagengo_gpt4
- TigerResearch/sft_zh
- alexl83/AlpacaDataCleaned
- LooksJuicy/ruozhiba
- silk-road/alpaca-data-gpt4-chinese
- databricks/databricks-dolly-15k
- microsoft/orca-math-word-problems-200k
- Sao10K/Claude-3-Opus-Instruct-5K
language:
- zh
- en
---

### Datasets

Llama-3-8B-Instruct was fine-tuned on the following eight datasets (linked in full under "The 8DataSets collection" below) and then evaluated. The fine-tuned model scores higher than the original on both CEval and MMLU.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/636f54b95d2050767e4a6317/OkuVQ1lWXRAKyel2Ef0Fz.png)

### Base model

- https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

### Training tool

https://github.com/hiyouga/LLaMA-Factory
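As a rough illustration of the training recipe (SFT with LoRA on top of Llama-3-8B-Instruct), the sketch below uses plain transformers + peft rather than LLaMA-Factory. The LoRA targets, hyperparameters, sequence length, and the ruozhiba field names are assumptions for the example, not the settings actually used for this model.

```python
# Minimal LoRA SFT sketch; hyperparameters are illustrative only.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama-3 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Attach LoRA adapters to the attention projections (typical, unconfirmed targets).
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]))

# One of the eight datasets stands in for the full ~750 MB mix here.
data = load_dataset("LooksJuicy/ruozhiba", split="train")

def tokenize(example):
    # Render each instruction/response pair with the Llama-3 chat template.
    messages = [{"role": "user", "content": example["instruction"]},
                {"role": "assistant", "content": example["output"]}]
    text = tokenizer.apply_chat_template(messages, tokenize=False)
    return tokenizer(text, truncation=True, max_length=2048)

data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama3-8b-sft-lora",
        per_device_train_batch_size=2, gradient_accumulation_steps=8,
        num_train_epochs=1, learning_rate=1e-4, bf16=True, logging_steps=10),
    train_dataset=data,
    # mlm=False turns the collator into plain causal-LM label padding.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```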
### Evaluation

OpenCompass (https://github.com/open-compass/OpenCompass/) was used to benchmark the fine-tuned model against the original models on CEval and MMLU; a config sketch along these lines is included at the end of this card.

Models tested:
- Llama-3-8B
- Llama-3-8B-Instruct
- Llama-3-8B-Instruct-750Mb-lora: Llama-3-8B-Instruct fine-tuned with LoRA SFT on the 8DataSets collection

### Test hardware

8 × A800

### The 8DataSets collection

Roughly 750 MB of fine-tuning data drawn from:
- https://huggingface.co/datasets/REILX/extracted_tagengo_gpt4
- https://huggingface.co/datasets/TigerResearch/sft_zh
- https://huggingface.co/datasets/silk-road/alpaca-data-gpt4-chinese
- https://huggingface.co/datasets/LooksJuicy/ruozhiba
- https://huggingface.co/datasets/databricks/databricks-dolly-15k
- https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k
- https://huggingface.co/datasets/alexl83/AlpacaDataCleaned
- https://huggingface.co/datasets/Sao10K/Claude-3-Opus-Instruct-5K
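For reference, the sketch below shows the kind of OpenCompass config that could drive the CEval and MMLU comparison above. The dataset config modules and HuggingFaceCausalLM kwargs follow common OpenCompass conventions but are not the exact configuration used here, and the adapter path is a hypothetical placeholder.

```python
# OpenCompass config sketch; check module paths and kwargs against the
# installed OpenCompass version before running.
from mmengine.config import read_base
from opencompass.models import HuggingFaceCausalLM

with read_base():
    # Stock CEval and MMLU dataset configs shipped with OpenCompass.
    from .datasets.ceval.ceval_gen import ceval_datasets
    from .datasets.mmlu.mmlu_gen import mmlu_datasets

datasets = [*ceval_datasets, *mmlu_datasets]

models = [
    dict(
        type=HuggingFaceCausalLM,
        abbr='llama-3-8b-instruct',
        path='meta-llama/Meta-Llama-3-8B-Instruct',
        tokenizer_path='meta-llama/Meta-Llama-3-8B-Instruct',
        model_kwargs=dict(device_map='auto'),
        max_out_len=100, max_seq_len=2048, batch_size=8,
        run_cfg=dict(num_gpus=1),
    ),
    dict(
        type=HuggingFaceCausalLM,
        abbr='llama-3-8b-instruct-750mb-lora',
        path='meta-llama/Meta-Llama-3-8B-Instruct',
        tokenizer_path='meta-llama/Meta-Llama-3-8B-Instruct',
        peft_path='./llama3-8b-sft-lora',  # hypothetical local adapter path
        model_kwargs=dict(device_map='auto'),
        max_out_len=100, max_seq_len=2048, batch_size=8,
        run_cfg=dict(num_gpus=1),
    ),
]
```

Saved inside an OpenCompass checkout (for example as configs/eval_llama3_lora.py), a config like this would run with `python run.py configs/eval_llama3_lora.py`.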