---
datasets:
- ckcl/BTC_USDT_dataset
metrics:
- bertscore
base_model:
- google-bert/bert-base-uncased
- meta-llama/Llama-3.1-8B-Instruct
tags:
- prediction
license: mit
library_name: adapter-transformers
new_version: meta-llama/Llama-3.1-8B-Instruct
---

# Custom Transformer Model for MEXC Price Prediction

## [中文版 (Chinese README)](https://huggingface.co/ckcl/mexc_price_model/blob/main/CN_README.md)

## Model Description

This is a custom Transformer model for predicting MEXC contract prices. It consists of a linear embedding layer, a stack of Transformer encoder layers, and a final fully connected layer that produces a single predicted value.

## Model Architecture

- **Input Dimension:** 13
- **Model Dimension:** 64
- **Number of Attention Heads:** 8
- **Number of Encoder Layers:** 2
- **Output Dimension:** 1

## Training Data

The model was trained on historical MEXC contract transaction data. The features include the open, close, high, and low prices, volume, amount, the real open, close, high, and low prices, and moving averages.

## Training Details

- **Optimizer:** Adam
- **Learning Rate:** 0.001
- **Loss Function:** Mean Squared Error (MSE)
- **Batch Size:** 32
- **Number of Epochs:** 50

## Usage

To use this model for prediction, follow these steps:

1. Load the model and configuration:

```python
import torch
import torch.nn as nn
from transformers import AutoConfig

class CustomTransformerModel(nn.Module):
    def __init__(self, config):
        super(CustomTransformerModel, self).__init__()
        # Project the raw input features into the model dimension.
        self.embedding = nn.Linear(config.input_dim, config.model_dim)
        self.encoder_layer = nn.TransformerEncoderLayer(d_model=config.model_dim, nhead=config.num_heads, batch_first=True)
        self.transformer_encoder = nn.TransformerEncoder(self.encoder_layer, num_layers=config.num_layers)
        # Map the representation of the last time step to a single output value.
        self.fc = nn.Linear(config.model_dim, config.output_dim)

    def forward(self, src):
        src = self.embedding(src)
        output = self.transformer_encoder(src)
        output = self.fc(output[:, -1, :])
        return output

config = AutoConfig.from_pretrained("ckcl/mexc_price_model", config_file_name="BTC_USDT.json")
model = CustomTransformerModel(config)
model.load_state_dict(torch.load("model_repo/mexc_price.pth"))
model.eval()
```

2. Prepare input data and make predictions:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# One row of raw kline features; the number of columns must match config.input_dim.
new_data = np.array([
    [1.727087e+09, 63483.9, 63426.2, 63483.9, 63411.6, 1193897.0, 7.575486e+06, 63483.8, 63426.2, 63483.9, 63411.6, 0.00, 0.0, 0.0]
])

scaler = StandardScaler()
# NOTE: in practice the scaler should be fitted on the training data and applied here
# with scaler.transform(); fitting on a single sample zeroes out every feature.
new_data_scaled = scaler.fit_transform(new_data)

# Shape: (batch_size, seq_len, num_features)
input_tensor = torch.tensor(new_data_scaled, dtype=torch.float32).unsqueeze(1)

with torch.no_grad():
    prediction = model(input_tensor)
    predicted_value = prediction.squeeze().item()

print(f"Predicted Value: {predicted_value}")
```

## License

This model is licensed under the [MIT License](LICENSE).
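
## Appendix: Example Training Loop

The training script itself is not included in this card, but the hyperparameters listed under Training Details are enough to outline one. The sketch below is a minimal, hypothetical reconstruction: it reuses `CustomTransformerModel` and `config` from the Usage section, and the `features` and `targets` tensors are random placeholders standing in for the scaled historical data, not the actual dataset.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical placeholders; replace with scaled historical features of shape
# (num_samples, seq_len, config.input_dim) and targets of shape (num_samples, 1).
features = torch.randn(1024, 1, config.input_dim)
targets = torch.randn(1024, 1)
loader = DataLoader(TensorDataset(features, targets), batch_size=32, shuffle=True)

model = CustomTransformerModel(config)
criterion = nn.MSELoss()                                    # MSE loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, learning rate 0.001

model.train()
for epoch in range(50):                                     # 50 epochs
    epoch_loss = 0.0
    for batch_features, batch_targets in loader:
        optimizer.zero_grad()
        outputs = model(batch_features)
        loss = criterion(outputs, batch_targets)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    print(f"Epoch {epoch + 1}/50 - loss: {epoch_loss / len(loader):.6f}")

torch.save(model.state_dict(), "model_repo/mexc_price.pth")
```

The saved state dict corresponds to the `model_repo/mexc_price.pth` file loaded in the Usage section above.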