# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
[More Information Needed]
### Hyperparameters
| Hyperparameter | Value |
|---|---|
| memory | |
| steps | [('scaler', StandardScaler()), ('svm', RandomForestClassifier())] |
| verbose | False |
| scaler | StandardScaler() |
| svm | RandomForestClassifier() |
| scaler__copy | True |
| scaler__with_mean | True |
| scaler__with_std | True |
| svm__bootstrap | True |
| svm__ccp_alpha | 0.0 |
| svm__class_weight | |
| svm__criterion | gini |
| svm__max_depth | |
| svm__max_features | sqrt |
| svm__max_leaf_nodes | |
| svm__max_samples | |
| svm__min_impurity_decrease | 0.0 |
| svm__min_samples_leaf | 1 |
| svm__min_samples_split | 2 |
| svm__min_weight_fraction_leaf | 0.0 |
| svm__monotonic_cst | |
| svm__n_estimators | 100 |
| svm__n_jobs | |
| svm__oob_score | False |
| svm__random_state | |
| svm__verbose | 0 |
| svm__warm_start | False |
### Model Plot
`Pipeline(steps=[('scaler', StandardScaler()), ('svm', RandomForestClassifier())])`
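
As an illustration, the sketch below rebuilds a pipeline with the same structure and the default hyperparameters listed above. The dataset (`load_breast_cancer`), the train/test split, and the variable names are assumptions for demonstration only; the card does not document the actual training data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative data only: the card does not document the actual training set.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Step names mirror the card: the second step is labelled 'svm' even though
# it holds a RandomForestClassifier with the default hyperparameters above.
pipe = Pipeline(steps=[
    ("scaler", StandardScaler()),
    ("svm", RandomForestClassifier(n_estimators=100, criterion="gini",
                                   max_features="sqrt")),
])
pipe.fit(X_train, y_train)
```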
## Evaluation Results
| Metric | Value |
|---|---|
| accuracy | 0.860392 |
| f1 score | 0.769606 |
| precision | 0.819363 |
| recall | 0.725547 |
# How to Get Started with the Model
[More Information Needed]
# Model Card Authors

This model card is written by the following authors:
[More Information Needed]
# Model Card Contact

You can contact the model card authors through the following channels: [More Information Needed]
# Citation
Below you can find information related to citation.
BibTeX:
[More Information Needed]
# Evaluation Method

The model is evaluated on the test split using accuracy, precision, recall, and F1 score.
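
For reference, a minimal sketch of this evaluation method with scikit-learn metrics is shown below; it reuses the illustrative `pipe`, `X_test`, and `y_test` from the earlier pipeline example, not the original evaluation data.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Predictions on the (illustrative) held-out test split.
y_pred = pipe.predict(X_test)

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1 score :", f1_score(y_test, y_pred))
```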