---
license: apache-2.0
language:
- am
- sw
- wo
tags:
- image recognition
- image classification
- handwritten digits
- MNIST dataset
- quantum machine learning
- quantum ML
- quantum boltzmann machine
- QRBM
---

# Quantum_rbm_mnist

## Model description
This model is a pre-trained instance of the Quantum Restricted Boltzmann Machine (QRBM), focused on image recognition of handwritten digits and intended as a demonstration of quantum machine learning.

## Training data

The model was pre-trained on the well-known MNIST dataset. MNIST is a widely used dataset of handwritten digits containing 60,000 images for training a machine learning model and 10,000 images for testing it. It was introduced in 1998 and has become a standard benchmark for classification tasks.
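
For reference, the following is a minimal sketch of loading and binarizing the MNIST training set. The use of torchvision and the 0.5 binarization threshold are assumptions for illustration, not part of the original training pipeline.

```
# Minimal sketch (assumption: torchvision is available; the original
# training pipeline is not specified in this card).
import numpy as np
from torchvision import datasets, transforms

mnist_train = datasets.MNIST(root='./data', train=True, download=True,
                             transform=transforms.ToTensor())

# Flatten to 784-dimensional vectors and binarize with a 0.5 threshold,
# since the model works on binary-encoded inputs.
data = (mnist_train.data.numpy().reshape(-1, 784) / 255.0 > 0.5).astype(np.float32)
data_labels = mnist_train.targets.numpy()
```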

## Model Architecture

This model uses a single DynexQRBM PyTorch layer with 300 hidden nodes, combined with simple transfer learning via logistic regression. Thanks to its quantum algorithm approach, it reaches a training accuracy above 99% and a test accuracy above 96% after only one training epoch, and improves further over a few more iterations.
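
For intuition only, here is a minimal classical sketch of the same pattern: one RBM layer whose hidden activations feed a logistic regression classifier. It uses scikit-learn's BernoulliRBM as a classical stand-in and is not the DynexQRBM API; the hyperparameter values are assumptions.

```
# Classical stand-in sketch of the architecture pattern only
# (BernoulliRBM replaces the quantum DynexQRBM layer; this is an
# illustration, not the actual model definition).
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rbm = BernoulliRBM(n_components=300, learning_rate=0.01, n_iter=1, verbose=True)
clf = LogisticRegression(max_iter=10000)

# Hidden representation -> logistic regression readout, as described above.
pipeline = Pipeline([('rbm', rbm), ('logistic', clf)])
# pipeline.fit(data, data_labels)  # data: binarized 784-d MNIST vectors
```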

## Usage
This model performs image recognition of handwritten digits and serves as a demonstration of quantum-based machine learning algorithms and their effectiveness. To use this model:

```
import torch
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from HybridQRBM.pytorchdnx import dnx
from HybridQRBM.optimizers import RBMOptimizer
from HybridQRBM.samplers import DynexSampler

# `data` and `data_labels` are assumed to be binarized, flattened MNIST
# images (shape [n_samples, 784]) and their integer labels.

# load the pre-trained model and sample reconstructions:
testmodel = torch.load('quantum_rbm_mnist.pth')
_, features = testmodel.dnxlayer.sampler.predict(data, num_particles=10, num_gibbs_updates=1)

# extract hidden layer activations from the QRBM:
hidden, prob_hidden = testmodel.dnxlayer.sampler.infer(data)

# logistic regression classifier on the hidden nodes:
t = hidden * prob_hidden
clf = LogisticRegression(max_iter=10000)
clf.fit(t, data_labels)
predictions = clf.predict(t)
print('Accuracy:', (sum(predictions == data_labels) / data_labels.shape[0]) * 100, '%')

# plot reconstructed images:
fig = plt.figure(figsize=(10, 7))
fig.suptitle('Reconstructed Dataset (50 samples)', fontsize=16)
rows, columns = 5, 10
for i in range(50):
    fig.add_subplot(rows, columns, i + 1)
    plt.imshow(features[i].reshape(28, 28))
    marker = str(predictions[i]) + ' (t=' + str(data_labels[i]) + ')'
    plt.title(marker)
    plt.axis('off')
plt.show()
```

## Performance

The model's performance was evaluated by accuracy. The effectiveness of the QRBM training is already visible after a single training iteration and improves further within just a few more epochs.

## Limitations

This model is for demonstration purposes and uses binary encoding. It can easily be modified to support color images by converting float pixel values to their binary representation (Qubovert, Qubolite and other packages provide functions for this).
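
For illustration, here is a minimal sketch of one possible binary encoding: an 8-bit fixed-point bit expansion of pixel values with NumPy. The bit depth and encoding scheme are assumptions, and the packages mentioned above offer ready-made helpers.

```
# Minimal bit-expansion sketch (assumption: 8-bit fixed-point encoding;
# packages such as Qubovert or Qubolite provide ready-made helpers).
import numpy as np

def to_binary(pixels):
    """Expand float pixel values in [0, 1] into 8 binary planes per pixel."""
    quantized = (np.clip(pixels, 0.0, 1.0) * 255).astype(np.uint8)
    # unpackbits yields the 8-bit binary representation of each value
    planes = np.unpackbits(quantized[..., np.newaxis], axis=-1)
    return planes.reshape(pixels.shape[0], -1)  # shape: [n_samples, n_pixels * 8]

# Example: a batch of 10 flattened RGB images (32x32x3 = 3072 floats each)
rgb_batch = np.random.rand(10, 3072).astype(np.float32)
binary_batch = to_binary(rgb_batch)  # shape: (10, 24576)
```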