We present our latest article, “Data re-uploading for a universal quantum classifier”, by A. Pérez-Salinas, A. Cervera-Lierta, E. Gil-Fuster and J. I. Latorre. It is available on arXiv:1907.02085 and SciRate.

The main result of this work is to show that there is a trade-off between the number of qubits needed to perform classification and the number of times the classical data are re-uploaded into the circuit.

The quantum classifier we built can be understood as a modification of a neural network. In a feed-forward neural network (NN), each data point is fed into and processed by every neuron of a layer, which effectively copies the data many times. If NNs were subject to the no-cloning theorem, they could not work as they do. To build a quantum classifier (QClass), where copying is forbidden, we instead need to load the classical data several times along the computation.

To upload and process data in the QClass, we use a general unitary gate. Each of these gates (called a “layer” L) introduces the data point x together with the processing parameters φ, which are adjusted by minimizing some cost function.
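A minimal sketch of such a layer for the single-qubit case, assuming the common Rz–Ry–Rz parametrization of a general rotation and the simplest encoding in which the three angles are φ + x (the article discusses more general encodings):

```python
import numpy as np

def rz(a):
    """Rotation about the z axis of the Bloch sphere."""
    return np.array([[np.exp(-1j * a / 2), 0],
                     [0, np.exp(1j * a / 2)]])

def ry(a):
    """Rotation about the y axis of the Bloch sphere."""
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def layer(x, phi):
    """One data re-uploading layer: a general single-qubit rotation
    whose three angles mix the data point x with the trainable
    parameters phi (both length-3 arrays)."""
    a, b, c = phi + x  # the data re-enters in every layer
    return rz(a) @ ry(b) @ rz(c)

def classifier_state(x, phis):
    """Apply N layers to |0>; each layer re-uploads the same x."""
    state = np.array([1.0 + 0j, 0.0])
    for phi in phis:
        state = layer(x, phi) @ state
    return state
```

Because each layer sees x again, the circuit can build highly non-linear functions of the data even on a single qubit.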

We train a single-qubit QClass by dividing the Bloch sphere into several regions, one per class, and fine-tuning the processing parameters so that each data point is driven to the region of its class. We choose these regions to be maximally orthogonal.
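For illustration, here is a hypothetical set of three class states spread evenly over a great circle of the Bloch sphere, so that every pair of classes has the same (minimal) overlap; the article describes the general construction:

```python
import numpy as np

def bloch_state(theta, phi=0.0):
    """Qubit state at polar angle theta, azimuth phi on the Bloch sphere."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

# Three class states at equal angular separation on a great circle.
n_classes = 3
class_states = [bloch_state(2 * np.pi * k / n_classes)
                for k in range(n_classes)]

# Pairwise fidelities |<a|b>|^2: equal for every pair by construction.
fids = [abs(np.vdot(class_states[i], class_states[j])) ** 2
        for i in range(n_classes)
        for j in range(i + 1, n_classes)]
```

With two classes, the states reduce to the fully orthogonal |0⟩ and |1⟩; with more classes on a single qubit, perfect orthogonality is impossible and the overlaps are only minimized.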

We can then define the cost function through the fidelity between the final state of the QClass and the corresponding “class state”. We propose two ways to do so, which can be found in the article.

A single-qubit QClass cannot provide any quantum advantage by itself, although, given its simplicity, it could be a part of larger circuits. This QClass can, however, be generalized to a multi-qubit QClass, where the introduction of entanglement improves the classification procedure.

Once we have defined the QClass and the cost function, we need a classical minimization method to find the processing parameters. The QClass belongs to the family of parametrized quantum circuits, like the VQE or the quantum autoencoder. We have used the L-BFGS-B algorithm from SciPy.
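An end-to-end sketch of this training loop, with hypothetical toy data and the simplest possible fidelity cost (the article proposes two more refined variants), might look as follows:

```python
import numpy as np
from scipy.optimize import minimize

def rot(angles):
    """General single-qubit rotation Rz(a) Ry(b) Rz(c)."""
    a, b, c = angles
    rz = lambda t: np.array([[np.exp(-1j * t / 2), 0],
                             [0, np.exp(1j * t / 2)]])
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]])
    return rz(a) @ ry(b) @ rz(c)

def final_state(x, params):
    """params has shape (n_layers, 3); each layer re-uploads x."""
    state = np.array([1.0 + 0j, 0.0])
    for phi in params:
        state = rot(phi + x) @ state
    return state

# Hypothetical toy binary problem: class 0 -> |0>, class 1 -> |1>.
class_states = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=(20, 3))
ys = (xs[:, 0] > 0).astype(int)  # label by the sign of the first feature

n_layers = 3

def cost(flat):
    """1 minus the average fidelity with the correct class state."""
    params = flat.reshape(n_layers, 3)
    fids = [abs(np.vdot(class_states[y], final_state(x, params))) ** 2
            for x, y in zip(xs, ys)]
    return 1.0 - np.mean(fids)

# Classical minimization with L-BFGS-B, as in the article.
x0 = rng.uniform(0, 2 * np.pi, 3 * n_layers)
res = minimize(cost, x0, method="L-BFGS-B")
```

The number of layers plays the role the trade-off above describes: more re-uploads give a more expressive classifier without adding qubits.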

Benchmark: we have tested single- and multi-qubit QClasses composed of different numbers of layers on several problems with different characteristics.