Quantum Computer Emulator, part 1

I’ve been learning about quantum computing for a while, but up to now all my work has been on paper. I thought it would be useful to build a quantum computer emulator that will run on a classical computer (i.e. what we call “a computer”). I’ve now got my first quantum algorithm working.

First, a few words about what I’ve done here. I’ve written the code in C, because it’s a language I’m very familiar with. Also, I have not used any of the mathematical libraries that are available for complex numbers, matrices, etc. That’s because I wanted to code this all from the ground up, doing everything myself. I find that’s the only way to truly understand what’s going on. It also has the benefit that the code will be easier to port to platforms where those libraries aren’t available. (I’m planning to build a QC emulator on an FPGA later; some early experiments show that this will be a challenge, though definitely doable.) I’ve deliberately not used OO methods, so functions are separate entities, not part of classes. Note that there’s no error checking in the code, and no attempt has been made to optimize it. It is not designed to be run in a production environment; it is lab code.

I’ve been working from a couple of great books: Nielsen & Chuang’s Quantum Computation and Quantum Information (popularly known as “Mike & Ike”), and Yanofsky & Mannucci’s Quantum Computing for Computer Scientists. The first book is very comprehensive, and covers a lot of the theoretical background. The second book skips some of the detail, but is much easier to get into if you have a computer science background. It has the best explanation of the tensor product that I have ever seen.

The code I’ve written provides a bunch of types and functions that make it easy to build simple quantum algorithms. There’s a complex number type, and a matrix type that handles complex values. Functions are provided for the basic operations on these types, such as complex arithmetic, matrix multiplication, matrix inversion, conjugate transposition, and the tensor product. There are also functions for generating useful matrices, such as Hadamard matrices, the identity matrix, simple qubit vectors, and superpositions of states. Finally, there’s a measurement function which will “collapse” a state vector to a single measured value (using a random number generator to select the appropriate value).
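
To give a flavour of how the pieces fit together, here’s a quick sketch (using functions from the full listing at the end of this article) that applies a Hadamard to a single |0〉 qubit and prints the result:

matrix* h = mat_hadamard(2);   // 2x2 Hadamard (entries ±1; no 1/sqrt(2) factor)
matrix* q = qubit(0);          // |0> as the column vector [1 0]T
matrix* result = mat_mul(h,q); // Apply the gate
mat_print(result);             // Prints 1.0 and 1.0: an (unnormalised) equal superposition of |0> and |1>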

I’ll put the full code at the end of this article, but let’s concentrate on solving a particular problem. For this first experiment, I’ll tackle Deutsch’s algorithm (as explained in Yanofsky chapter 6). This is a very simple problem in which we want to detect whether a function (which is unknown to us) is “balanced” or “constant”. (Briefly, balanced means that the output is different for each input; constant means that the output is the same for each input. For example, if f(0)=1 and f(1)=0, then the function is balanced. If, however, f(0)=0 and f(1)=0, then the function is constant.)

I won’t go into the details of why this is an interesting problem. In brief, it’s obvious how one would go about testing the function on a classical computer by trying various inputs and looking at the outputs. In fact, for this simple example, a quantum computer doesn’t really bring much benefit, but it is a nice problem to start with. When we expand to larger problems, the quantum computer will win.
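
For a one-bit input there are only four possible functions, and the classical test is just to check whether f(0) equals f(1). Here’s an illustrative sketch (these function names are just examples, not part of the emulator code):

int f_zero(int x) { return 0; }   // Constant: f(0)=0, f(1)=0
int f_one(int x)  { return 1; }   // Constant: f(0)=1, f(1)=1
int f_id(int x)   { return x; }   // Balanced: f(0)=0, f(1)=1
int f_not(int x)  { return 1-x; } // Balanced: f(0)=1, f(1)=0

// Classically we must call f twice to decide:
int is_balanced(int (*f)(int)) { return f(0) != f(1); }

Deutsch’s algorithm reaches the same verdict with a single application of Uf instead of two calls to f.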

[Figure: the Deutsch algorithm circuit]

The circuit starts with two qubits on the left. The top qubit is set to state |0〉 and the bottom qubit is set to state |1〉. The intermediate states |ψ0〉…|ψ3〉 represent the state of the system combining both top and bottom lines at each step in the process.

So we have the input state: |ψ0〉 = |0〉⊗|1〉 = |01〉.

We then apply the Hadamard transform to each line: |ψ1〉 = (H⊗H)|ψ0〉.

Then we apply the “mystery” function: |ψ2〉 = Uf|ψ1〉.

Then another Hadamard transform, this time applied to the top line only: |ψ3〉 = (H⊗I)|ψ2〉.

(So in summary, we have: |ψ3〉 = (H⊗I)Uf(H⊗H)(|0〉⊗|1〉).)

And then we measure |ψ3〉!

Now there’s quite a bit of work to do in order to understand what this will actually tell us. Yanofsky explains this very well. In brief, when we measure |ψ3〉, if we get |0〉 for the top qubit then we know that the function is constant. If we get |1〉 for the top qubit then we know that the function is balanced.
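
For the curious, here’s a compressed sketch of that algebra for the balanced case f(0)=1, f(1)=0 (Yanofsky works through the general argument):

|ψ1〉 = ½(|0〉+|1〉)⊗(|0〉−|1〉) = ½(|00〉−|01〉+|10〉−|11〉)

|ψ2〉 = Uf|ψ1〉 = ½(|01〉−|00〉+|10〉−|11〉) = −(1/√2)(|0〉−|1〉) ⊗ (1/√2)(|0〉−|1〉)

|ψ3〉 = (H⊗I)|ψ2〉 = −|1〉 ⊗ (1/√2)(|0〉−|1〉)

So the top qubit ends up in |1〉 (up to an irrelevant overall sign), and measuring gives |10〉 or |11〉 with equal probability. In the constant case f(0)=f(1)=0, Uf is the identity, the two Hadamards on the top line cancel, and the top qubit ends up back in |0〉.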

Given all the functions that have been defined in the code, it is quite simple to set up the problem:

/*
 * |x>|y> -> |x>|y XOR f(x)>
 * Balanced function: f(0) = 1, f(1) = 0.
 */
matrix* function_bal = mat_alloc(4,4);
function_bal->data[0][1] = c_num(1,0);
function_bal->data[1][0] = c_num(1,0);
function_bal->data[2][2] = c_num(1,0);
function_bal->data[3][3] = c_num(1,0);

This code gives us the matrix representing a balanced function.
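
If you want to sanity-check the matrix, mat_print(function_bal); prints the permutation you’d expect, with basis states ordered |00〉, |01〉, |10〉, |11〉 (it swaps |00〉 and |01〉, and leaves |10〉 and |11〉 alone):

0.0 1.0 0.0 0.0
1.0 0.0 0.0 0.0
0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0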

Then we simply chain the operations together:

matrix* psi0 = mat_tensor(qubit(0),qubit(1));

matrix* hadamard2 = mat_hadamard(4);

matrix* psi1 = mat_mul(hadamard2,psi0);

matrix* psi2 = mat_mul(function_bal,psi1);

matrix* hadamard_identity = mat_tensor(mat_hadamard(2),mat_identity(2));

matrix* psi3 = mat_mul(hadamard_identity,psi2);

measure(psi3);

And it’s as simple as that!

When the code is executed, the measurement of |ψ3〉 gives us a result of |10〉. The first qubit is |1〉, showing that the function is balanced, as we know from the definition. (It might alternatively have measured |11〉, as the measurement is random. The point is that the first qubit is |1〉 either way.)
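
One nice thing about an emulator is that, unlike a real quantum computer, you can peek at the state vector before measuring it. Adding mat_print(psi3); just before the measure call prints the amplitudes, and only the |10〉 and |11〉 entries are non-zero:

0.0
0.0
-2.0
2.0

(The amplitudes are ±2 rather than ±1/√2 because the Hadamard matrices here carry no normalisation factor; measure() normalises before picking a result.)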

If we set up a constant function instead:

/*
 * |x>|y> -> |x>|y XOR f(x)>
 * Constant function: f(0) = 0, f(1) = 0.
 */
matrix* function_con = mat_identity(4);

then we get the measurement of |ψ3〉 as |00〉 (or |01〉), with the first qubit being |0〉, showing that the function is constant.

So it works!

There’s a lot more to do. The next step is to code the Deutsch-Jozsa algorithm which performs a similar task, but with multiple qubits (and that’s where things get interesting). I also need to code the Quantum Fourier Transform (QFT) which is needed to perform more sophisticated algorithms, such as Shor’s Factoring Algorithm.

Anyway, here’s the code if you want to play with what we’ve got so far.

/*
 * Quantum Computer Emulator
 * 4dc5.com
 */

#include <stdio.h>
#include <stdlib.h>

// Complex number
typedef struct {
	float re;
	float im;
	} complex;

// Matrix
typedef struct {
	int rows;
	int cols;
	complex** data;
	} matrix;

// Create a complex number from real and imaginary parts
complex c_num(float re, float im) {
	complex num;
	num.re = re;
	num.im = im;
	return num;
}

// Add two complex numbers
complex c_add(complex a, complex b) {
	complex z;
	z.re = a.re+b.re;
	z.im = a.im+b.im;
	return z;
}

// Subtract two complex numbers
complex c_sub(complex a, complex b) {
	complex z;
	z.re = a.re-b.re;
	z.im = a.im-b.im;
	return z;
}

// Multiply two complex numbers
complex c_mul(complex a, complex b) {
	complex z;
	z.re = a.re*b.re - a.im*b.im;
	z.im = a.re*b.im + a.im*b.re;
	return z;
}

// Divide two complex numbers
complex c_div(complex a, complex b) {
	complex z;
	z.re = (a.re * b.re + a.im * b.im) / (b.re * b.re + b.im * b.im);
	z.im = (a.im * b.re - a.re * b.im) / (b.re * b.re + b.im * b.im);
	return z;
}

// Print complex number
void c_print(complex a) {
	if (a.re!=0 && a.im>0) printf("%.1f+%.1fi",a.re,a.im);
	else if (a.re!=0 && a.im<0) printf("%.1f-%.1fi",a.re,-a.im);
	else if (a.re==0 && a.im>0) printf("%.1fi",a.im);
	else if (a.re==0 && a.im<0) printf("-%.1fi",-a.im);
	else printf("%.1f",a.re);
}

// Create a matrix of complex numbers (initialised to 0), m rows, n columns
matrix* mat_alloc(int m,int n) {
	matrix* mat = malloc(sizeof(matrix));
	mat->rows = m;
	mat->cols = n;
	mat->data = (complex**)malloc(sizeof(complex*)*m);
	for (int i=0;i<m;i++) {
		mat->data[i] = (complex*)malloc(sizeof(complex)*n);
		for (int j=0;j<n;j++) {
			mat->data[i][j] = c_num(0,0);
		}
	}
	return mat;
}

// Create Identity matrix of order m
matrix* mat_identity(int m) {
	matrix* mat = mat_alloc(m,m);
	for (int i=0;i<m;i++) {
		mat->data[i][i].re = 1.0;
	}
	return mat;
}

// Multiply a complex scalar by a matrix
matrix* mat_scalarmul(complex scalar, matrix* matold) {
	matrix* matnew = mat_alloc(matold->rows,matold->cols);
	for (int i=0;i<matold->rows;i++) {
		for (int j=0;j<matold->cols;j++) {
			matnew->data[i][j] = c_mul(scalar,matold->data[i][j]);
		}
	}
	return matnew;
}

// Perform tensor product of two matrices
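// Each element a[ai][aj] of A scales a complete copy of B, so the result element at
// row ai*Brows+bi, column aj*Bcols+bj is a[ai][aj]*b[bi][bj] (the Kronecker product).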
matrix* mat_tensor(matrix* mata, matrix* matb) {
	matrix* mat = mat_alloc(mata->rows*matb->rows,mata->cols*matb->cols);
	for (int ai=0;ai<mata->rows;ai++) {
		for (int bi=0;bi<matb->rows;bi++) {
			for (int aj=0;aj<mata->cols;aj++) {
				for (int bj=0;bj<matb->cols;bj++) {
					int row = ai*matb->rows+bi;
					int col = aj*matb->cols+bj;
					mat->data[row][col] = c_mul(mata->data[ai][aj],matb->data[bi][bj]);
				}
			}
		}
	}
	return mat;
}

// Multiply two matrices
matrix* mat_mul(matrix* mata, matrix* matb) {
	matrix* mat = 0;
	if (mata->cols==matb->rows) {
		mat = mat_alloc(mata->rows,matb->cols);
		for (int ai=0;ai<mata->rows;ai++) {
			for (int bj=0;bj<matb->cols;bj++) {
				complex cellsum = c_num(0,0);
				for (int aj=0;aj<mata->cols;aj++) {
					cellsum = c_add(cellsum,c_mul(mata->data[ai][aj],matb->data[aj][bj]));
				}
				mat->data[ai][bj] = cellsum;
			}
		}
	}
	return mat;
}

// Free the allocated memory of a matrix (and its data)
void mat_free(matrix* mat) {
	for (int i=0;i<mat->rows;i++) {
		free(mat->data[i]);
	}
	free(mat->data);
	free(mat);
}

/*
 * Create a Hadamard matrix of order m.
 * This code only supports m ∈ {powers of 2}. Finding Hadamard matrices for other orders can be non-trivial.
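 * Note: these are the classical ±1 Hadamard matrices, without the 1/sqrt(2) normalisation factor of the
 * quantum H gate. Since measure() normalises by the vector's squared magnitude, the measurement statistics
 * are unaffected.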
 */
matrix* mat_hadamard(int m) {
	matrix* mat = 0;
	// If we want an order 1 Hadamard matrix, that's easy. It's [1].
	if (m==1) {
		mat = mat_alloc(1,1);
		mat->data[0][0].re = 1.0;		
	}
	else if (m%2==0) {
		mat = mat_alloc(m,m);
		// Find the Hadamard matrix that's half the order of this one
		matrix* mat_subhad = mat_hadamard(m/2);
		// Now copy that smaller Hadamard matrix into each quadrant of this one, but multiply the one in the bottom-right by -1.
		for (int i=0;i<m;i++) {
			for (int j=0;j<m;j++) {
				if (i<m/2 || j<m/2) mat->data[i][j] = mat_subhad->data[i%(m/2)][j%(m/2)];
				else mat->data[i][j].re = -mat_subhad->data[i%(m/2)][j%(m/2)].re; // Bottom-right				
			}
		}
		mat_free(mat_subhad);
	}
	return mat;
}

// Find the transpose of a matrix
matrix* mat_transpose(matrix* matold) {
	matrix* matnew = mat_alloc(matold->cols,matold->rows);
	for (int i=0;i<matold->rows;i++) {
		for (int j=0;j<matold->cols;j++) {
			matnew->data[j][i] = matold->data[i][j];
		}
	}
	return matnew;
}

// Find the conjugate transpose of a matrix
matrix* mat_conjtranspose(matrix* matold) {
	matrix* matnew = mat_alloc(matold->cols,matold->rows);
	for (int i=0;i<matold->rows;i++) {
		for (int j=0;j<matold->cols;j++) {
			matnew->data[j][i].re = matold->data[i][j].re;
			matnew->data[j][i].im = -matold->data[i][j].im;
		}
	}
	return matnew;
}

/*
 * Invert matrix using the Gauss-Jordan elimination method.
 */
matrix* mat_invert(matrix* matold) {
	matrix* matnew = 0;
	int i,j,k,m;
	complex ratio,a;
	if (matold->rows==matold->cols) {
		m = matold->rows;
		matrix* mataug = mat_alloc(m,m*2); // Augmented matrix
		// Copy the original matrix to the left-hand side
		for (i=0;i<m;i++) {
			for (j=0;j<m;j++) {
				mataug->data[i][j] = matold->data[i][j];
			}
		}
		// Create the identity matrix on the right-hand side
		for (i=0;i<m;i++) {
			mataug->data[i][m+i] = c_num(1,0);
		}
		// Now perform Gauss-Jordan elimination to find the reduced echelon form
		for (i=0;i<m;i++) {
			for (j=0;j<m;j++) {
				if (i!=j) {
					ratio = c_div(mataug->data[j][i],mataug->data[i][i]);
					for (k=0;k<2*m;k++) {
						mataug->data[j][k] = c_sub(mataug->data[j][k],c_mul(ratio,mataug->data[i][k]));
					}
				}
			}
		}
		for (i=0;i<m;i++) {
			a = mataug->data[i][i];
			for (j=0;j<2*m;j++) {
				mataug->data[i][j] = c_div(mataug->data[i][j],a);
			}
		}
		matnew = mat_alloc(m,m);
		// Copy just the right-hand side, which is the inverted matrix
		for (i=0;i<m;i++) {
			for (j=0;j<m;j++) {
				matnew->data[i][j] = mataug->data[i][m+j];
			}
		}
		mat_free(mataug);	
	}
	return matnew;
}

// Print a matrix
void mat_print(matrix* mat) {
	for (int i=0;i<mat->rows;i++) {
		for (int j=0;j<mat->cols;j++) {
			c_print(mat->data[i][j]); printf(" ");
		}
		printf("\n");
	}
}

/*
 * Generate a vector (single column matrix) for a qubit.
 * Qubit 0 => |0> = [1 0]T
 * Qubit 1 => |1> = [0 1]T
 */
matrix* qubit(int qubit) {
	matrix* mat = mat_alloc(2,1);
	if (qubit == 0) mat->data[0][0].re = 1;
	else mat->data[1][0].re = 1;
	return mat;
}

/*
 * Generate a vector (single column matrix) of superposed states.
 * Each row has the value 1.
 * e.g. 2 qubits => [1 1 1 1]T
 */
matrix* superpose(int num_qubits) {
	matrix* mat = mat_alloc(1<<num_qubits,1);
	for (int i=0;i<mat->rows;i++) {
		mat->data[i][0] = c_num(1,0);
	}
	return mat;
}

/*
 * Perform measurement to 'collapse' the vector to one value.
 * Prints the resulting vector.
 */
void measure(matrix* mat) {
	int i,row,mask;
	float sum = 0, size, ran;
	row = mat->rows-1; // Default to the last row
	// Find the squared magnitude of the vector (by summing |amplitude|^2 = re^2 + im^2 for each value)
	sum = 0; // Cumulative sum
	for (i=0;i<mat->rows;i++) {
		sum = sum + mat->data[i][0].re*mat->data[i][0].re + mat->data[i][0].im*mat->data[i][0].im;
	}
	size = sum;
	// Pick a random number between 0 and 1
	ran = 1.0*rand()/RAND_MAX;
	// Use this random value to pick a row according to the size of each value
	sum = 0;
	for (i=0;i<mat->rows;i++) {
		sum = sum + (mat->data[i][0].re*mat->data[i][0].re + mat->data[i][0].im*mat->data[i][0].im)/size;
		// If the cumulative sum is bigger than the random number, then we've got our measurement
		if (sum>ran) {
			row = i;
			break;
		}
	}
	// Now print the state for this row as a ket
	printf("|");
	mask = mat->rows>>1;
	while (mask>0) {
		if ((mask&row) >0) printf("1");
		else printf("0");
		mask = mask>>1;
	}	
	printf(">\n");
}

int main(void)
{
	srand(1); // Fixed so it's repeatable. Change to e.g. srand(time(NULL)) to make it more 'random' (that also needs #include <time.h>).
	/*
	 * |x>|y> -> |x>|y XOR f(x)>
	 * Balanced function: f(0) = 1, f(1) = 0.
	 */
	matrix* function_bal = mat_alloc(4,4);
	function_bal->data[0][1] = c_num(1,0);
	function_bal->data[1][0] = c_num(1,0);
	function_bal->data[2][2] = c_num(1,0);
	function_bal->data[3][3] = c_num(1,0);
	
	/*
	 * |x>|y> -> |x>|y XOR f(x)>
	 * Constant function: f(0) = 0, f(1) = 0.
	 */
	matrix* function_con = mat_identity(4);
	
	matrix* psi0 = mat_tensor(qubit(0),qubit(1));
	
	matrix* hadamard2 = mat_hadamard(4);
	
	matrix* psi1 = mat_mul(hadamard2,psi0);
	
	matrix* psi2 = mat_mul(function_con,psi1);
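	// Note: this listing applies the constant function; swap in function_bal to reproduce the balanced |10>/|11> result.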
	
	matrix* hadamard_identity = mat_tensor(mat_hadamard(2),mat_identity(2));
	
	matrix* psi3 = mat_mul(hadamard_identity,psi2);
	
	measure(psi3);
	
	return 0;
}

4 Responses to Quantum Computer Emulator, part 1

  1. Intriguing – will download and play in a minute.

    It’s easy to get carried away with overly-complex OO functionality in C++, but I reckon some stuff would make this more readable rather than less; for example, encapsulating matrix and complex arithmetic behind names with code hidden elsewhere.

    I appreciate the desire to do it all ground-up, though. I’m intrigued by the OO capabilities discussed in 21st Century C by Ben Klemens (O’Reilly): it almost sounds like an attempt to re-define an OO language under the (fair) premise that C++ got carried away with itself and went totally nuts.

    • 4DC5 says:

      Thanks for pointing out the Klemens book. I hadn’t seen that one. I have a Safari subscription, so I can read it there. Awesome!

  2. rrtucci says:

    You may at some point require the use of linear algebra, like SVD. The linear algebra part of this task has been done already in Lapack, (C port called Clapack). Writing efficient linear algebra is highly nontrivial. A battalion of experts have spent their entire careers improving Lapack (and its close cousin, Matlab). (Matlab runs slow compared to C and it’s not cheap. Its open source version Octave is even slower but it’s free)

    • 4DC5 says:

      Thanks for the pointers. I’ve used both Matlab and Octave, but not for this kind of task.

      I don’t mind writing inefficient code when I’m trying things out, because it helps me to build my intuition. But it’s good to know what the good tools are when making it work properly!

      I’ll take a look at Clapack.
