How does D-Wave Systems design their processor core?

Answer by Hadayat Seddiqi:

Good question.

There are two parts to this. The first is the actual hardware: building the qubits, controlling them, maintaining entanglement, managing noise (thermal and magnetic), and cooling. These are things I don't know much about, but here are some quick references:

And if Matthew Saul Leifer or Adam Sears want to answer, that would be great (they both seem to understand the physics of quantum devices better).

The other side is more abstract. Ignoring the exact implementation details, what we want is a quantum annealing chip that can add some quantum noise H_x to a particular Hamiltonian H_z. H_z encodes the problem you wish to solve in the z-spin basis of the entire Hamiltonian. In other words, there are a bunch of particles living on some graph and the particles have up or down spin (well, quantum mechanically they are in some superposition of the two). We want the total Hamiltonian to look like

H_{total} = H_z + \Gamma(t) H_x

where \Gamma is time-dependent. Typically \Gamma scales H_x to have a much larger contribution than H_z at first, then dies off to zero over time (linearly, quadratically, whatever you want; this choice is called the annealing schedule).
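To make that concrete, here is a tiny numpy sketch (definitely not D-Wave's control code) of H_{total}(t) for two qubits with a linear schedule; the coupling value, the starting transverse-field strength, and the linear decay are all just assumptions for illustration:

import numpy as np

sz = np.diag([1.0, -1.0])          # Pauli-Z
sx = np.array([[0.0, 1.0],
               [1.0, 0.0]])        # Pauli-X
I2 = np.eye(2)

# Problem Hamiltonian: a single ferromagnetic coupling J * z_1 z_2 (J = -1 assumed)
H_z = -1.0 * np.kron(sz, sz)

# Driver ("quantum noise") Hamiltonian: a transverse field on each qubit
H_x = np.kron(sx, I2) + np.kron(I2, sx)

def gamma(t, T=1.0, gamma0=10.0):
    """Linear annealing schedule: large at t = 0, zero at t = T (one possible choice)."""
    return gamma0 * (1.0 - t / T)

def H_total(t, T=1.0):
    return H_z + gamma(t, T) * H_x

print(np.round(H_total(0.0), 2))   # transverse field dominates at the start
print(np.round(H_total(1.0), 2))   # only H_z, the problem Hamiltonian, is left

At t = 0 the H_x term dominates and drives tunneling; by t = T only H_z remains.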

If you have an understanding of simulated annealing, there's an analogous picture for quantum annealing. Instead of gradually lowering the temperature (i.e. thermal noise), you replace thermal noise with quantum noise. Said differently, we allow a particle searching an energy landscape to tunnel through barriers when the tunneling energy is high. Just like simulated annealing, you start out with a high tunneling energy and gradually decrease it to zero until all that is left is the part of the total energy equation (read: Hamiltonian) that specifies the energy landscape being optimized over. Interestingly, there are ideas that say thermal + quantum noise could be a killer combination if they're scaled right.
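For comparison, here is a toy classical simulated-annealing loop over Ising spins; the random couplings and the geometric cooling schedule are made up, and the point is only that the temperature here plays the role that \Gamma(t) plays above:

import numpy as np

rng = np.random.default_rng(0)
n = 8
J = rng.normal(size=(n, n))
J = np.triu(J, 1)                       # random strictly upper-triangular couplings

def energy(spins):
    return spins @ J @ spins            # E = sum_{i<j} J_ij z_i z_j

spins = rng.choice([-1, 1], size=n)
T = 5.0
for step in range(5000):
    i = rng.integers(n)
    flipped = spins.copy()
    flipped[i] *= -1
    dE = energy(flipped) - energy(spins)
    if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance rule
        spins = flipped
    T *= 0.999                          # gradually remove the thermal noise

print("final spins:", spins, "energy:", energy(spins))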

The particles/nodes are basically the qubits, and in the D-Wave processors they're SQUIDs; the [weighted] edges between them represent coupling or entanglement (the coupling strength is determined by other SQUIDs called bias qubits). The qubit configuration is called the hardware graph, which in a chip like this would be isomorphic to an Ising spin glass. Note that in an arbitrary spin glass you can set the couplings to be anything you want, but in practice you must specify a particular hardware graph. This is the other aspect of the design of the chip. In equations,

H_z = \sum_{i<j} Q_{ij} z_i z_j + \sum_i Q_{ii} z_i
and all you care about here is Q, which is an adjacency matrix for the graph of the problem you want to optimize (the off-diagonals specify the couplings, and the diagonals are the local field biases on each node, i.e. a bias for which way each spin should point, positive or negative, and by how much). The z parts of H_z are just the spins of the qubits, and there is a specific configuration of ups and downs (think zeros and ones, if you like) that minimizes the energy of this graph, which is defined by Q like we just noted. The one problem that remains is that Q may not be very compatible with the actual adjacency matrix representing the physical qubits and their couplings with each other.
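Here is a small sketch of what that means computationally: the off-diagonal of Q couples pairs of spins, the diagonal biases individual spins, and for a toy 3-node problem (with made-up Q values) you can brute-force the spin configuration that minimizes the energy:

import itertools
import numpy as np

Q = np.array([[ 1.0, -2.0,  0.0],
              [ 0.0, -0.5,  1.5],
              [ 0.0,  0.0,  0.0]])   # upper triangle: couplings; diagonal: biases

def ising_energy(z, Q):
    """Energy of spin vector z in {-1,+1}^n: sum_{i<j} Q_ij z_i z_j + sum_i Q_ii z_i."""
    n = len(z)
    coupling = sum(Q[i, j] * z[i] * z[j] for i in range(n) for j in range(i + 1, n))
    bias = sum(Q[i, i] * z[i] for i in range(n))
    return coupling + bias

best = min(itertools.product([-1, 1], repeat=3), key=lambda z: ising_energy(z, Q))
print("ground state:", best, "energy:", ising_energy(best, Q))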

So we must think about what kind of graph we want. You cannot just build a complete graph; that will fail at some physical scale and would be far too expensive. Instead, you come up with something relatively sparse that still allows you to translate graphs representing problems into graphs that also represent the problem but can be morphed and molded into the hardware graph's structure. In other words, this is the problem of minor embedding, which is itself NP-complete. That's not necessarily spelling doom; we just need to be careful about what the hardware graph construction will look like.

More explicitly, what we want out of the hardware graph is some level of sparsity, together with flexibility and scalability. It needs to be sparse so that it is inexpensive to fabricate, it needs to have good structure so that it is flexible enough to allow many different problems, and it needs to scale well. Graph flexibility isn't a well-defined term; what I mean is that during the process of embedding, sometimes you must expand one node in the problem graph into two, three, or more nodes in the hardware graph, so that k physical qubits represent one logical qubit. This should be minimized as much as possible. Scaling, of course, means that as we create larger machines, we should be able to expand the graph structure to incorporate more nodes in a sensible way without having to come up with completely new tools (e.g. compilers, which is what a program embedding a problem graph into the hardware graph would be for this computer) each time a new chip is designed.
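As a toy illustration of that expansion, here is a sketch of tying a chain of physical qubits together with a strong ferromagnetic coupling so they behave as one logical qubit; the chain strength and the qubit labels are made-up values, not anything D-Wave-specific:

def embed_logical_node(logical_node, physical_chain, chain_strength=-2.0):
    """Return couplings that tie a chain of physical qubits into one logical qubit."""
    couplings = {}
    for a, b in zip(physical_chain, physical_chain[1:]):
        couplings[(a, b)] = chain_strength   # strong ferromagnetic bond along the chain
    return couplings

# Logical node 0 is represented by physical qubits 0, 4, and 12 (made-up labels).
print(embed_logical_node(0, [0, 4, 12]))
# {(0, 4): -2.0, (4, 12): -2.0}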

Presumably D-Wave Systems got together a lot of smart graph theorists and IC designers and thought really hard about these problems. What they came up with was the Chimera graph:

(the description on Alex Selby's page says: Collapsed view of the Chimera graph of order 8, where each blob represents four vertices, each diagonal line represents a K_{4,4} whose two sets of four vertices are connected to each other by 16 edges, and each horizontal and vertical line segment represents four parallel edges connecting each vertex to its corresponding vertex). The above graph may not seem so clear since it doesn't show some detail, but essentially each diagonal line represents a graph that looks like this:

where each black line or dot represents a physical qubit and the blue line or dot represents a coupling between two qubits. The first graph shows how these so-called 8-qubit unit cells can be put together to create a 128-qubit chip (and in theory you could just keep plugging together unit cells and create a chip with any multiple of 8 qubits). To embed K_m (the fully connected graph with m nodes) into the Chimera graph requires m(m-1)/2 qubits. From what I've seen, not a lot of work has been done on this particular graph, though the biggest chip right now (March 2014) is 512 qubits and there is still a huge debate over how well this chip actually performs. I'll leave two references specifically about Chimera graphs here as well:
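Separately from those references, here is a rough sketch of the connectivity itself if you want to play with it: an m x m grid of 8-qubit unit cells, a K_{4,4} inside each cell, plus couplers to neighboring cells. The indexing convention is my own assumption (the dwave_networkx package has a canonical chimera_graph generator if you want the real thing):

def chimera_edges(m):
    def q(row, col, side, k):           # side: 0 = left column, 1 = right column of a cell
        return ((row * m + col) * 2 + side) * 4 + k

    edges = []
    for r in range(m):
        for c in range(m):
            # intra-cell K_{4,4}: every left qubit coupled to every right qubit
            edges += [(q(r, c, 0, i), q(r, c, 1, j)) for i in range(4) for j in range(4)]
            # inter-cell couplers: left qubits to the cell below, right qubits to the cell at right
            if r + 1 < m:
                edges += [(q(r, c, 0, k), q(r + 1, c, 0, k)) for k in range(4)]
            if c + 1 < m:
                edges += [(q(r, c, 1, k), q(r, c + 1, 1, k)) for k in range(4)]
    return edges

edges = chimera_edges(4)                # 4 x 4 grid of unit cells
print(len(edges), "couplers among", 8 * 4 * 4, "qubits")

For m = 4 this gives the 128-qubit layout mentioned above.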

(please let me know if this did or didn't make sense, and what parts I should expound upon)

View Answer on Quora
