Official Google Research Blog: Machine Learning with Quantum Algorithms (googleresearch.blogspot.com)
71 points by Anon84 on Dec 8, 2009 | 24 comments


The more interesting part of the story is that they're collaborating with D-Wave to try to get these algorithms running on their hardware. It should be interesting to see what comes out of this, considering the amount of skepticism surrounding D-Wave.


When I was working at Google in 2007, D-Wave's CEO came by and gave a talk about their chips. The talk was interesting, and he was clearly keen to get Google on board, with access to resources that most of the world doesn't have.

I remember holding one of the first quantum processors in the world at the time and thinking about how it was going to change history. Even though it only performed a single simple task back then, it was clear that big things were coming. One thing he mentioned (I forget the specifics, so I can't really elaborate) is that the chips are designed to be crippled in a certain way that doesn't allow them to implement Shor's algorithm. Apparently this was at the request of the NSA. I have no idea if that is still the case, but it's worth keeping in mind.


I had to look it up so hopefully this is helpful to someone else: http://en.wikipedia.org/wiki/Shor%27s_algorithm


I wonder if NSA already has their own quantum computers which can implement Shor's algorithm?


> So on average it will take you 500,000 peeks to find the ball. Now a quantum computer can perform such a search looking only into 1000 drawers. This mind boggling feat is known as Grover’s algorithm.

This is a very clear explanation of the benefits of quantum computers. But I don't grasp how this would work yet.

I read the Wikipedia page, and maybe I just have to struggle more with it, but it would be great if there were a simpler explanation.


As Wikipedia says, in a quantum computer, you have a quantum register (a bunch of entangled qubits) that holds the result of your computation. The contents of this register can be visualized as a high-dimensional vector of length 1 -- think of this as a line with one endpoint at the origin and the other restricted to the radius-1 hypersphere around the origin. Each axis of the space that this sphere lives in corresponds to one classical value of the register -- in an 8-qubit register, one axis corresponds to 00000000, another corresponds to 00000001, etc.

The real problem you solve with Grover's algorithm is this: you have a black box (usually called an "Oracle operator", but I think the term "Oracle" in this case is misleading) that negates the value of your register along exactly one axis (i.e., if the coordinate on the 00010111 axis is .2 - .1i, it becomes -.2+.1i). You want to figure out which of the 2^n axes in the n-qubit register your black box negates.

Unfortunately, you don't have the luxury of picking the vector that has a coordinate of 1/sqrt(2^n) in each direction and observing what the black box does to it after a single application, since when you observe your quantum register you always observe a classical state. Instead, one solution that would work is to solve the problem the classical-computing way: march through each of the 2^n axes until you find the one you're looking for.

Grover's algorithm gives you a better way. What Grover's algorithm does is repeatedly apply an operation that nudges your register towards the desired state -- basically applying a small rotation. After some number of iterations, the vector representing your register should be pointing in almost the same direction as the vector representing the state you're searching for. Then you can measure the contents of the register and, with high probability, get the answer you were searching for. It turns out you only need O(sqrt(2^n)) queries (rotations), so it's a substantial improvement over the classical case.

I've probably oversimplified a few things, but hopefully this makes things clearer than they were before.
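
If it helps, here's a minimal classical simulation of Grover's algorithm in Python. This is just a sketch of the math above, not real quantum code: it stores all 2^n amplitudes explicitly (so it's exponentially slow in the number of qubits), and the grover_search name is mine, not from any library.

    import math

    def grover_search(n_qubits, marked_index):
        """Classically simulate Grover's algorithm by tracking all 2^n amplitudes."""
        n_states = 2 ** n_qubits
        # Uniform superposition: every axis starts with coordinate 1/sqrt(2^n).
        amps = [1.0 / math.sqrt(n_states)] * n_states

        # About (pi/4) * sqrt(2^n) iterations rotate the register onto the marked axis.
        for _ in range(int(math.pi / 4 * math.sqrt(n_states))):
            # Oracle: negate the coordinate along the marked axis.
            amps[marked_index] = -amps[marked_index]
            # Diffusion (the "nudge"): reflect every amplitude about the mean.
            mean = sum(amps) / n_states
            amps = [2 * mean - a for a in amps]

        # Measurement probabilities are the squared amplitudes.
        return [a * a for a in amps]

    probs = grover_search(8, marked_index=0b00010111)
    print(probs[0b00010111])  # ~0.99996 after 12 iterations, vs. up to 256 classical peeks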



This must be how old mainframe programmers felt reading articles about PCs in the early 70s.


Machine learning with quantum computers is so much more than that: quantum computation is provably more efficient at solving a large class of problems than classical computation.


Unfortunately, there are also large classes of interesting problems which are intractable even with quantum computation.


That's a good thing, it means I'll still have a job 10 years from now!


The problem is that even if you can compute everything, you still need to read the result in an expeditious manner, say, in less than the age of the universe. The laws of physics can't be fooled. I will pay attention to quantum computation when the people doing it care to learn serious physics...


I'm betting that someone will prove a TANSTAAFL theorem for quantum computing. In other words, you might be able to factor in polynomially many steps, but each step will take time proportional to the size of your Hilbert space.


I'm betting that the Copenhagen interpretation of the quantum mechanics formalism is false. There is no empirical basis for it, and it is a fundamentally pessimistic view of physics (that we don't know certain things because we simply can't know them). I don't think the superpositions in the quantum mechanical formalism reflect any objective reality; they're just a standard statistical reflection of our ignorance of the system's total state. But the notion of quantum computation crucially depends on these superpositions. To me, the vast efficiency gains claimed for quantum computation are actually a reductio ad absurdum of the Copenhagen interpretation according objective reality to these superpositions.

That people are apparently attempting serious implementations of quantum computation is very exciting to me, because it means these ideas are going to be severely tested soon.


Bell's theorem demands that you give up locality or realism. Either is fine; historically speaking, it's just been easier to give up realism. Of course, physicists found a way to keep both (by splitting the observer up), and MWI's increasing popularity reflects this. In my view it's useless to argue about interpretations. Last I checked, they had the same predictions.


> In my view it's useless to argue about interpretations. Last I checked they had the same predictions.

Not true at all. If the superposition in the QM formalism merely represents our ignorance about the system's state, rather than an objective reality, then QC loses the massive parallelism which is supposed to make it so much more efficient than classical computation.


Well, nobody argues that the wave function represents our ignorance about the system's state; that's certainly not the Copenhagen interpretation, anyway. The wave function sacrifices realism.


In the Copenhagen interpretation, the wave function represents our ignorance of what the system's state will become when observed, but it's taken as an accurate description of the system's actual state prior to observation. That is the point of the "Schroedinger's cat" paradox: that under the Copenhagen interpretation, the cat actually is in a superposition of alive and dead states, until someone looks in the box. Quantum computation crucially depends on the objective reality of this superposition. It seems likely to me that the superposition in the formalism actually just represents our ignorance of the system's state, not the state itself.


> It seems likely to me that the superposition in the formalism actually just represents our ignorance of the system's state, not the state itself.

Again, nobody argues that. It's possible to experimentally distinguish between the two situations (Bell's theorem). This is a bit of a strawman.

Besides, the wave function is a complex vector; it can't represent ignorance. You have to take |ψ|² to get a real value.
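
A toy numerical illustration of that point (my own example, not from the article): two paths taken with equal-magnitude amplitudes of opposite phase interfere away completely, which no assignment of classical "ignorance" probabilities over the paths can reproduce.

    # Amplitudes with opposite phase cancel: |a1 + a2|^2 = 0.
    a1, a2 = 1 / 2 ** 0.5, -1 / 2 ** 0.5
    print(abs(a1 + a2) ** 2)             # 0.0  -- quantum interference
    print(abs(a1) ** 2 + abs(a2) ** 2)   # ~1.0 -- what mere ignorance of the path would give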


Nice to see that experience with LISP is a desired qualification for the Experimental Physicist job at D-Wave.


Interesting, I did not realize that usable quantum chips now exist.


I have a question about D-Wave. I've been trying to follow their progress in building multi-qubit adiabatic quantum computers for years now, and I see Google is working with them too. But this blog entry states that it's hard to work out whether D-Wave's hardware is really a quantum computer. Why is it so hard? I mean, after all these years, they still can't tell if it's actually working?


To some extent, everything is a quantum computer; the problem is entering the program and reading out the answer accurately. If I understand correctly, the scientific advance reported here is just doing the first part.


Talk about premature optimization...



