IIUC, complex numbers are a number system that supports rotations -- one representation is as an angle and a magnitude. As such they work well at describing systems that have rotational components. This makes them useful for working with waves (light, etc.) and with Fourier transforms/analysis (sine waves), which is why they are used in QM.
If you exclude non-real operations and states you are removing part of the system such that it becomes impossible to work with certain cases -- like handling non-real roots of ax^2 + bx + c polynomials.
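For example, a minimal Python sketch (cmath.sqrt handles a negative discriminant where math.sqrt would raise an error):

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0, whether real or not."""
    d = cmath.sqrt(b * b - 4 * a * c)  # complex sqrt never fails
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -3, 2))  # real roots: ((2+0j), (1+0j))
print(quadratic_roots(1, 0, 1))   # non-real roots of x^2 + 1: (1j, -1j)
```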
It is possible to represent complex numbers as 2x2 matrices, as those can encode 2D rotations. With the matrix formulation you are not dealing with imaginary numbers -- or you are, but they are not encoded as i = sqrt(-1), rather as a 90-degree rotation. IIRC, there is a formulation of Dirac's QED (Quantum ElectroDynamics) using matrices.
Even simpler, complex numbers are really 2D vectors with addition and multiplication defined: a field. There's nothing "imaginary" about that second dimension; it's very frustrating to see them defined that way, because it makes people think of complex numbers as an "escape hatch" out of the reals. When you're working with complex numbers, you are working with a different system: `5 + 0i` is still a complex number because it's really `(5, 0)`.
But when working with complex numbers I hardly ever write (a, b) for a+ib, while I use the "escape hatches" all the time. They solve equations that have no real solution, they give me paths from x=-1 to x=1 that don't cross the origin, etc. There's only so much to learn about C as a vector space, while the theory tying it to R (and even N) is very deep.
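To make that path claim concrete, a tiny numpy sketch (the upper unit semicircle is just one such path):

```python
import numpy as np

t = np.linspace(0, 1, 101)
z = np.exp(1j * np.pi * (1 - t))   # runs from -1 (t=0) to 1 (t=1)
print(z[0], z[-1])                 # ~(-1+0j) and (1+0j)
print(abs(z).min())                # 1.0 -- the path never touches the origin
```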
Thing is, there's no such thing as an escape hatch. Either you are working in the reals, or you are working in the complex plane. They don't "solve equations that have no real solution"; that equation is either a real number equation or a complex number equation, not both. If you work in the complex plane, that is a different equation describing a different space! It just looks the same in standard notation.
If you don't realize this, then you can draw conclusions that don't make sense in the space you're working with. Take a simple equation like y = -x^2 - 5, representing a thrown ball's trajectory. It never crosses zero; there are no solutions. You can't "pop into the complex numbers and find a solution" because the thing it represents is confined to the reals.
So if you find yourself reaching for complex numbers, you have to understand that the thing you are working with is no longer one-dimensional, even if that second dimension collapses back to 0 at the end.
Today I learned that complex numbers can be represented by matrices... thanks! https://www.youtube.com/watch?v=HbUewIIpl6I
Yeah https://xkcd.com/2028/ hit the nail on the head on this.
My mental model is that complex numbers are the first of the basic number systems with no total ordering -- at least none compatible with their arithmetic, since any such order would force i^2 = -1 to be positive. That alone makes them super useful.
Quantum is an odd one, as the name indicates that it deals in quanta: minimum values that can't be divided. The difficult part seems to lie more in systems that have a probability space than in an analytical model that describes them. Which, fair, is not a number system.
The loss of ordering is what makes complex numbers unique and useful for describing systems like rotations and probabilities.
Classical probability works with real numbers (probabilities between 0 and 1). Quantum probability involves amplitudes represented by complex numbers. These amplitudes can interfere with each other like waves, leading to superposition and entanglement.
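A toy sketch of what "amplitudes interfere" means (plain Python; the half-and-half split and the opposite phases are illustrative choices):

```python
import cmath

a1 = 1 / 2 ** 0.5                         # amplitude of path 1
a2 = cmath.exp(1j * cmath.pi) / 2 ** 0.5  # path 2, phase rotated by pi

p_classical = abs(a1) ** 2 + abs(a2) ** 2  # adding probabilities: 1.0
p_quantum = abs(a1 + a2) ** 2              # adding amplitudes first: ~0.0
print(p_classical, p_quantum)              # destructive interference
```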
The phase space formulation of QM still only uses REAL-valued probabilities, but outside the interval [0,1]. I'm not sure I agree with the rest of your comment either.
A function (which is an isomorphism) from complex numbers to 2x2 matrices is a+bi |-> [[a,-b],[b,a]], where the matrix is listed by rows. So i is sent to R = [[0,-1],[1,0]]. R is a 90 degree rotation: you can check that it sends the unit vector [1,0] on the x-axis to [0,1], and the unit vector [0,1] on the y-axis to [-1,0].
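A quick numpy check of that map (just a sketch, but matrix multiplication really does track complex multiplication):

```python
import numpy as np

def as_matrix(z):
    """Encode a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

R = as_matrix(1j)
print(R)        # [[0, -1], [1, 0]] -- the 90 degree rotation
print(R @ R)    # [[-1, 0], [0, -1]] == as_matrix(-1), so "i^2 = -1"

# Multiplication is preserved: (1+2j)*(3+4j) = -5+10j
print(as_matrix(1 + 2j) @ as_matrix(3 + 4j))  # == as_matrix(-5+10j)
```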
I find that cases like this represent one of the biggest problems in today’s research: once someone falsifies something, an entire branch of research gets cut off completely as nobody wants to pursue that path anymore, understandably. But if the “proof” is in fact wrong, then you have just hidden a big part of the research surface from everybody. And usually that’s also where progress is made: when, despite proof, research is pursued because of a gut feeling. Stay skeptical!
Nobody takes what is published at face value [1]; researchers read and reproduce the result. The reproduction is never published. If it's a positive result, the idea is to add a tweak and get a free paper, or combine it with another technique and get a free paper, or copy the same idea to another area and get a free paper, or ... For negative results it's more difficult, but if someone is getting promising results, they will not just drop whatever they are doing because some random said it's impossible.
It's also a matter of reputation. Everyone knows everyone and has read a few of the previous papers (or papers of the advisor/coworker/whatever). If the previous papers were good, it's a good signal to take a look at the new result. If the previous papers were dubious, you skim it in case there is something interesting, but may just ignore it.
[1] Perhaps the exception is medical trials, but they have a lot of rules and paperwork to avoid lying, errors, misrepresentations and other nasty stuff. Anyway, after reading a lot of ivermectin preprints during peak pandemic, I'm not so sure.
Sure, but if the result is convincing enough, it might linger for a long time before someone finds a corner case where it doesn’t hold. For these reasons I think it would be better to have a different approach.
What was wrong with the proof in this case? The paper explicitly states and acknowledges the issue raised by this article before the author was aware of it. The author of the article just contends that it is an experimental issue to set up the unentangled initial states which are required for the experiment, and indeed someone who was going to perform the experiment would need to convincingly demonstrate that the assumptions are met.
The author even admits this "is better than doing no test at all".
Nothing, except the perception of what was said versus what was actually said. (The same happened with Bell's inequality, actually.)
“ (…) you can just mimic the behavior of complex numbers using pairs of real numbers (and appropriately tweaked definitions of operations). (…) What Renou et al are actually claiming is that if you start with quantum mechanics, and then remove all operations and states involving non-real numbers, and then try to emulate what was lost using what remains, you will fail in an experimentally detectable way”
Meaning it’s actually totally possible to encode the complex numbers using only reals, but not to also remove all operators which do the same things as the complex numbers would.
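The first half is easy to see; a minimal pair-of-reals sketch in Python, with no literal i anywhere:

```python
# (a, b) stands in for a + bi; only real arithmetic is used.
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    return (x[0] * y[0] - x[1] * y[1], x[0] * y[1] + x[1] * y[0])

i = (0.0, 1.0)
print(mul(i, i))        # (-1.0, 0.0) -- "i squared is -1"
print(add((5, 0), i))   # (5.0, 1.0) -- 5 + i
```

The Renou et al. claim targets the second half: remove the tweaked operations too, and the gap becomes experimentally detectable.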
Quantum computing research feels like one of those things whose greatest effort would likely be classified research. In fact, you could argue the article in the OP looks like well-poisoning based on the author's conclusions.
So, the point of the entire article is that he didn’t like the title of paper he’s criticizing?
The post implies flaws in the original paper, then at the very end seems to concede it was technically fine and just should’ve been more up front about entanglement.
The post should be edited to be more upfront about what he realized after writing it.
So this is obviously an incredibly technical post. And I can't claim to understand half of it. But I do have one question that may or may not be intelligent. Given that preexisting entanglement is the issue, does that entanglement get "used up" or not? Will it be possible to drain it all by testing for long enough?
No, the pre-shared states are never consumed. They are catalysts, not fuel.
>Not allowing the players to come into the game with entangled states is really, really strange.
I think I saw such a warning on a casino door in LV.
How did they check?
They just observe you, then you're good to come in.
They asked a friend.
Frankly I am so tired of this whole branch of research where people try to be foundational about "quantum theory" but at the same time boil it down to qubits, gates, Bell tests and, well, two-by-two matrices.
Here is my viewpoint, which somehow some people find controversial: quantum theory is first and foremost a description of individual particles. To describe their time evolution, we use the Schrodinger equation:
i d_t Psi = H Psi
What is that "i" there? Oh right, the imaginary unit. So... quantum theory uses complex numbers.
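A minimal numerical illustration of this (toy two-level H; scipy assumed available): even with an all-real Hamiltonian and an all-real initial state, that i forces the evolved state to pick up imaginary parts.

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[0, 1], [1, 0]], dtype=complex)  # toy real Hamiltonian
psi0 = np.array([1, 0], dtype=complex)         # real initial state

t = 1.0
psi_t = expm(-1j * H * t) @ psi0  # solves i d_t Psi = H Psi
print(psi_t)                      # [cos(1), -i*sin(1)] -- genuinely complex
```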
Now you are free to search for another theory without the "i", and perhaps even find something that is somehow mathematically consistent. But that theory either describes experiments just as well as ordinary quantum theory, in which case it is physically equivalent and of no advantage (except to those with strong allergies to complex numbers), or it does not, and then it is wrong.
Of course the last logical possibility is that your theory might do better than quantum theory... but that is the dream only of those who do not know quantum field theory.
/rant, with apologies
There is really nothing to the appearance of complex numbers in QM. In QM we must design wave functions which do the double duty of representing the probability of measurement outcomes AND capturing the symmetries implicit in the system, related to the fact that there are degrees of freedom between preparation of a state and measurement (for example, we may rotate our detector any way we wish before we make a measurement of a particle in a given prepared spin state). To accomplish this we need some number-like objects for our wave function that square to real numbers but have enough structure to represent (in this case) the rotations.
As you venture further into the universe of QFT you find that you need even more exotic number-like objects, like spinors, with their own peculiar structures, but the essence is the same: they must serve the purpose of representing probabilities and symmetries. The complex numbers in QM mean nothing at all except in that they serve these purposes.
If we wish to speak informally and wave our hands a bit, we can say that it isn't so surprising that we find the complex numbers and related number-like objects, because the complex numbers are a promise to square something at a later date and recover a real number, which is what we need to satisfy the requirement to represent probabilities.
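A sketch of that double duty for a spin-1/2 state (numpy; the 60-degree detector angle is an illustrative choice): the same complex amplitudes carry the rotation structure and square to a probability.

```python
import numpy as np

up = np.array([1, 0], dtype=complex)  # prepared spin state |up>

theta = np.pi / 3                     # rotate the detector by 60 degrees
rotated_up = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

amplitude = np.vdot(rotated_up, up)   # <rotated up | up>
print(abs(amplitude) ** 2)            # cos^2(theta/2) = 0.75
```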
In fact, we can formulate classical probabilistic mechanics with complex numbers (the Koopman-von Neumann operator theory) and again, they appear because we want to operate on objects living in a nice Hilbert space which also square to probabilities. It only took me 20 years to understand this, so I can sympathize with confusion.
It's a long time since I read it, but there's a book called "The Structure and Interpretation of Quantum Mechanics" [1] by R. I. G. Hughes. The "Structure" part of it begins by building up most of the mathematical framework (including use of complex numbers, Hilbert spaces, operators, etc), motivated only by the desire to build a physical theory that is probabilistic in nature. It then shows how you can add one extra ingredient that turns the framework into that used for quantum mechanics [2]. I assume that everything discussed up to that point applies equally to Koopman-von Neumann.
It's a really nice book, very self-contained. I think anyone with a basic mathematical education (A-Level or equivalent) could get through it without having to read other things to acquire prerequisites, though they should be prepared to think quite hard.
1. The resemblance to the titles of Gerald Jay Sussman's "Structure and Interpretation" books appears to be coincidental. The title is meant literally: the book is split into two sections, one on the (mathematical) structure of QM and one on its (philosophical) interpretation. There are no similarities in style, pedagogy or subject matter to Sussman's books and no use of, or reference to, programming. The author was a professor of philosophy at the University of South Carolina.
2. He actually lists a collection of alternatives for that extra ingredient, any one of which has the same effect when added.
It's nice to see this reference. I'm currently reading it and about halfway through (making my way through the chapter on Quantum Logic).
The discussion of the EPR paradox and the Kochen-Specker Theorem was really very illuminating.
It is one of my favorites.
The "i" is there because it is a convenient way in our system of mathematics to write out such an equation, but that really comes from the fact that complex numbers have two dimensionality. Our best understanding of the universe demands that higher dimensionality, not necessarily the imaginary-ness.
Yes, a different mathematical formulation may be rewritten into this imaginary form, and thus is mathematically equivalent. But by the same logic a heliocentric system of elliptical orbits is mathematically equivalent to a geocentric system of epicycles. From one perspective there is a certain deeper meaning there - the universe has no absolute reference frame; but if you view your cosmos in terms of epicycles it's very difficult to develop an understanding of what drives those epicycles, namely gravity. Likewise, thinking about quantum mechanics in terms of imaginary numbers may allow for accurate calculations, but nevertheless be an intellectual stumbling block for understanding why the universe is this way.
I personally have no issue with "imaginary" numbers having real physical meaning. Our inability to process the square root of negative 1 seems more like a limitation of our ape brains than of the universe, and likewise for the majority of quantum weirdness. But in throwing up my hands and saying the question cannot be answered, I have guaranteed that I will never find the answer even if it does indeed exist.
The issue with epicycles is that you need an infinite number of them to reproduce the actual orbits, and with an infinite number of epicycles you can describe any shape. Thus the model is as complex as the underlying data.
Quantum Mechanics on the other hand is incredibly constrained and therefore actually says something.
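The "any shape" point is essentially the discrete Fourier theorem in disguise; a small numpy sketch (the sample path here is arbitrary):

```python
import numpy as np

# N sample points of any closed path are reproduced exactly by N
# "epicycles" (complex Fourier terms) -- unlimited epicycles fit anything.
path = np.array([1, 2, -1, 0.5, 0, -2, 1.5, 0], dtype=complex)
N = len(path)

coeffs = np.fft.fft(path) / N  # epicycle radii and phases
k = np.arange(N)
recon = np.array([np.sum(coeffs * np.exp(2j * np.pi * k * n / N))
                  for n in range(N)])
print(np.allclose(recon, path))  # True
```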
And pure ellipses as predicted by Newtonian gravitation also don't line up with actual orbits perfectly. In both cases they are just models approximating reality, one of which happens to be more elegant. I don't know how anyone would be able to jump straight from epicycles to general relativity.
Quantum mechanics likewise is just an approximation of quantum field theories.
It’s not about elegance for the sake of it. The number of constants in a theory provides a meaningful point of comparison, especially if you need to increase them after an experiment.
Epicycles weren't a theory, they were a model. The model did not try to explain why the planets moved in the sky as they did, it only predicted where they'd be. Neither, for that matter, were Copernican or Keplerian mechanics theories. They too required unending tweaking because they also were only approximations of what was actually happening. For the first few centuries after heliocentrism was proposed, it gave worse results and demanded more tweaking. What really won people over was that the phases of Venus were accurately predicted by the model as well. The only way to achieve that result with epicycles was to rearrange everything to be mathematically equivalent to a heliocentric model.
You can reconstruct our modern understanding of the motion of the planets in the reference frame of a static Earth and produce a mathematically equivalent path that draws out epicycles which predict the positions of planets with exactly the same accuracy as our regular formulations. You can rework the representation of the laws of gravity such that they spit out positions in this reference frame. It is an equally valid model of the cosmos, with exactly the same number of starting assumptions; it's just remarkably more complex.
It started from an actual theory, based around the assumption that spherical motion was perfect. They needed 2 epicycles, which actually did work for a while; eventually the most accurate model needed ~17, with people giving up on the underlying theory as the number of terms destroyed the initial idea.
Today with vastly more data and more accurate measurements you’d need effectively infinite terms, which makes it more obvious but you don’t need that level of absurdity to render judgment.
More complete astronomy data from telescopes showed that epicycles needed to be even more complicated than they were.
If we manage to find better tools for QM where we don't need to perform as much post-selection of experimental data, perhaps we'll also find a simpler model.
The phase space formulation of QM uses less complex numbers than the Schrodinger one: it models states using quasi-probability distributions, where the "probabilities" behave in all the usual ways except they can go negative. Interestingly, the classical limit of this (that is, when h goes to zero) still has negative probabilities in it.
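A toy illustration of "usual behavior except negativity" (made-up numbers, not an actual Wigner function): the joint table below normalizes like a probability distribution and has honest marginals, yet one joint entry is negative.

```python
import numpy as np

q = np.array([[0.5, -0.1],
              [0.3,  0.3]])  # toy quasi-probability table for two bits

print(q.sum())        # 1.0 -- normalizes like a probability distribution
print(q.sum(axis=1))  # [0.4, 0.6] -- marginals are honest probabilities
print(q.sum(axis=0))  # [0.8, 0.2]
```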
Yes, the post is focusing on the overall effect of operations (unitaries) rather than their continuous trajectories (Hamiltonians acting on the system via the Schrodinger equation), analogous to working with impulses rather than forces.
To make the continuous case interesting as a compilation problem, you'd need some alternate formulation of the Schrodinger equation, e.g. based on the limit of small powers of unitaries rather than on the matrix exponential, so that deleting i didn't delete literally all processes. Or you could arbitrarily declare real-only hamiltonians are permitted, despite the Schrodinger equation saying "i". But that'd be kinda lame, imo.
(Note: am author of post)
Gidney, that's you?
Huge fan of your work!
I just started my PhD in distributed quantum computing, and my Masters was applying that framework to the QFT.
I came across a number of papers you authored in the process, as well as your blog. In particular, big fan of Kahanamoku-Meyer et al.'s optimistic QFT circuit.
Anyway, keep up the great work!