So when Dumbledore asked the Marauder’s Map to find Tom Riddle, did it point to Harry?
This is a good point. The negative side gives good intuition for the “negative temperatures are hotter than any positive temperature” argument.
The distinction here goes deeper than calling a whale a fish (I do agree with the content of the linked essay).
If a layperson asks me what temperature is, I’ll say something like, “It has to do with how energetic something is” or even “something’s tendency to burn you”. But I would never say “It’s the average kinetic energy of the translational degrees of freedom of the system” because they don’t know what most of those words mean. That latter definition is almost always used in the context of, essentially, undergraduate problem sets as a convenient fiction for approximating the real temperature of monatomic ideal gases—which, again, is usually a stepping stone to the thermodynamic definition of temperature as a partial derivative of entropy.
Alternatively, we could just have temperature(lay person) and temperature(precise). I will always insist on temperature(precise) being the entropic definition. And I have no problem with people choosing whatever definition they want for temperature(lay person) if it helps someone’s intuition along.
Because one is true in all circumstances and the other isn’t? What are you actually objecting to? That physical theories can be more fundamental than each other?
I just mean as definitions of temperature. There’s temperature(from kinetic energy) and temperature(from entropy). Temperature(from entropy) is a fundamental definition of temperature. Temperature(from kinetic energy) only tells you the actual temperature in certain circumstances.
Only one of them actually corresponds with temperature for all objects. They are both equal for one subclass of idealized objects, in which case the “average kinetic energy” definition follows from the entropic definition, not the other way around. All I’m saying is that it’s worth emphasizing that one definition is strictly more general than the other.
I think more precisely, there is such a thing as “the average kinetic energy of the particles”, and this agrees with the more general definition of temperature “1 / (derivative of entropy with respect to energy)” in very specific contexts.
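To make this concrete, here is the standard statistical-mechanics statement (a textbook result, sketched here for reference, not anything specific to this thread):

\[
\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N}
\]

Only in the special case of a monatomic ideal gas does the entropy (Sackur–Tetrode) give \(E = \tfrac{3}{2} N k_B T\), i.e., an average translational kinetic energy of \(\tfrac{3}{2} k_B T\) per particle. The kinetic-energy “definition” is a corollary of the entropic one in that special case.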
That there is a more general definition of temperature which is always true is worth emphasizing.
I don’t see the issue in saying [you don’t know what temperature really is] to someone working with the definition [T = average kinetic energy]. One definition of temperature is always true. The other is only true for idealized objects.
According to http://arxiv.org/abs/astro-ph/0503520 we would need to be able to boost our current orbital radius to about 7 AU.
This would correspond to a change in specific orbital energy from −μ/(2 · 1 AU) to −μ/(2 · 7 AU), where μ ≈ 1.32712440018 × 10^20 m³/s² is the standard gravitational parameter of the sun. The difference is about 3.8 × 10^8 Joules per kilogram, or about 2.3 × 10^33 Joules when we restore the reduced mass of the earth/sun system (which I’m approximating as just the mass of the earth).
For scale, that’s roughly a fifth of the total energy released by the sun in 1 year (about 1.2 × 10^34 Joules).
Or, if you like, it’s equivalent to the total mass energy of ~2.5 × 10^16 kilograms of matter (about 0.01% of the mass of the asteroid Vesta).
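A quick back-of-the-envelope script checking those numbers (standard constants, with the earth/sun reduced mass approximated as just the earth’s mass, as above):

```python
# Energy needed to raise Earth's orbit from 1 AU to 7 AU,
# using the specific orbital energy eps = -mu / (2a).
MU_SUN = 1.32712440018e20  # standard gravitational parameter of the sun, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m
M_EARTH = 5.972e24         # kg (stand-in for the Earth-Sun reduced mass)
L_SUN = 3.828e26           # solar luminosity, W
YEAR = 3.156e7             # seconds per year
C = 2.998e8                # speed of light, m/s

d_eps = MU_SUN / 2 * (1 / AU - 1 / (7 * AU))  # J/kg, ~3.8e8
d_E = d_eps * M_EARTH                         # J,    ~2.3e33

print(f"delta eps   = {d_eps:.2e} J/kg")
print(f"delta E     = {d_E:.2e} J")
print(f"solar years = {d_E / (L_SUN * YEAR):.2f}")  # ~0.19
print(f"mass equiv. = {d_E / C**2:.2e} kg")         # ~2.5e16
```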
So until we’re able to harness and control energy on the order of the total energetic output of the sun over an appreciable fraction of a year, we won’t be able to do this any time soon.
There might be an exceedingly clever way to do this by playing with orbits of nearby asteroids to perturb the orbit of the earth over long timescales, but the change in energy we’re talking about here is pretty huge.
I feel like this is just a really obnoxious argument about definitions.
I especially feel like this is a really obnoxious argument about definitions when the wiki article quotes things like:
“Take the supposed illusion of change. This must mean that something, X, appears to change when in fact it does not change at all. That may be true about X; but how could the illusion occur unless there were change somewhere? If there is no change in X, there must be a change in the deluded mind that contemplates X. The illusion of change is actually a changing illusion. Thus the illusion of change implies the reality of some change. Change, therefore, is invincible in its stubbornness; for no one can deny the appearance of change.”
So, to taboo a bunch of words, and to try and state my take on the actual issue as I understand it (including some snark):
B theory: Let there be this thing called spacetime which encodes all moments of time (past, present, future) and space (i.e., the universe). The phenomenal experience of existence is akin to tracking a very particular slice of spacetime as it moves along at the speed that time inches forward, as observed by me.
A theory: My mind is the fundamental metaphysical object, and moments of “time” can only be oriented with respect to my immediate phenomenal experience of reality. Trying to say something about a grand catalog of time (including the future) robs me of this phenomenal experience because I know what I’m feeling, and I’m feeling the phenomenal experience of existing right now, dammit! Point to that on your fancy spacetime chart!
Read this way, I suppose the most succinct objection of the A-theorist is: “If all of spacetime exists, all reference frames are equivalent, etc. etc., why am I, in this moment, existing right now?” To which, I imagine, a B-theorist would respond by saying, “Because you’re right here,” and would then point to their location on the spacetime chart.
But this isn’t actually an argument about what time is like. It’s an argument about whether or not we should privilege the phenomenal experience of existing—of experiencing the now. That is, does me experiencing life right at this moment mean that this moment is special?
I suppose I can see why people that aren’t computationalists would be bothered by the B theory, because it does rob you of that special-ness.
Not that I actually believe most of what I wrote above (just that it hasn’t yet been completely excluded), but if QG introduced small nonlinearities to quantum mechanics, fun things could happen, like superluminal signaling as well as the ability to solve NP-complete and #P-complete problems in polynomial time (which is probably better seen as a reason to believe that QG won’t have a nonlinearity).
You might be asking the wrong question. For example, the set of papers satisfying your first question:
What are the most important or personally influential academic papers you’ve ever read? (call this set A)
has almost no overlap with what I would consider the set of papers satisfying:
Which ones are essential (or just good) for an informed person to have read? (call this set B)
And this is for a couple of reasons. Scientific papers are written to communicate, “We have evidence of a result—here is our evidence, here is our result,” with fairly minimal grounding of where that result stands within the broader scientific literature. Yes, there’s usually an introduction section filled with a bunch of citations, and yes, there’s a conclusion section, but papers are (at least in my field) usually directed at people who are already experts in the paper’s subject (unless that paper is a review article).
And this is okay. Scientific papers are essentially rapid communications. They’re a condensed “I did this!” Sometimes they’re particularly well written and land in category A above. But I can’t think of a single paper in my A column that I’d want a layman to read. None of them would make any sense to an “informed” layman.
My B column would probably have really good popular books written by experts—something like Quantum Computing Since Democritus, or, like others have said, introductory level textbooks.
This article is marked as controversial and has been locked; see the talk page for details.
Quantum computing winter
The Quantum computing winter was the period from 1995 to approximately October 2031 during which experimental progress on the creation of fault-tolerant quantum computers stalled despite significant effort at constructing the machines. The era ended with the publication of the Kitaev-Kalai-Alicki-Preskill (KKAP) theorem in early 2030, which purported to show that the construction of fault-tolerant quantum computers was in fact impossible due to fundamental constraints. The theorem was not widely accepted until experiments performed by Mikhail Lukin’s group in early 2031 verified the bounds provided in the KKAP theorem.
Early history
Quantum computing technology looked promising in the late 20th and early 21st century due to the celebrated Fault Tolerance theorems, as well as the rapid experimental progress towards satisfying the fault tolerance threshold. The Fault Tolerance theorem, which at the time was thought to rest on reasonable assumptions, guaranteed that scalable, fault-tolerant quantum computation could be performed, provided an architecture could be built with an error rate smaller than a known bound.
In the early 2010s, superconducting qubit architectures designed by John Martinis’ group at Google, and then HYPER Inc., looked poised to satisfy the threshold theorems, and considerable work was done to build scaled architectures with many millions of physical qubits by the mid 2020s.
However, despite what seemed to be guarantees via the threshold theorems for their architectures, the Martinis group was never able to report large concurrences for more than 12 (disputed) logical qubits.
The scalability wall
In parallel with the development of the scalable superconducting architectures, many groups continued work on other traditional schemes like neutral atoms, trapped ions, and Nuclear Magnetic Resonance (NMR) based devices. These devices, in turn, ran into the now-named Scalability Wall of 12 (disputed) entangled encoded qubits. For a discussion of the difference between encoded and physical qubits, see Quantum error correction.
The Martinis group hoped that polishing their hardware and scaling the size of their error correction schemes would allow them to surpass the limit, but progress stalled for more than a decade.
Correlated noise catastrophe
Alexei Kitaev, building on earlier work by Gil Kalai, Robert Alicki, and John Preskill, published a series of papers in the late 2020s, culminating in the 2030 theorem now known as the KKAP Theorem, or the Noise Catastrophe Theorem. The proof traced how fundamental limits on the noise experienced by quantum mechanical objects irretrievably destroy the controllability of quantum systems beyond only a few qubits. Uncontrollable correlations were shown to arise in any realistic noise model, essentially disproving the possibility of large scale quantum computation.
Aftermath (This section has been marked as controversial; see the talk page for details)
The immediate aftermath of the publication of the proof was disbelief. Almost all indications had pointed towards scalable quantum computation being possible, with only engineering problems standing in the way. The Nobel Prize-winning (2061) work of Mikhail Lukin’s team at Harvard only reinforced the shock felt by the Quantum Information community when the bounds provided in the KKAP Theorem’s proof were explicitly saturated in cold atom experiments. Funding in quantum information science rapidly dwindled in the following years, and the field of Quantum Information was nearly abandoned. The field has since been reinvigorated by Kitaev’s 2061 proof of the possibility of Quantum Gravitational computers.
Not being in the field, but having experience in making the judgement “Should I read this paper?”, here are a handful of observations:
For:
The paper has a handful of citations, not all from the author (http://scholar.google.com/scholar?cites=8141802968877948536&as_sdt=2005&sciodt=0,5&hl=en), though by no means a huge number.
The abstract is remarkably clear (it’s clear that this is a slight extension of other authors’ work), and the jargon-y words are easily figured out with a gentle perusal of the paper.
It looks like this paper is actually also a chapter in a textbook (http://link.springer.com/chapter/10.1007/978-3-642-11876-0_8).
Against:
Nearly half of the paper’s (very few) references in its reference section are self-citations.
I’d say it’s worth reading if you’re interested in it. Even the against-point above is more of a general heuristic and not necessarily a bad thing.
Fusion is technologically possible (cf. the sun). It just might not be technologically easy.
I disagree that “giving answers is an irreversible operation”. My setup explicitly doesn’t “forget” the calculation (the calculation being simulating someone proving the Riemann hypothesis, and us extracting that proof from the simulation), and my setup is explicitly reversible (because we have the full density matrix of the system at all times, and can in principle perform unitary time evolution backwards from the final state if we wanted to).
Nothing is ever being forgotten. I’m not sure where that came from, because I’ve never claimed that anything is being forgotten at any step. I’m not sure why you’re insisting that things be forgotten to satisfy reversibility, either.
I’m suggesting that the person running the simulation knows the state of the simulation at all times. If this bothers you, pretend everything is being done digitally, on a classical computer, with exponential slowdown.
Such a calculation can be done reversibly without ever passing information into the system.
I’m not sure who you’re talking about, because I’m the person above referring to someone writing on paper—and the paper was meant to also be within the simulation. The simulator is “reading the paper” by virtue of having perfect information about the system.
“Reversible” in this context is only meant to describe the contents of the simulation. Computation can occur completely reversibly.
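To illustrate (a minimal NumPy sketch, not anyone’s actual setup): the simulator holds the full state at every step, so “reading off the result” is just inspecting a variable it already has, and the evolution can be run backwards exactly.

```python
# Minimal sketch: reversible evolution where the simulator has full access
# to the state, so extracting the "answer" writes nothing into the system.
import numpy as np

rng = np.random.default_rng(0)

# A random 4x4 unitary via QR decomposition, standing in for time evolution.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                       # initial state |0>

psi1 = U @ psi0                     # run the simulation forward
result = np.abs(psi1) ** 2          # "read the paper": amplitudes the
                                    # simulator already possesses

psi_back = U.conj().T @ psi1        # apply U^dagger to evolve backwards
assert np.allclose(psi_back, psi0)  # initial state recovered exactly
print(result)
```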
Yeah, it’s already been changed: