Without actual game levels I can’t decide whether I’d enjoy Alligator Eggs or not. In Manufactoria you have to build Turing machines. It doesn’t sound like fun, but I loved it immensely.
I am happy to see math and physics puzzles and curiosities on Discussion, if the writeup is good enough. (In this case, it is not.) I am very unhappy to see these “are you smart enough?” gotcha posts that are Thomas’ speciality. And calling you out like this was just creepy.
It sure will
How much momentum will it lose before it bounces back? If a large enough wall can make this arbitrarily small, then I think the Fredkin and Toffoli billiard gates can be built out of a thick wall of billiard balls. Luckily, there is no friction in this model, so gates can be arbitrarily large. Sure, the system might start to misbehave after the walls move by epsilon, but this doesn’t seem like a serious problem. In the worst case, we can use throw-away gates that are abandoned after a single use. That model is still as strong as Boolean circuits.
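For reference, the logic these gates compute is easy to write down independently of any billiard-ball realization. A minimal Python sketch of the two gates (just their truth tables, nothing billiard-specific):

```python
def fredkin(c, a, b):
    # Fredkin (controlled-swap) gate: if the control bit c is 1,
    # swap a and b; otherwise pass everything through unchanged.
    return (c, b, a) if c else (c, a, b)

def toffoli(a, b, c):
    # Toffoli (controlled-controlled-NOT) gate: flip c iff a and b are both 1.
    return (a, b, c ^ (a & b))

# Both gates are reversible; Fredkin is even its own inverse:
assert all(fredkin(*fredkin(c, a, b)) == (c, a, b)
           for c in (0, 1) for a in (0, 1) for b in (0, 1))

# Toffoli with the target bit initialized to 0 computes AND,
# which is why these gates suffice for Boolean circuits:
assert all(toffoli(a, b, 0)[2] == (a & b) for a in (0, 1) for b in (0, 1))
```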
Ah, now I see your point. I had the misconception that if you send a billiard ball into a huge brick wall of billiard balls, it will bounce back. Okay, I don’t have a design.
You are right. Originally I became interested in purely photon-based computation because I had an even more speculative idea that seemed to require it. If you have a system that terraforms everything in its path and expands at exactly the speed of light, then you are basically unobservable from the outside. You can probably see where this line of thought leads. I am aware of the obvious counterargument, but as I explained there, it is a bit weaker than it first appears.
Maybe you are right, but it is not immediately obvious to me that the small cross section is a deadly problem. You shouldn’t look at one isolated photon-photon encounter as a logic gate. Even an ordinary electronic transistor would not work without error correction. With error correction, you can build complex systems that seem like magic when you try to understand them at the level of individual electrons.
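To make the error-correction point concrete, here is a toy repetition-code sketch (a generic illustration under an assumed independent bit-flip noise model, not a claim about how a photonic gate would actually be stabilized):

```python
import random

def noisy_copy(bit, p_err):
    # A channel that flips the bit with probability p_err.
    return bit ^ (random.random() < p_err)

def majority_decode(bit, p_err, n):
    # Send n independent noisy copies and take a majority vote.
    votes = sum(noisy_copy(bit, p_err) for _ in range(n))
    return int(votes > n / 2)

# With a 10% per-copy error rate, a 9-way vote fails with probability
# below 0.1%; the residual error shrinks exponentially in n.
trials = 100_000
errors = sum(majority_decode(1, 0.10, 9) != 1 for _ in range(trials))
print(f"residual error rate: {errors / trials:.4%}")
```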
What I mean by “in principle” is not that different from what Fredkin and Toffoli mean by it when talking about their billiard ball computer. The intuition is that once you have figured out that some physical system can be harnessed for computation in principle, you can start working on noise tolerance and energy consumption, and usually those turn out not to be the show-stoppers. And when I eventually try to link “in principle” to “in practice”, I am still not talking about the scale of human engineering. You say you need to generate light for the system, and a strong gravitational field to trap the photons? I say, fine, I’ll rearrange these galaxies into laser guns and gravitational photon traps for you.
You can build the outside walls out of billiard balls. Eventually such a system will disintegrate, but this is no different from any other type of computer. The important thing is that for any given computation length you can build such a system. The size of the system will grow with the required computation length, but only polynomially.
I don’t know much about photon-photon scattering, but I do know that the cross section is very small. I see this as something that does not make a difference from a strictly theoretical point of view, but that might be because I don’t understand the issues. Photonic crystals are not really relevant to my thought experiment, because you definitely can’t build computers out of them that expand at asymptotically the speed of light. Maybe they would be, if you could turn regular material into a photonic crystal by bombarding it with photons.
It’s an intriguing idea: a pure photon-based gate built on the elastic scattering of photons. However, I don’t see how such a system could function, even in principle.
I have no idea either. All I have is a flawed analogy: we could in principle build a computer with nothing but billiard balls as constituent parts. This would work even if billiard balls that meet, instead of bouncing off each other, merely changed their trajectories slightly, with very small probability. I’d like to know whether this crude view of photon-photon scattering is A. a simplification that helps focus on the interesting part of the question, or B. a terrible misunderstanding.
Now I’ll explain the original motivation behind the question. As an old LW regular, you have probably seen a phrase like “turn our future light cone into computronium” tossed out during some FAI discussion. What I am interested in is how to actually do that optimally, if you are limited by nothing but the laws of physics. In particular, I am interested in whether the optimal solution involves light-speed (or asymptotically light-speed) expansion, or (for entropy or other reasons) does not actually end up eating the whole light cone.
Obviously this is not my home turf, so maybe the scattering question is not even relevant to the computronium question. I would appreciate any insights about either of them, or about their relationship.
Can photon-photon scattering be harnessed to build a computer that consists of nothing but photons as constituent parts? I am only interested in theoretical possibility, not feasibility. If the question is too terse in this form, I am happy to elaborate. In fact, I have a short writeup that tries to make the question a bit more precise, and gives some motivation behind it.
The essay you linked to acknowledges the existence of the coordination problems I am talking about, and promises a Part 2 where it deals with them. That Part 2 has not been published yet.
Sure, I never meant to imply that the issue is clear-cut. Many of the people revealed to be informers argued that they only reported the most innocent things about the people they were tasked to spy on. Tens of thousands of books have been written about such moral dilemmas. When people decide that Schindler is a hero, they seem to use a litmus test that is similar to, but definitely not identical to, replaceability. They ask: did he do more than what could reasonably be expected of him under his circumstances? I don’t think focusing on the replaceability part of this very complex question helps clear things up.
I wasn’t trying to say anything deep, really. If the replaceability argument works for investment bankers, then it works for the henchmen of an oppressive regime, too. In my country, many people actually used the replaceability argument, without the fancy name. And in hindsight, people in my country agree that they shouldn’t have used it. So yeah, maybe it’s the modus tollens. But maybe it’s simpler than that: maybe these people misjudged how replaceable they really were. In the eighties, more and more people dared to say no to the Hungarian secret service, with fewer and fewer consequences.
By the way, the apparently still-unpublished Part 2 of jkaufman’s link will deal with this issue.
As Douglas_Knight shows, my comment wasn’t really well thought out. However, the idea is that a reflective decision theory agent considers the implications of the fact that, whatever her decision is, similar agents will reach a similar decision. This makes such agents cooperate in Prisoner’s Dilemma and Tragedy of the Commons situations, where “if all of us behaved so selfishly, we would be in big trouble”. The idea is sometimes called superrationality.
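A toy illustration of the difference, using conventional Prisoner’s Dilemma payoff numbers (the specific values below are just the textbook ones):

```python
# Row player's payoff: (my move, their move) -> score.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# A classical agent treats the opponent's move as fixed; defection
# dominates (5 > 3 and 1 > 0), so two such agents both defect:
classical_outcome = payoff[("D", "D")]          # 1

# A superrational agent playing against a relevantly similar agent
# knows both will reach the same decision, so only the symmetric
# outcomes are reachable, and cooperation wins:
superrational_outcome = max(payoff[("C", "C")], payoff[("D", "D")])  # 3

print(classical_outcome, superrational_outcome)
```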
I find the replaceability assumption very problematic, too. If this weren’t LW, I would simply state the obvious and say that all sorts of evil stuff can be justified by replaceability. But this is LW, so I’ll say that replaceability does not hold for reflective decision theories.
I am a great fan of both guys, but I don’t think Weiner’s bitterness goes well with Yudkowsky’s pathos.
“paying 75 cents a day to have an extra 17 hours of leisure a year”
that is, paying 75 cents a day for roughly 2.8 extra minutes of leisure. For a U.S. knowledge worker this is probably a good deal, but for a Hungarian housewife, for example, it is ridiculously bad, even if you work with more realistic annualized costs.
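Spelling out the arithmetic behind that (a quick sketch using only the numbers quoted above):

```python
cost_per_day = 0.75                  # dollars per day
leisure_minutes_per_year = 17 * 60   # 17 hours expressed in minutes

minutes_gained_per_day = leisure_minutes_per_year / 365   # ~2.8 min/day
dollars_per_leisure_hour = cost_per_day * 365 / 17        # ~$16.10/hour

print(f"{minutes_gained_per_day:.1f} minutes/day at "
      f"${dollars_per_leisure_hour:.2f} per hour of leisure gained")
```

So the deal amounts to buying leisure at about $16 an hour, which is the comparison that makes the difference between the two cases obvious.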
That was an interesting read, thanks. But I laughed out loud when they explained how they increased the IQ variance of their sample. The original study worked with students from the University of Bern, a group too homogeneous with respect to intelligence. To increase diversity, the replication works with students from Georgia Tech, Georgia State University, and Michigan State University. That is playing to the stereotypes.