In principle, I suppose there could be multi-dimensional voting, with separate dimensions for at least degree of agreement, how well-argued a comment is, and degree of relevance to the topic (or at least to the sub-thread). Of course, if one goes far enough down that road, just choosing the multidimensional vote starts to become an energy drain in and of itself… (www.ted.com has at least 8 dimensions for rating their talks—which is enough to dissuade me from rating them...)
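To make the idea concrete, here is a minimal sketch of what a multi-dimensional vote record and a naive aggregation might look like; the dimension names, score range, and averaging rule are illustrative assumptions, not a proposal for any particular site.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Vote:
    # Hypothetical dimensions, one integer score each (say -3..+3):
    agreement: int      # do I agree with the comment?
    argument: int       # how well-argued is it?
    relevance: int      # how relevant is it to the (sub-)thread?

def summarize(votes):
    """Naive per-dimension average; a real site would weight these somehow."""
    return {
        "agreement": mean(v.agreement for v in votes),
        "argument": mean(v.argument for v in votes),
        "relevance": mean(v.relevance for v in votes),
    }

print(summarize([Vote(2, 1, 3), Vote(-1, 2, 0)]))
```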
Are there studies of behavior changes for terminally ill people? That wouldn’t probe changes in financial behavior—winning the lottery isn’t useful to someone with pancreatic cancer. Do we see recreational drug use rise?
I’m not convinced that this is a reasonably common regime for utils(dollars) to be in. I think it might be a reasonably common response to physical trauma: 1001 blows to the head are not as much worse than 1000 blows as the first blow was (particularly if the 100th was fatal...).
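A toy numerical illustration of the saturating regime I have in mind for trauma; the functional form and the constant are arbitrary, chosen only to show the marginal disutility shrinking toward zero.

```python
import math

def disutility(blows, scale=50.0):
    # Saturating (concave) disutility of repeated blows; 'scale' is an
    # arbitrary illustrative constant, not an empirical value.
    return 1.0 - math.exp(-blows / scale)

print(disutility(1) - disutility(0))        # marginal harm of the first blow
print(disutility(1001) - disutility(1000))  # marginal harm of the 1001st: ~0
```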
There is something related (albeit about an industry, rather than a community) in http://www.shirky.com/weblog/2009/03/newspapers-and-thinking-the-unthinkable/
“Revolutions create a curious inversion of perception. In ordinary times, people who do no more than describe the world around them are seen as pragmatists, while those who imagine fabulous alternative futures are viewed as radicals. The last couple of decades haven’t been ordinary, however. Inside the papers, the pragmatists were the ones simply looking out the window and noticing that the real world was increasingly resembling the unthinkable scenario. These people were treated as if they were barking mad. Meanwhile the people spinning visions of popular walled gardens and enthusiastic micropayment adoption, visions unsupported by reality, were regarded not as charlatans but saviors.”
3 isn’t all that different from things we do know our brains do: Consider how our visual system extrapolates across our blind spots, or how we reconstruct memories. If I can construe “approximates from insufficient information” as “hallucinates”, then 3 is rather reasonable.
A co-worker of mine regularly responds to counterexamples to software designs (examples which show where the design breaks) with “Show me an example from a real user case”. :-(
Consider the scale of the living standard drop Robin is predicting: roughly an order of magnitude. That’s much larger than the Great Depression—which set the stage for WWII. A prospect of an order of magnitude drop in living standards would probably be enough to trigger wars of extermination. To put it another way, a cross between “Strong Security” and a policy by population-limiting nations of limiting the populations of neighbors as well (with nukes if needed) could be stable.
Consider an analogy to the cells in our own bodies. Cells can divide (with some exceptions), yet the cells in our bodies do not keep dividing till they run into local resource limits, the equivalent of subsistence limits. There are signalling systems that tell healthy cells when they are “supposed” to stop dividing, and these mostly work. The analog to saying that people will evolve to get around obstacles that stop them from breeding is that cells will mutate till they are dominated by cancer cells. That isn’t the whole story. Our immune system kills off most of the malignant cells we produce—we have social systems at various levels which could do the equivalent. If we (as a global society—a kind of weak singleton) can add layers of control faster than breeding mutations pile up, we may be able to contain runaway breeding indefinitely.
Two comments:
a) In the CS domain, suppose that the phenomenon you were trying to model was the output of a cryptographic-quality pseudo-random generator for which you did not know the seed. Would you expect to be able to model its output accurately? (See the sketch after (b) below.)
b) My gut reaction to your original post was that I’d expect them to partition roughly between cases where there is lots of experimental data compared to the parameter space of the system in question vs. where the parameter space is much larger than the reasonably accessible volume of experimental data. Of course, one doesn’t really know the parameter space till one has a successful model :-( …
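A minimal sketch of the situation in (a), using SHA-256 keyed with a secret seed as a stand-in for a cryptographic-quality generator; the "repeat the previous bit" predictor is just one arbitrary stand-in for a model built without knowledge of the seed.

```python
import hashlib, secrets

seed = secrets.token_bytes(32)   # unknown to the would-be modeler

def bit(i):
    # i-th output bit of a toy keyed generator: SHA-256(seed || counter).
    return hashlib.sha256(seed + i.to_bytes(8, "big")).digest()[0] & 1

# A naive "model" of the stream: predict that each bit repeats its predecessor.
trials = 10_000
hits = sum(bit(i + 1) == bit(i) for i in range(trials))
print(hits / trials)   # ~0.5: without the seed there is no exploitable structure
```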
Ok. In a sense, all of the difference between your intuition and your friend’s intuition can be viewed as how to construe “most”. There are lots of systems in both categories. There is also a bias in which ones we research: unless a problem is extraordinarily important, if attempts to build models for a phenomenon keep failing, and we have any reason to suspect e.g. chaotic behavior, we fall back to e.g. settling for statistical information.
Also, there is a question of how much precision one is looking for: the orbits of the planets look like clockwork even on moderately long timescales—but they do turn out to have chaotic dynamics (iirc, the fastest divergence is in one of the orbital elements of Mars), and this injects chaos into everything else if you want to predict far enough into the future.
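A toy illustration of the sensitive dependence involved, using the logistic map rather than anything orbital; the map, the parameter, and the starting points are arbitrary, but the exponential growth of a tiny initial difference is the generic chaotic behavior.

```python
def logistic(x, r=4.0):
    # r = 4 puts the logistic map in its chaotic regime.
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12   # two trajectories differing by a part in 10^12
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:
        print(f"O(1) divergence by step {step}")   # typically within ~40 steps
        break
```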
This is a side issue but I’m curious as to what people’s reactions are: I’m kind-of hoping that dark matter turns out to be massive neutrinos. Of the various candidates, they seem like the most familiar and comforting. We’ve even seen neutrinos interact in particle detectors, which is way more than you can say for most of the other alternatives… Compared to axions, supersymmetric particles, or WIMPs, massive neutrinos have more of the comfort of home. Anyone feel similarly?
You have a point. I have a somewhat similar view of elements above perhaps Einsteinium. I’ll be more impressed with physics’ control over the electroweak interaction when I see the weak nuclear force equivalent of an electromagnet :-) I wonder what is the maximum particle energy that someone has actually used in a non-elementary-particle-physics-research application? Maybe the incoming beam for a spallation neutron source, somewhere in the MeV range?
Thanks, point taken—I’d been thinking of more exotic WIMPs
I’m missing something. Suppose that my preferences are strictly transitive, but that they violate the other axioms and that there are lots of trades which I view as incomparable (none of A < B, A = B, A > B holds), and that I won’t make an incomparable trade. Why would this leave me vulnerable to being money pumped?
Thanks, I was wondering if all of the axioms were crucial, or mostly the transitivity one.
Perhaps “incomparable” is the wrong approximation. Perhaps a better way to view it is that I view transactions as having frictional costs (if nothing else, the cost of working out to sufficient precision what my actual preferences are). There are a lot of (A, B) pairs such that, if I had A and was offered B in exchange, I would turn down the offer, and the same if I had B and was offered A. Very roughly, assume that I treat each exchange transaction as having some probability of going wrong in some way (e.g. failing in such a way that I wind up with neither object), so the new object’s utility has to be, say, 10% higher than the old object’s utility to offset the transaction risk.
Would this model leave me vulnerable to being money pumped?
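Just to make the question concrete, here is a toy version of the model; the utility numbers, the 10% hurdle, and the $1-per-trade fee are all made up. Since an accepted trade must raise utility by a strict margin, a cycle of offers that returns me to my starting holding can never consist entirely of accepted trades.

```python
# Toy agent with transitive utilities plus a frictional trading hurdle.
utility = {"A": 100.0, "B": 101.0, "C": 102.0}   # illustrative values only

def accepts(held, offered, margin=0.10):
    # Only trade if the offered item beats the held one by at least 10%.
    return utility[offered] >= utility[held] * (1.0 + margin)

held, paid = "A", 0.0
for offered in ["B", "C", "A", "B", "C", "A"]:   # a would-be pump cycle
    if accepts(held, offered):
        held = offered
        paid += 1.0                              # $1 fee per accepted trade

print(held, paid)   # "A", 0.0 -- no offer clears the hurdle, so no pumping
```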
Any thoughts on what the impact of the http://www.research.ibm.com/deepqa/ IBM Watson DeepQA project would be on a Foom timescale, if it is successful (in the sense of approximate parity with human competitors)? My impression was that classical AI failed primarily because of brittle closed-world approximations, and this project looks like it (if successful) would largely overcome those obstacles. For instance, it seems like one could integrate a DeepQA engine with planning and optimization engines in a fairly straightforward way. To put it another way, in the form of an idea futures proposition: conditional on DeepQA being fully successful at a human-equivalent level, what are the odds of a UFAI that is human-equivalent in all areas needed for recursive self-improvement plus replication without human intervention within, say, 5 years after that point? Best wishes, -Jeff
I tend to ignore the dust theory simply because entities which are implemented as scattered states throughout spacetime can’t be interacted with. Even just inverting the order of the states is enough to make interaction impossible—two observers with opposite time directions don’t see each other as having any memories of past interactions.
Thanks for the response! One way of looking at whether to take implementations of minds scattered across disconnected dust seriously is to look at the set of minds we do know about and extrapolate from there. All existing minds that we know of (human, animal—even including any computation that responds to the world, down to a thermostat) consist of causally connected states. “Dust” minds have at least the following problems:
- since we can’t interact with them, what would constitute an experiment to demonstrate that they are really there? Are they observable in any way?
- since the causal processes in the world can’t interact with them, they can’t be tuned by the evolutionary processes that created us or other minds, which again puts them outside the set of minds we could extrapolate to from those we’ve observed
- do they survive Occam’s razor?
I’m not convinced. For instance, I can point to plenty of examples of logic inverters that respond causally to changes in their input logic states by making the inverse changes in their outputs. How does one slice and dice the states of the physical world to label some disconnected set of them as a “dust” inverter? In other words, if one explicitly enlarges the definition of a computing system to include “dust” systems, can one point to a correspondingly enlarged set of data on working examples?
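For contrast, a minimal sketch of the kind of causally connected inverter I mean: each output is determined by the immediately preceding input, so perturbing the input visibly perturbs the output (the one-step delay is just an illustrative modeling choice). I know of no analogous procedure for picking out which scattered, causally unrelated states count as a “dust” inverter.

```python
# A toy causal inverter: output at step t+1 is NOT(input at step t).
inputs = [0, 0, 1, 1, 0, 1]
outputs, state = [], 0
for x in inputs:
    outputs.append(state)   # emit the response to the previous input
    state = 1 - x           # causal update driven by the current input
print(outputs)              # [0, 1, 1, 0, 0, 1]
```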
http://en.wikipedia.org/wiki/De_Bono_Hats