Did the survey. Mischief managed.
Yes. To strengthen the point:
After Roosevelt got the Einstein letter, very little happened. The White House did not jump into action or start spending money. The matter was referred to a committee that moved very lethargically. Progress only picked up several years later, once the British had independently figured out that a bomb was practical, and had started making noise about it.
Taken and look forward to seeing the results. Thanks for putting this together.
I think “from the Harvard Crimson” is a misleading description.
One of their undergraduate columnists had a very silly column. Undergraduates do that sometimes. Speaking as a former student newspaper columnist, often these columns are a low priority for the authors, and they’re thrown together in a hurry the night before they’re due. The column might not even represent what the author would think upon reflection, let alone what the editorial board of the Crimson as a whole believes. So I wouldn’t read too much into this.
(For non-US readers: The Harvard Crimson is the student-produced newspaper of Harvard University. The editors and writers are generally undergraduates and they don’t reflect any sort of institutional viewpoint.)
It’s a short post, so you can read it quickly. What do you think about his argument?
I think it’s silly. I suspect MIRI, every other singularitarian organization, and every other individual working on the challenges of unfriendly AI could fit comfortably in a 100-person auditorium.
In contrast, “trying to fix industrial capitalism” is one of the main topics of political dispute everywhere in the world. “How to make markets work better” is one of the main areas of research in economics. The American Economic Association has 18,000 members. We have half a dozen large government agencies, with budgets of hundreds of millions of dollars each, devoted to protecting people from hostile capitalism. (The SEC, the OCC, the FTC, etc., are all ultimately about trying to curb capitalist excess. Each of these organizations has a large enforcement bureaucracy, and also a number of full-time salaried researchers.)
The resources and human energy devoted to unfriendly AI are tiny compared to the amount expended on politics and economics. So it’s strange to complain about the diversion of resources.
Professor Evans-Veres is at Oxford, so he’s probably a well-above-average biochemist.
Bear in mind that the question isn’t “can top biochemistry professors help stop/undo death”—it’s “can a high-end biochemist be of help, if you can do magic and rearrange matter at the molecular level.” And that seems relatively plausible.
The Manhattan Project is a very misleading example. Yes, it was “secret”, in that nothing was published for outside review. But the project had a sizeable fraction of all the physics talent in the western world associated with it. Within the project, there was a great deal of information sharing and discussion; the scientific leadership was strongly against “need-to-know” policies.
At that scale, having outside review is a lot less necessary. Nobody in AI research is contemplating an effort of that scale, so the objection to secrecy is valid.
I didn’t down-vote, but was tempted to. The original post seemed content-free. It felt like an attempt to start a dispute about definitions and not a very interesting one.
It had an additional flaw, which is that it presented its idea in isolation, without any context on what the author was thinking, or what sort of response the author wanted. It didn’t feel like it raised a question or answered a question, and so it doesn’t really contribute to any discussion.
I would have understood this post better if it had a short introduction telling me what AIXI-tl is, and why I should care about it. As it is, the thing I learned is that there exists a formalism of learning that doesn’t work in some contexts, which doesn’t surprise me.
I don’t mean this to be dismissive; it sounds like there is an interesting point in there somewhere, but right now, readers who aren’t experts in learning theory probably can’t get it.
I don’t believe athletic competition is zero-sum. The status gain of the winners isn’t offset by a status loss of the losers. In fact, the losers often come out with a gain in status, assuming they play well.
Another way to see that it’s positive-sum is as follows: a close-fought game results in more status for both sides than a rout does. If the game were zero-sum, that extra status would have to come from somewhere. But in fact, if the losers play better, both sides come out better than if the losers had lost badly.
Conclusion: athletics and similar competition is positive-sum, and the size of the total status gain depends on the talent being displayed.
There is an ambiguity in how people are using the word “algorithm”.
Algorithm-1 is a technique that provably accomplishes some identifiable task X, perhaps with a known allowance for error. This is mostly what we talk about in computer science as “algorithms”. Algorithm-2 is any process that can be described and that sometimes terminates. It might not come with any guarantees, it might not come with any clear objectives. A heuristic is an example of Algorithm-2.
Note that this distinction is observer-dependent. Unintelligible code is Algorithm-2, but it becomes Algorithm-1 when you learn what it does and why it works.
Human intelligence is an example of Algorithm-2 and not an example of Algorithm-1 for our purposes.
Machines can do both Algorithm-1 and Algorithm-2.
As near as I can tell, you are highlighting the fact that we don’t have an Algorithm-1 for AI design. But that doesn’t mean there isn’t an Algorithm-2 that accomplishes it and doesn’t mean we won’t find that Algorithm-2.
Note that an Algorithm-2 can output an Algorithm-1 and a proof/explanation of it. There’s nothing stopping us from building a giant-and-horrible combination of theorem prover, sandbox, code mutator, etc, that takes as input a spec, and might at some point output an algorithm, with a proof, that meets the spec.
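To make that concrete, here is a toy sketch (all names are illustrative, not a real system): an exhaustive check over a finite input domain stands in for the theorem prover, and a fixed candidate list stands in for the code mutator. The search loop itself is an Algorithm-2 with no guarantee of success, but anything it does output comes with a certificate that it meets the spec.

```python
# Toy sketch of an Algorithm-2 that outputs an Algorithm-1.
# Exhaustive checking over a finite domain plays the role of the
# theorem prover; a small candidate list plays the role of the
# code mutator.

DOMAIN = range(-5, 6)

def meets_spec(f):
    """Spec: f(x) must equal |x| on the whole (finite) domain."""
    return all(f(x) == abs(x) for x in DOMAIN)

candidates = [              # the "mutator" output: candidate programs
    lambda x: x,
    lambda x: -x,
    lambda x: x if x >= 0 else -x,
]

def search():
    """The Algorithm-2: might find nothing, but any result it
    returns is verified against the spec."""
    for cand in candidates:
        if meets_spec(cand):
            return cand
    return None

found = search()
print(found is not None)  # prints True: a verified candidate exists
```

Here the “proof” is trivial because the domain is finite; the point is only the shape of the construction: an unguaranteed search process whose outputs, when they exist, are guaranteed.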
Immediate consequence the second: Austrian school economics, in its reliance on allowing markets to come to equilibrium on their own, is inhuman.
I suspect all economics is inhuman. I suspect that any complex economy that connects millions or billions of people is going to be incomprehensible and inhuman. By far the best explanation I’ve heard of this thought is by Cosma Shalizi.
The key bit here is the conclusion:
There is a fundamental level at which Marx’s nightmare vision is right: capitalism, the market system, whatever you want to call it, is a product of humanity, but each and every one of us confronts it as an autonomous and deeply alien force. Its ends, to the limited and debatable extent that it can even be understood as having them, are simply inhuman. The ideology of the market tells us that we face not something inhuman but superhuman, tells us to embrace our inner zombie cyborg and lose ourselves in the dance. One doesn’t know whether to laugh or cry or run screaming.
But, and this is I think something Marx did not sufficiently appreciate, human beings confront all the structures which emerge from our massed interactions in this way. A bureaucracy, or even a thoroughly democratic polity of which one is a citizen, can feel, can be, just as much of a cold monster as the market. We have no choice but to live among these alien powers which we create, and to try to direct them to human ends. It is beyond us, it is even beyond all of us, to find “a human measure, intelligible to all, chosen by all”, which says how everyone should go.
Marie shared the 1903 Nobel prize in physics with her husband and Becquerel. Seems like the relevant authorities at the time thought she had a substantial role. Why should we believe you rather than the Nobel Committee? It’s not like 1903 was a big year for establishment scientists looking for female mascots...
I’m not well versed in the early history of programming languages, and don’t want to opine based on glancing at Wikipedia. But Hopper appears to have been involved in a bunch of pre-Fortran work on higher-level languages: http://en.wikipedia.org/wiki/A-0_System. So this isn’t simply about COBOL.
Mapping stars and especially mapping planets turned out to be really important for the development of astronomy. Constellations turn out to be a useless concept. Asking lots of people what constellations they see or where they think the boundaries are would have been wasted astronomical effort.
To return to the real topic under discussion: It might be the case that values are useless and we should only talk about preferences, or somesuch. I am agnostic on this point; I wanted to give an example of how some concept might turn out to be not worth collecting empirical data on.
I’m puzzled why people think putting a bunch of unsocialized children in a pile will turn them into civilized adults.
The impression I have of public schools (at least the good ones) is that younger children are pretty closely supervised, and that much of what elementary teachers do all day is say “No Johnny, that wasn’t nice, apologize to Suzy”, or “Suzy, you need to share the scissors with Tommy.”
The children are practicing social skills with each other, but it’s a structured environment with adult supervision, and with adults who are specifically trained and tasked to help improve the children’s social skills and emotional maturity.
An elementary school classroom that feels like Lord of the Flies, socially, is a very badly run classroom.
Hrm. There’s a category 2.5 of “advice which is common knowledge to long-standing members of a small subculture but not to the public and that isn’t written down.” These are cases where the information isn’t secret, but is primarily conveyed word-of-mouth, not in print, and where it’s too specialized to be “conventional wisdom”.
It might be that there’s some piece of advice that most expert, say, patent lawyers would give you, but that wouldn’t be in a standard book because the topic is too narrow or esoteric.
In my case, I’ve gotten valuable career advice from senior members of my profession. I don’t think it was a unique boon to me and their advice parallels conventional wisdom, but with details that are specific to my narrow field.
“honest people can’t stay self deluded for very long.”
This is surely not true. Lots of wrong ideas last long beyond the point when they are, in theory, recognizably wrong. Humans have tremendous inertia: we stick with familiar delusions rather than replace them with new notions.
Consider any long-lived superstition, pseudoscience, etc. To pick an uncontroversial example, astrology. There were very powerful arguments against it going back to antiquity, and there are believers down to the present. There are certainly also conscious con artists propping up these belief structures—but they are necessarily the minority of purported believers. You need more victims than con artists for the system to be stable.
People like Newton and Kepler, and many eminent scientists since, were serious, sincere believers in all sorts of mystical nonsense: alchemy, numerology, and so forth. It’s possible for smart, careful people to persistently delude themselves, even when the same people, in other contexts, are able to evaluate evidence accurately and form correct conclusions.
High frequency stock trading.
The attack that people are worrying about involves control of a majority of mining power, not control of a majority of the existing coins. So the seized bitcoins are irrelevant. The way the attack works is that the attacker, controlling a majority of mining power, builds an alternative chain of blocks that drops or reverses transactions that already happened. Because the attacker out-mines everyone else, this alternative chain would become the longest chain, and a correct bitcoin implementation would therefore follow it, with bad effects. (Even a majority miner can’t include transactions that are invalid under the protocol rules; nodes would reject them. The power is to censor, reorder, and double-spend.) This in turn would break the existing bitcoin network.
The government almost certainly has enough compute power to mount this attack if they want.
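As a rough illustration of why majority hash power wins, here is a toy simulation (not real Bitcoin code, and the function name is made up): nodes follow the chain with the most cumulative work, so the private fork of a miner with more than half the hash rate outgrows the honest chain in expectation.

```python
import random

def fork_lead(attacker_share, blocks=100_000, seed=0):
    """Simulate which side finds each block; return how far ahead
    (or behind) the attacker's private fork ends up."""
    rng = random.Random(seed)
    lead = 0
    for _ in range(blocks):
        if rng.random() < attacker_share:
            lead += 1   # attacker extends its private fork
        else:
            lead -= 1   # honest miners extend the public chain
    return lead

# With a majority of hash power the attacker's fork is longer on
# average, so longest-chain nodes would eventually reorg onto it;
# with a minority, the fork falls behind and the attack fails.
print(fork_lead(0.6) > 0, fork_lead(0.4) > 0)  # prints: True False
```

This is just the expected-value picture; with a minority share an attacker can still get lucky over short windows, which is why recipients wait for multiple confirmations.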
There’s an asymmetry, which is that the poster isn’t asking other people to give them money.