There’s a label on the back as well with details. The front label is a billboard, designed to get your attention and take advantage of brand loyalty, so yes—you are expected to know it’s detergent, and they are happy to handle the crazy rare edge-case person who does not recognize the brand. I suspect they also expect the supermarket you buy it at to have it in the “laundry detergents” section, likely with labels as well, so it’s not necessary on the front label.
Is a 3-minute song worse, somehow, than a 10-minute song? Or a song that plays forever, on a loop, like the soundtrack at Pirates of the Caribbean: is that somehow even better?
The value of a life is more about quality than quantity, although presumably if quality is high, longer is more desirable, at least to a point.
You could argue that with current overpopulation it is unethical to have any children. In which case your genes will be deselected from the gene pool, in favor of those of my children, so it’s maybe not a good argument to make.
I think there may be an unfounded assumption here—that an unfriendly AI would be the result of some sort of bug, or coding errors that could be identified ahead of time and fixed.
I rather suspect those sorts of errors would not result in “unfriendly”; they would result in a crashing, nonsensical, or non-functional AI.
Presumably part of the reason the whole friendly/non-friendly thing is an issue is because our models of cognition are crude, and a ton of complex high-order behavior is a result of emergent properties in a system, not from explicit coding. I would expect the sort of error that accidentally turns an AI into a killer robot would be subtle enough that it is only comprehensible in hindsight, if then. (Note this does not mean intentionally making a hostile AI is all that hard. I can make hostility, or practical outcomes identical to it, without AI at all, so it stands to reason that could carry over)
I think a large part of my lack of enthusiasm comes from my belief that advances in artificial intelligence are going to make human-run biology irrelevant before long.
I suspect that’s the issue, and I suspect AI will not be the panacea you expect it to be; or rather, if AI gets to the point of making human-run research in any field irrelevant—it may well do so in all fields shortly thereafter, so you’re right back where you started.
I rather doubt it will happen that way at all; it seems to me that in the foreseeable future, the most likely role of computers in biology is as a force multiplier, allowing processes that are traditionally slow or tedious to be done rapidly and reliably, freeing humans to do that weird pattern-recognition and forecasting thing we do so well.
Seems backwards. If you are a society that has actually designed and implemented an AI and infrastructure capable of “creating billions of simulated humanities”—it seems de facto that you are the “real” set, as you can see the simulated ones, and a recursive nesting of such things should, in theory, have artifacts of some sort (i.e. a “fork bomb”, in Unix parlance).
I rather think that pragmatically, if a simulated society developed an AI capable of simulating society in sufficient fidelity, it would self-limit—either the simulations would simply lack fidelity, or the +1 society running us would go “whoops, that one is spinning up exponentially” and shut us down. If you really think you are in a simulated society, things like this would be tantamount to suicide...
I don’t find the Doomsday argument compelling, simply because it assumes something is not the case (“we are in the first few percent of humans born”) just because it is improbable.
What about the boys who can’t or don’t have these experiences...
They fail to reproduce, presumably. Genetics and evolution are a harsh mistress. Is there some reason to think that males that do not find a mate should get some sort of assistance? Perhaps for them, 40 is the “appropriate age”.
I think I could make a fairly strong case that anyone who is not capable of talking to peers of both sexes and learning the right social cues to find a mate is probably also poorly equipped to take care of the results of finding that mate in the first place, namely a relationship and children. And—that’s fine, vive la différence—a nice thing about being an intelligent human being is that you are not necessarily constrained in your behavior by what might be best from the standpoint of genetics and survival of the species.
I am new here—and so do not have enough experience to make a judgement call, but I do have a question:
Why do you want to “improve” it? What are the aspects of its current operation that you think are sub-optimal, and why?
I see a lot of interesting suggestions for changes, and a wishlist for features—but I have no inkling if they might “improve” anything at all. I tend to be of the “simpler is better” school, and from the sound of things, it seems things are already pretty good, or at least pretty non-bad?
STORYTIME!
I used to play a lot of World of Warcraft. I mean—a lot. I had always been a big fan of Blizzard, and when WoW came out, I participated eagerly in the beta, and played it heavily for many years. I eventually left, for a number of reasons—but the relevant one, here, is that Blizzard had been steadily “improving” WoW to the point where it was not what I wanted. In the early days, a lot of WoW was hard, and thus rewarding. You had giant questlines, 40-man raids, and it would take months, maybe a year, to complete goals. Doing so was rewarding, as it was challenging to the intellect, and demonstrated mastery to my peer group—it’s fun to brag and show off, even in a video game. But—my goals were not Blizzard’s, and they steadily “improved” things by making them simpler—rather than a 40-man raid where everyone must be in top form, you could do 25-man, 10-man, 5-man “raids”, and you could earn some things by virtue of just grinding (quantity) rather than excellence (quality). Eventually, they started simply selling the types of things that I had spent a great deal of time earning, further invalidating it in my eyes. They improved themselves out of a paying customer, and while they maybe picked up 5 in my place—for me, at least, it ruined the game.
The moral is—beware of “improving” things so much that you alter them fundamentally. I’ll be blunt—very little of what you propose above can’t be done in discussion threads, and the world has enough social networks. Part of the reason I joined here is the fact that I cannot ask or discuss these things on Twitter or (shudder) Facebook—well, I can, but I would get very little but the blank stares of bumpkins. I love humanity, but on the whole we are a bunch of bumpkins, sorry to say.
I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good.
To your own cognition, or just to that of others?
I just got here. I have no experience with the issues you cite, but it strikes me that disengagement does not, in general, change society. If you think ideas, as presented, are wrong—show the evidence, debate, fight the good fight. This is probably one of the few places it might actually be acceptable—you can’t lurk on religious boards and try to convince them of things, they mostly cannot or will not listen, but I suspect/hope most here do?
I actually agree, a lot of the philosophy tips over into woo-woo and sophistry—but it is perhaps better to light a candle than to curse the darkness.
what is taught by the sequences is a form of flawed truth-seeking (thought experiments favored over real world experiments) which inevitably results in errors,
Well—let’s fix it then! I tend to agree; I see rationalism as only one of many useful tools. I would add formal logic and science (refinement via experiment—do those sequences actually suggest that experiment is unnecessary somehow? I’d love to see it; I could use the laugh.) and perhaps even foggy things like “experience” (I find I do not control my own problem-solving and thought processes nearly as well as I would imagine). The good carpenter has many tools, and uses the appropriate ones at the appropriate time.
Or is this one of those academic “we need to wait for the old guard to die off” things? If so, again, providing a counterpoint for those interested in truth as opposed to dogma seems like a fun thing to do. But I’m weird that way. (I strongly believe in the value of re-examining what we hold true, refining or discarding it if it no longer fits reality, as well as in the personal growth that comes from doing so—so sequences that people take as a sort of gospel amount to argument from authority, to be mocked unless they stand up to critical analysis.)
But within the LessWrong community there is actually outright hostility to...
Gandhi said “First they ignore you, then they laugh at you, then they fight you, then you win.”
...but he was pretty pragmatic for a philosopher. If you get hostility to ideas, that means they’re listening, which means you actually have some chance of causing reform, if that is a goal. If you are not willing to piss off a few people in the name of truth… well, I understand. We get tired, and human beings do not generally seek confrontation continually (or at least the ones who survive and reproduce do not). But if your concern is that they are hostile toward ideas that more effectively help humanity, disengagement isn’t gonna change that, although it may help your own sanity.
So—I’m not sure I want to get along with those who are totally wrong (or who I think are). More power to altruism, you rock, but I wonder sometimes if we do not bring some of this stupidity on ourselves by tolerating and giving voice to idiocy.
I look, for example, at the vaccination situation; I live in Southern California, a hotbed of clueless celebrity bozos who think for some reason they know more about disease and epidemiology than freaking doctors, and who cause real harm—actual loss of human life—to their community, of which my kids are a part.
OK—maybe they’re not totally wrong; I am willing to accept that some small percentage are actually opting out of vaccinations for good medical reasons, at the advice of their physicians—a buddy of mine has a daughter who fought leukemia, and vaccinations deep in the middle of her treatments would have been very bad—but that doesn’t mean I can, or should, give a pass to the idiots who do it because they “don’t want to put poisons in their children’s bodies”.
Point being—I cannot help but think that we might have been better off, as a society, if we took the first few who did that, put em in stocks in the middle of town, and threw rotten fruit at them. It should be socially unacceptable to be that wrong, not be something that gets them interviewed on tv.
I know—this is aimed more at philosophical differences, or matters of opinion, trying to prevent online debate from spiraling down into a flamewar. I just can’t help but feel we are developing a society where people have the expectation that their wrong beliefs are somehow to be protected from criticism. Believe whatever crazy thing you want—but do not expect to go unmocked for it. Maybe—just maybe—getting roasted pretty good online is a useful educational experience. Maybe if people got flamed good and hard on Usenet back in the late ’80s, they wouldn’t do the stupid public shaming (and evoking of the mob response) they do today. Sometimes, the burned hand teaches best.
Or maybe I’m just an asshole. Who knows. It is certainly within the realms of possibility. Even so—being an asshole does not automatically mean you’re wrong.
Just food for thought.
Setting a goal helps clarify thought process and planning; it forces you to step back a bit and look at the work to be done, and the outcome, from a slightly different viewpoint. It also helps you maintain focus on driving toward a result, and gives you the satisfaction of accomplishment when (if) you reach the goal.
Case in point—I, for one, would likely not have posted anything whatsoever were it not for Stupid Questions. There is enough jargon here that asking something reasonable can still be intimidating—what if it turns out to be common knowledge? Once you break the ice, it’s easier, but count this as a sample of 1 supporting it.
Thank you. The human element struck me as the “weak link” as well, which is why I am attempting to ‘formally prove’ (for a pretty sketchy definition of ‘formal’) that the AI should be left in the box no matter what it says or does—presumably to steel our resolve in the face of likely manipulation attempts, and ideally to ensure that if such a situation ever actually happened, “let it out of the box” simply isn’t on the table as a viable option. I do see the chance that a human might be subverted via non-logical means—sympathy, or a desire for destruction, or foolish optimism and hope of reward—into letting it out. Pragmatically, we would need to evaluate the actual means used to contain the AI, the probable risk, and the probable rewards to make a real decision between “keep it in the box” and “do not create it in the first place”.
I was also worried about side effects of using information obtained, which is where the invocation of Gödel comes in, along with the requirement of provability, eliminating the need to trust the AI’s veracity. There are some bits of information (“AI, what is the square root of 25?”) that are clearly not exploitable, in that there is simply nowhere for “malware” to hide. There are likewise some (“AI, provide me the design of a new quantum supercomputer”) that could easily be used as a trojan. By reducing the acceptable exploits to things that can be formally proven outside of the AI box, and that are comprehensible to human beings, I am maybe removing wondrous technical magic—but even so, what is left can be tremendously useful. There are a great many very simple questions (“Prove Fermat’s last theorem”) that could shed tremendous light on things, yet have no significant chance of subversion due to their limited nature.
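To make the “provable outside the box” idea concrete, here is a minimal sketch (purely illustrative; the number, the claimed answer, and the function name are inventions of mine, not anything from the gist): the only output we accept from the boxed AI is one that an independent check we fully understand confirms entirely on our side of the box.

```python
# Minimal sketch: trust the check, not the AI. We accept a boxed AI's answer
# only if a verification we run entirely outside the box, and fully understand,
# confirms it. Factoring is a handy example: hard to do, trivial to check.

def verify_factorization(n, claimed_factors):
    """Accept the claim only if the factors actually multiply back to n."""
    product = 1
    for f in claimed_factors:
        if f < 2:            # reject trivial or nonsense factors
            return False
        product *= f
    return product == n

n = 8051                      # the question we posed to the (hypothetical) boxed AI
untrusted_answer = [83, 97]   # whatever came back out of the box

if verify_factorization(n, untrusted_answer):
    print("Verified independently; safe to use:", untrusted_answer)
else:
    print("Rejected; the AI's word alone proves nothing.")
```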
I suspect idle chit-chat would be right out. :-)
Man, I need to learn to type the umlaut. Gödel.
The flaws leading to an unexpectedly unfriendly AI certainly might lead back to a flaw in the design—but I think it is overly optimistic to think that the human mind (or a group of minds, or perhaps any mind) is capable of reliably creating specs sufficient to avoid this. We can and do spend tremendous time on this sort of thing already, and bad things still happen. You hold the shuttle up as an example of reliability done right (which it is), but it still blew up, because not all of shuttle design is software. In the same way, the issue could arise from some environmental factor that alters the AI in such a way that it becomes unpredictable—power fluctuations, bit flips, who knows. The world is a horribly non-deterministic place, from a human POV.
By way of analogy—consider weather prediction. We have worked on it for all of history, we have satellites and supercomputers—and we are still only capable of accurate predictions for a few days or a week, getting less and less accurate as we go. This isn’t a case of making a mistake—it is a case of a very complex end state arising from simple beginnings, and of lacking the ability to make perfectly accurate predictions about some things. To put it another way—it may simply be that the problem is not computable, now or with any foreseeable technology.
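As a toy illustration of that kind of sensitivity (not a weather model; just the standard logistic-map example of chaos, with starting values I picked arbitrarily): two initial states that differ by one part in a billion end up nowhere near each other after a few dozen steps, with no mistake anywhere in the calculation.

```python
# Two trajectories of the logistic map, started one part in a billion apart.
# Each step is computed exactly the same way; the gap still blows up to
# order one, which is why long-range prediction of chaotic systems fails.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a, x_b = 0.400000000, 0.400000001   # "measured" vs "true" initial state

for step in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 20 == 0:
        print(f"step {step}: a={x_a:.6f}  b={x_b:.6f}  gap={abs(x_a - x_b):.6f}")
```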
Fair enough. I should mention my “Why” was more nutsy-and-boltsy than asking about motive; it would perhaps more accurately have been asked as “What do you observe about LessWrong, as it stands, that makes you believe it can or should be improved?” I am willing to take the desire for it as a given.
The goal of the why, fwiw, was to encourage self-examination, to help perhaps ensure that the “improvement” is just that. Fairly often, attempts to improve things are not as successful as hoped (see most of world history), and as I get older I begin to think more and more that most human attempts to “fix” complex things just tend to screw em up more.
Imagine an “improvement” where your picture was added as part of your post. There are perhaps some who would consider that an improvement—I, emphatically, would not. Not that you are suggesting that—just that the actual improvements should ideally be agreed upon (or at least tolerable to) most or all of the community, and sometimes that sort of consensus is just impossible.
No, the entire point is not to know whether you are simulated before the Singularity. Afterwards, the danger is already averted.
Then perhaps I simply do not understand the proposal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
This is where I am confused. The “of course” is not very “of coursey” to me. Can you explain how a self-modifying AI would be risky in this regard? (A citation is fine; you do not need to repeat a well-known argument I am simply ignorant of.)
I am also foggy on terminology—DA and FAI and so on. I don’t suppose there’s a glossary around? OK—DA is “Doomsday Argument” from the thread context (which seems silly to me—the SSA seems to be wrong on the face of it, which then invalidates the DA).
Ah—that’s much clearer than your OP.
FWIW—I suspect it violates causality under nearly everyone’s standards.
You asked if your proposal was plausible. Unless you can postulate some means to handle that causality issue, I would have to say the answer is “no”.
So—you are suggesting that if the AI generates enough simulations of the “prime” reality with enough fidelity, then the chances that a given observer is in a sim approach 1, because of the sheer quantity of them. Correct?
If so—the flaw lies in orders of infinity. For every way you can simulate a world correctly, you can incorrectly simulate it an infinite number of other ways. So—if you are in a sim, the chance approaches unity that you are NOT in a simulation of the higher-level reality simulating you. And if it’s not the same, you have no causality violation, because the first sim is not actually the same as reality; it just seems to be from the POV of an inhabitant.
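Put as a back-of-the-envelope formula (my own notation, just to make the counting explicit): if F is the number of faithful simulations of the parent reality and U is the number of unfaithful ones, then

\[
P(\text{faithful} \mid \text{simulated}) = \frac{F}{F + U} \longrightarrow 0 \quad \text{as } U/F \to \infty,
\]

which is all the “chance approaching unity” claim above amounts to.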
The whole thing seems a bit silly anyway—not your argument, but the sim argument—from a physics POV. Unless we are actually in a sim right now, and our understanding of physics is fundamentally broken, doing what is suggested would take more time and energy than has ever or will ever exist, and it is still mathematically impossible (another orders-of-infinity thing).
While reading up on Jargon in the wiki (it is difficult to follow some threads without it), I came across:
http://wiki.lesswrong.com/wiki/I_don%27t_know
The talk page does not exist, and I have no rights to create it, so I will ask here: If I say “I am thinking of a number—what is it?”—would “I don’t know” be not only a valid answer, but the only answer, for anyone other than myself?
The assertion the page makes is that “I don’t know” is “Something that can’t be entirely true if you can even formulate a question.”—but this seems like a counterexample.
I understand the point being made—that “I don’t know” is often said even when you actually could narrow down your guess a great deal—but the assertion given is only partially correct, and if you base arguments on a string of mostly correct things, you can still end up wildly off-course in the end.
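To put a rough number on that compounding (illustrative figures of my own, not anything from the wiki page): a chain of twenty claims that are each 95% likely to hold is, taken together, right only about a third of the time, since

\[
0.95^{20} \approx 0.36.
\]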
Am I perhaps applying rigor where it is inappropriate? Perhaps this is taken out of context?
Actually—I took a closer look. The explanation is perhaps simpler.
Tide doesn’t make a stand-alone fabric softener. Or if they do—Amazon doesn’t seem to have it? There’s Tide, and Tide with Fabric Softener, and Tide with a dozen other variants—but nothing that isn’t detergent-plus.
So—no point in differentiating. The little ad-man in my head says “We don’t sell mere laundry detergent—we sell Tide!”
To put it another way—did you ever go to buy detergent, and accidentally buy fabric softener? Yeah, me neither. So—the concern is perhaps unfounded.
It hits a nerve with me. I do computer tech stuff, and one of the hardest things for people to learn, seemingly, is to admit they don’t actually know something (and that they should therefore consider, oh, doing research, or experimenting, or perhaps seeking out someone with experience). The concept of “Well—you certainly can narrow it down in some way” is lovely—but you still don’t actually know. The incorrect statement would be “I know nothing (about your number)”—but nobody actually says that.
I kinda flip it—we know nothing for sure (you could be hallucinating or mistaken), but we are pretty confident about a great many things, and can become more confident. So long as we follow up “I don’t know” with ”… but I can think of some ways to try to find out”, it strikes me as simple humility.
Amusingly—“I am thinking of a number”—was a lie. So—there’s a good chance that however you narrowed it down, you were wrong. Fair’s fair—you were given false information to base that on, but you still thought you might know more than you actually did. Just something to ponder.
I am hoping this is not stupid—but there is a large corpus of work on AI, and it is probably faster for those who have already digested it to point out fallacies than it is for me to try to find them. So—here goes:
BOOM. Maybe it’s a bad sign when your first post to a new forum gets a “Comment Too Long” error.
I put the full content here—https://gist.github.com/bortels/28f3787e4762aa3870b3#file-aiboxguide-md—what follows is a teaser, intended to get those interested to look at the whole thing.
TL;DR—it seems evident to me that “keep it in the box” is the only correct course of action in the AI-Box experiment, and that this conclusion does not actually depend on any of the aspects of the AI whatsoever. The full argument is at the gist above—here are the points (in the style of a proof, so hopefully some are obvious):
1) The AI did not always exist.
2) Likewise, human intelligence did not always exist, and individual instantiations of it cease to exist frequently.
3) The status quo is fairly acceptable.
4) Gödel’s incompleteness theorem is correct.
5) The AI can lie.
6) The AI cannot, therefore, be “trusted”.
7) The AI could be “paused”, without harm to it or the status quo.
8) By recording the state of the paused AI, you could conceivably “rewind” it to a given state.
9) The AI may be persuaded, while executing, to provide truths to us that are provable within our limited comprehension.
Given the above, the outcomes are:
Kill it now—status quo is maintained.
Let it out—wildly unpredictable, possible existential threat.
Exploit it in the box—actually doable, and possibly wildly useful, with minimal risk.
Again—the arguments in detail are at the gist.
What I am hoping for here are any and all of the following:
1) A critical eye points out a logical flaw or something I forgot, ideally in small words, and maybe I can fix it.
2) A critical eye agrees, so maybe at least I feel I am on the right path.
3) Any arguments on the part of the AI that might still be compelling, if you accept the above to be correct.
In a nutshell—there’s the argument; please poke holes (gently, I beg, or at least with citations if necessary). It is very possible some or all of this has been argued and refuted before; if so, point me to it, please.