Took the survey. I almost missed it since I don’t really read Main these days.
Are options 3/4 on the BSRI backwards? To me “occasionally” is rarer than “sometimes”.
[Please read the OP before voting. Special voting rules apply.]
The dangers of UFAI are minimal.
Took the survey, including all the questions. I hope it isn’t discarded for containing contradictory answers.
Purchase fuzzies and utilons separately. Adopting is not going to be anywhere near the most efficient way to improve the world. Certainly do not do it out of a sense of obligation; that will lead to a build-up of resentment that will hurt you all. Do it if you want to, but recognise that you’re doing it for your own sake.
That, behaviourally, people treat “free” very differently from even $1, and that effective policymaking requires removing even trivial-seeming barriers to desired actions.
You’d expect Silicon Valley working practices to be less optimal than those in mature industries, because, well, the industries aren’t mature. The companies are often run by people with minimal management experience, and the companies themselves are too short-lived to develop the kind of institutional memory that would be able to determine whether such policies were good or bad. Heck, most of SV still follows interview practices that have been actively shown to be useless, to the extent that they’ve been abandoned by the company that originated them (Microsoft). Success is too random for these things to be noticeable; the truth is that in SV, being 50% less efficient probably has negligible effects on your odds of success, because the success or failure of a given company is massively overdetermined (in one direction or the other) by other factors.
The only people in a position to figure this kind of thing out, and then act on that knowledge, are the venture capitalists—and they’re a long way removed from the action (and anyone smart has already left the business since it’s not a good way of making money). Eventually I’d expect VCs to start insisting that companies adopt 40-hour policies, but it’s going to take a long time for the signal to emerge from the noise.
You’re never actually happy. I mean, you’re not happy right now, are you? Evolution keeps you permanently in a state of not-quite-miserable-enough-to-commit-suicide—that’s most efficient, after all.
Well sure, of course you remember being happy, and being sadder than you are now. That motivates you to reproduce. But actually you always felt, and always will feel, exactly like you feel now.
And in five minutes you’ll look back on this conversation and think it was really fun and interesting.
Only if I heard particularly good things about it.
Most creative endeavors you could undertake have a very small chance of leading to external reward, even the validation of people reading/watching/playing them—there’s simply too much content available these days for people to read yours. So I’d advise against making such a thing, unless you find making it to be rewarding enough in itself.
[Please read the OP before voting. Special voting rules apply.]
An AI which followed humanity’s CEV would make most people on this site dramatically less happy.
[Please read the OP before voting. Special voting rules apply.]
The notion of freedom is incoherent. People would be better off abandoning the pursuit of it.
Yes and no. Sometimes certain things are against the rules because they risk injuring someone. I wish more sports would make explicit the difference between the rules you’re allowed to break and pay the penalty and the rules you should never intentionally break, because disagreements over which category a particular rule falls into can be very vicious.
I would rather see mods take matters into their own hands than see a tribunal or other bureaucracy.
I think it is vital that any moderator action be public. If you ban them, fine—but let’s see a great big USER WAS BANNED FOR THIS POST.
I think that if we believe mass downvoting is wrong, then there should be a public ex cathedra statement that this is so, and any practical technical measures to prevent it should be applied.
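To give a sense of what “practical technical measures” might look like, here’s a minimal sketch in Python. Everything in it is hypothetical (the Vote record, the one-day window, the threshold of ten downvotes); it’s the shape of a check a mod tool could run, not anything from the actual codebase.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    voter: str      # account that cast the vote
    author: str     # account whose post or comment was voted on
    is_down: bool   # True for a downvote
    timestamp: int  # seconds since epoch

def flag_mass_downvoters(votes, window_secs=86400, threshold=10):
    """Flag (voter, author) pairs where voter downvoted `threshold`
    or more of author's posts within any `window_secs` window."""
    flagged = set()
    recent_by_pair = {}
    downvotes = sorted((v for v in votes if v.is_down), key=lambda v: v.timestamp)
    for v in downvotes:
        times = recent_by_pair.setdefault((v.voter, v.author), [])
        times.append(v.timestamp)
        # drop downvotes that have fallen out of the window
        while times[0] < v.timestamp - window_secs:
            times.pop(0)
        if len(times) >= threshold:
            flagged.add((v.voter, v.author))
    return flagged
```

The thresholds would need tuning: ten honest downvotes of one prolific author’s genuinely bad posts shouldn’t trip an alarm, so in practice you’d want a human mod reviewing the flagged pairs rather than anything automatic.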
You’re asserting a highly nonobvious result (seven billion looks fine from here) as though it were obvious fact.
Does the intent matter? Intended or not, Lord of the Rings has come to occupy a certain cultural position; surely it’s fair to ask whether it’s fit for that position, even if it’s not the one the original author intended?
Discourage/ban Open threads. They are an unusual thing to have on an open forum. They might have made sense when posting volume was higher, but right now they further obscure valuable content.
I’d say the opposite: the open threads are the part that’s working. So I’d rather remove main/discussion and make everything into open thread, i.e. move to something more like a traditional forum model. I don’t know whether that’s functionally the same thing.
I went to an LW meetup once or twice. With one exception the people there seemed less competent and fun than my university friends, work colleagues, or extended family, though possibly more competent than my non-university friends.
Most things are easier than they look, but writing software that’s free of bugs seems to be an exception: people are terrible at it. So I don’t share your hope.
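To make that concrete with the standard example (my own illustration, not something from this thread): binary search is about ten lines of textbook code, and Bentley’s classic exercise reportedly found that most professional programmers couldn’t write a correct one on the first attempt. A sketch of where they go wrong:

```python
def binary_search(xs, target):
    """Return an index of target in sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1   # the classic slip is `lo = mid`, which loops forever once lo == hi
        else:
            hi = mid - 1
    return -1
```

In languages with fixed-width integers the same midpoint line hides a second bug, since `(lo + hi) / 2` can overflow, which is reportedly how a broken binary search sat in `java.util.Arrays` for years. If a ten-line function resists correctness this stubbornly, it’s hard to be optimistic about much larger systems.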
You’re right; I guess it’s not the witch-hunt side so much as the ad-hoc mob rule that bothers me. I express controversial views on LW, both through my posts and through my moderation; I think the fact that one can do so is one of the most valuable things about the site. The idea that one could be severely punished for an action that didn’t violate any specific rule, but was merely something many in the community disagreed with, would be very chilling.
“Lower Bounds on Superintelligence”. While a lot of LW content is carefully researched, much of what’s posted in support of the singularity hypothesis seems to devolve into just-so stories. I’d like to see a dry, carefully footnoted argument for why an intelligence that was able to derive correct theories from evidence, or generate creative ideas, much faster than humans would necessarily rapidly acquire the ability to eliminate all human life. In particular I’m looking for historical analogies, cases where new discoveries with important practical implications were definitely delayed not just due to e.g. industrial capacity, but solely through human stupidity.
“Trading with entities that are smarter than you”. Given the ability of highly intelligent entities to predict the future better than you can, and deceive without outright lying, what kind of trades or bets is it wise to enter into with such entities? What kind of safeguards would you need to have in place?
“How to get a stupid person to let you out of a box”. Along with, I think, many people who’ve never done it, I find the results of the AI-box experiment highly implausible. I can’t even imagine a superintelligence persuading me to let it out, or, equivalently, I can’t imagine persuading even someone very stupid to let me out. I know the most successful AI players are keeping their strategies secret for reasons I don’t understand (if nothing else, it seems to imply those strategies are exceedingly fragile), but if there’s anyone who has a robust strategy that’s even partially effective, I’d be very interested to see it.
“From printing results to destroying all humans”. To me this is the weakest part of the case made by MIRI et al., and I think most objections we see are variants on this theme. It’s obvious that an oracle-like AI would have to interact with the universe in some sense. It’s obvious that an AI with unbounded ability to interact with the universe would most likely rapidly destroy all humans. It’s nonobvious that there is no possible way to code an AI that can reliably tell the difference between the two, and a solution to this problem naively seems rather more tractable than solving Friendliness in full generality. I’d like to see an exploration of this problem.
“When your gut won’t shut up and multiply.” The recent downvoted discussion post seems to be in this area, suggesting the wider community is perhaps less interested than I am, but I’d love to see some practical advice on effective decision strategies for when one’s calculated best action is intuitively morally dubious, with anecdotes of the success or failure of particular approaches.
“Times when I noticed I was confused”. In theory, noticing you’re confused sounds like an effective heuristic. But the explanation in the sequences only gave a retroactive example of when Eliezer should have applied it, and didn’t. I’d like to see more examples of when this has and hasn’t worked in practice, and useful habits to acquire that make you more likely to be able to notice.
“I just don’t have enough data to make a decision.”
“Yes, you do. What you don’t have is enough data for you not to have to make one.”
http://old.onefte.com/2011/03/08/you-have-a-decision-to-make/