Not all of the MIRI blog posts get cross-posted to LessWrong. Examples include the recent post AGI outcomes and civilisational competence and most of the conversations posts. Since the comment section on the MIRI site doesn't seem to get used much, if at all, perhaps these posts would receive more visibility and more discussion if they were linked to or cross-posted on LW?
Artaxerxes
Luke's IAMA on reddit's r/futurology in 2012 was pretty great. I think it would be cool if he did another; a lot has changed in 2+ years. Maybe to coincide with the December fundraising drive?
I, too, have seen it used too early or in contexts where it probably shouldn't have been used. As long as people use it not as an explanation for something, but rather as a description or judgement, its use as a curiosity stopper is avoidable.
So I suppose there is a difference between saying “bad thing x happens because of civilisational incompetence”, and “bad thing x happens, which is evidence that there is civilisational incompetence.”
Separate from this concern, it also has a slight LessWrong-exceptionalism 'peering at the world from above the sanity waterline' vibe to it. But that's no biggie.
The book the page recommends is Kevin Murphy's Machine Learning: A Probabilistic Perspective. I don't see any of Chris Bishop's books on the MIRI list right now; was Pattern Recognition and Machine Learning there at some point? Or am I missing something you're saying?
Someone has created a fake Singularity Summit website.
(Link is to MIRI blog post claiming they are not responsible for the site.)
MIRI is collaborating with Singularity University to have the website taken down. If you have information about who is responsible for this, please contact luke@intelligence.org.
The ongoing webcomic Strong Female Protagonist has impressed me so far. I recommend reading everything that has been drawn of it to date.
More than a year ago, I read Mortal, a My Little Pony fanfiction with transhumanist themes, and liked it. I recently found out about a short sequel, Mother of Nations, which I also read and enjoyed. If you read Mortal and enjoyed it, you will probably like Mother of Nations.
Mortal has been discussed on LessWrong before, here.
I read Happily Ever After roughly around the time I read Mortal, and I read Clover the Clever just after reading Mother of Nations.
They were all good :)
I recommended Mother of Nations in particular because I realised there may be people in the position I was in: having read and liked Mortal without being aware that there is a follow-up story.
I think I might like Toradora a tiny bit less than you, but apart from that, I'm surprised to agree with pretty much everything you've said on these particular titles. I didn't watch Akuma all the way through, though; I figured it was trash about 2 minutes into the first episode and dropped it without looking back.
Your friend is right about Kishi, though; it's pretty much AoT in space in terms of premise.
A Story on MIRI in the Financial Times.
Luke wrote a post on MIRI’s blog acknowledging the story and making a few clarifications.
FAI concerns seem to be getting more and more high profile lately. MIRI, too, seem more competent now than ever, especially when compared to how they were only a few years ago. Am I alone in thinking these kinds of thoughts? Do others feel like these trends will continue?
I'm really not seeing either of your examples, unfortunately. What's stopping the average-fitness person from noticing that their times aren't as good, in the same way the Olympian-level person would notice that their times aren't as good? What's stopping them from noticing that their morning jog or whatever is tougher?
Why wouldn't the average reader have more difficulty parsing badly punctuated writing as well? Why wouldn't they be able to parse it in different plausible ways too?
I’m just not seeing it.
Edit: To go into more detail:
Persons A, B, and C are secretly transplanted into a low-oxygen universe.
Person A notices their 200m backstroke time is consistently 4 seconds slower than usual.
Person B notices they have to take walking breaks more often than usual on their morning jog, and take 5 minutes extra to complete their usual route.
Person C has more difficulty doing basic tasks.
Persons D, E, and F are each given 10 badly punctuated sentences to read.
Person D finds they can parse 4 of them in different plausible ways.
Person E finds they can parse 2 of them in different plausible ways, and can’t parse 2 of them at all.
Person F can’t parse 4 of them at all.
I see these examples as being just as plausible as yours, if not more so.
Well, then actual fitness level is basically irrelevant, and someone's ability to notice the effects of the environmental change depends mostly on whether or not they do any exercise.
Plausible, but which advances in particular are you thinking of? Do you think what you're saying is likely? Does that mean that the next time there are advances, the references will start up again?
He says that the relation of a supercomputer to man will be like the relation of a man to a mouse, rather than like the relation of Einstein to the rest of us; but what if it is like the relation of an elephant to a mouse?
There's nothing saying there can't be both: entities whose relation to us is like that of an elephant to a mouse, and entities whose relation is like that of a man to a mouse. In fact, we have supercomputers today that might loosely fit the elephant-to-mouse description. In any case, as mice, we don't have to worry about elephants nearly as much as about men, and the existence of elephants might suggest men are around the corner.
It might be a good idea somewhere down the line, but co-ordination of that kind would likely be very difficult.
It might not be so necessary if the problem of friendliness is solved, and AI is built to specification. This would also be very difficult, but it would also likely be much more permanently successful, as a friendly superintelligent AI would ensure no subsequent unfriendly superintelligences arise.
Suppose you or I suddenly woke up with superintelligence, but with our existing goal structure intact (and a desire to be cautious).
Can you show me why a decent person like (I presume) you or I with these new powers would suddenly choose to slaughter the human race as an instrumental goal to accomplishing some other ends?
If CEV (or whatever we’re up to at the moment) turns out to be a dud and human values are inexorably inconsistent and mutually conflicting, one possible solution would be for me to kill everyone and try again, perhaps building roughly humanish beings with complex values I can actually satisfy that aren’t messed up because they were made by an intelligent designer (me) rather than Azathoth.
But really, the problem is that a superintelligent AI has every chance of being nothing like a human, and although we may try to give it innocuous goals we have to remember that it will do what we tell it to do, and not necessarily what we want it to do.
See this Facing the Intelligence Explosion post, or this Sequence post, or Smarter Than Us chapter 6, or something else that says the same thing.
If international leadership could become aware of AI issues, discuss them and sensibly respond to them, I too think that might help in mitigating the various threats that come with AI.
Here are some interesting pieces of writing on exactly this topic:
Did that. So let's get busy and start trying to fix the issues!
Sounds good to me. What do you think of MIRI’s approach so far?
I haven’t read all of their papers on Value Loading yet.
Might we not consider programming in some forms of caution?
Caution sounds great, but if it turns out that the AI's goals do indeed lead to killing all humans or what have you, it will only delay those outcomes, no? So caution is only useful if we program its goals wrong, it realises that humans might consider its goals to be wrong, and it allows us to take another shot at giving it goals that aren't wrong. In other words, corrigibility.
Is the recommended courses page on MIRI’s website up to date with regards to what textbooks they recommend for each topic? Should I be taking the recommendations fairly seriously, or more with a grain of salt? I know the original author is no longer working at MIRI, so I’m feeling a bit unsure.
I remember lukeprog used to recommend Bermudez’s Cognitive Science over many others. But then So8res reviewed it and didn’t like it much, and now the current recommendation is for The Oxford Handbook of Thinking and Reasoning, which I haven’t really seen anyone say much about.
There are a few other things like this. For example, So8res apparently read Heuristics and Biases as part of his review of books on the course list, but it no longer seems to appear there, and under the heuristics and biases section Thinking and Deciding is recommended (once reviewed by Vaniver).