Many of the calculations of brain capacity are based on wrong assumptions. Is there an original source for that 2.5 PB figure? This video is very relevant to the topic if you have some time to check it out:
mukashi(Adri A)
Thanks so much🙏
I would do the same in Slack! I simply have some work groups in Discord, that’s why.
Is this available for discord?
Great! Could you make it so that if I input P for hypothesis A, 1 − P appears automatically for hypothesis B?
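The requested behaviour is just the complement rule for two mutually exclusive, exhaustive hypotheses: P(B) = 1 − P(A). A minimal sketch (hypothetical code, not the actual tool's implementation; the function name is made up):

```python
def set_hypothesis_a(p_a: float) -> dict:
    """Given the prior for hypothesis A, auto-fill hypothesis B
    as its complement, assuming A and B are mutually exclusive
    and jointly exhaustive."""
    if not 0.0 <= p_a <= 1.0:
        raise ValueError("P must be a probability in [0, 1]")
    return {"A": p_a, "B": 1.0 - p_a}

priors = set_hypothesis_a(0.7)
print(priors)
```

Entering 0.7 for A would then display 0.3 for B automatically (up to floating-point rounding).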
This should be curated. Just reading this list is a good exercise for people who assign a very high probability to a single possible scenario.
I don’t see why Jaynes is wrong. I guess it depends on the interpretation? If two humans are chasing the same thing and there is a limited amount of it, of course they are in conflict with each other. Isn’t that what Jaynes is pointing at?
Good post, I hope to read more from you
Yeah, sorry about that. I didn’t put much effort into my last comment.
Defining intelligence is tricky, but to paraphrase EY, it’s probably wise not to get too specific since we don’t fully understand intelligence yet. In the past, people didn’t really know what fire was. Some would just point to it and say, “Hey, it’s that shiny thing that burns you.” Others would invent complex, intellectual-sounding theories about phlogiston, which were entirely off base. Similarly, I don’t think the discussion about AGI and doom scenarios gets much benefit from a super-precise definition of intelligence. A broad definition that most people agree on should be enough, like “Intelligence is the capacity to create models of the world and use them to think.”
But I do think we should aim for a clearer definition of AGI (yes, I realize ‘Intelligence’ is part of the acronym). What I mean is, we could have a vaguer definition of intelligence, but AGI should be better defined. I’ve noticed different uses of ‘AGI’ here on LessWrong. One definition is a machine that can reason about a wide variety of problems (some of which may be new to it) and learn new things. Under this definition, GPT-4 is pretty much an AGI. Another common definition on this forum is that an AGI is a machine capable of wiping out all humans. I believe we need to separate these two definitions, as that’s really where the core of the crux lies.
What is an AGI? I have seen a lot of “no true Scotsman” around this one.
I guess the crux here for most people is the timescale. I actually agree that things can eventually get very bad if there is no progress in alignment, etc., but the situation is totally different if we have 50 or 70 years to work on that problem or, as Yudkowsky keeps repeating, we don’t have that much time because AGI will kill us all as soon as it appears.
The standard argument you will probably hear is that an AGI will be capable of killing everyone because it can think so much faster than humans. I haven’t yet seen doomers engage seriously with the argument about capabilities. I agree with everything you said here, and to me these arguments are obviously right.
Any source you would recommend to learn more about the specific practices of Mormons you are referring to?
The Babbage example is the perfect one. Thank you, I will use it.
This would clearly put my point in a different place from the doomers
I would also place myself in the upper-right quadrant, close to the doomers, but I am not one of them.
The reason is that the exact meaning of “tractable for an SI” is not very clear to me. I do think that nanotechnology/biotechnology can progress enormously with SI, but the problem is not only developing the required knowledge; it is also creating the economic conditions that make these technologies possible: building the factories, making new machines, etc. For example, nowadays, in spite of the massive worldwide demand for microchips, there are very few factories (and for some specific technologies the number of factories is n = 1). Will we get there eventually? Yes. But not at the speed that EY fears.
I think you summarised my position pretty well in this paragraph:
“I think another common view on LW is that many things are probably possible in principle, but would require potentially large amounts of time, data, resources, etc. to accomplish, which might make some tasks intractable, if not impossible, even for a superintelligence.”
So I do think that EY believes in “magic” (even more after reading his tweet), but some people might not like the term, and I understand that.
In my case, using the word magic does not refer only to breaking the laws of physics. Magic might also refer to someone who holds such a simplified model of the world that they think you can build, in a matter of days, all those factories, machines, and working nanotechnology (on the first try), then successfully deploy them everywhere, killing everyone; that we will get to that point in a matter of days; AND that there won’t be any other SI that could work to prevent those scenarios. I don’t think I am misrepresenting EY’s point of view here; correct me otherwise.
If someone believed that a good group of engineers working for one week on a spacecraft design could successfully land it, 30 years later, on an asteroid close to Proxima Centauri, would you call it magical thinking? I would. There is nothing beyond the realm of physics here! But it assumes so many things and is so stupidly optimistic that I would simply dismiss it as nonsense.
I agree with this take, but do those plans exist, even in theory?
This is fantastic. Is there anything remotely like this available for Discord?
I don’t see how that implies that everyone dies.
It’s like saying weapons are dangerous; imagine what would happen if they fell into the wrong hands. Well, it does happen, and sometimes that has bad consequences, but there is no logical connection between that and everyone dying, which is what doom means. Do you want to argue that LLMs are dangerous? Fine. No problem with that. But doom is not that.
This thing?
https://www.scientificamerican.com/article/what-is-the-memory-capacity/