Of the universal approximation theorem
Max TK
Memetic Judo #1: On Doomsday Prophets v.3
Memetic Judo #2: Incorporeal Switches and Levers Compendium
Memetic Judo #3: The Intelligence of Stochastic Parrots v.2
(retired article) AGI With Internet Access: Why we won’t stuff the genie back in its bottle.
Good point. I think I will add it later.
About point 1: I think you are right with that assumption, though I believe that many people repeat this argument without really having a stance on (or awareness of) brain physicalism. That’s why I didn’t hesitate to include it. Still, if you have a decent idea of how to improve this article for people who are sceptical of physicalism, I would like to add it.
About point 2: Yeah you might be right … a reference to OthelloGPT would make it more convincing—I will add it later!
Edit: Still, I believe that “mashup” isn’t even a strictly false characterization of concept composition. I think I might add a paragraph explicitly explaining that and how I think about it.
weakly suggested that more dimensions do reduce demon formation
This also makes a lot of sense intuitively: in higher dimensions it should become more difficult to construct walls (hills or barriers without holes). See the sketch below.
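To make that intuition a bit more concrete, here is a minimal Monte Carlo sketch (a toy model of my own, not taken from the original demon-formation experiments): model a “wall” as a fixed budget of random angular bumps on the sphere of directions, and measure what fraction of random directions slips past all of them as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def escape_fraction(dim, n_bumps=50, cos_width=0.9, n_rays=2000):
    """Fraction of random directions not blocked by any barrier bump.

    A "bump" is a spherical cap around a random unit direction; a ray
    counts as blocked if its cosine similarity to some bump center
    exceeds cos_width.
    """
    bumps = rng.normal(size=(n_bumps, dim))
    bumps /= np.linalg.norm(bumps, axis=1, keepdims=True)
    rays = rng.normal(size=(n_rays, dim))
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    blocked = (rays @ bumps.T > cos_width).any(axis=1)
    return 1.0 - blocked.mean()

for dim in (2, 4, 8, 16, 32):
    print(f"dim={dim:2d}  escape fraction = {escape_fraction(dim):.2f}")
```

With a fixed budget of bumps, the blocked fraction of directions collapses as the dimension grows (in this toy run it goes from roughly 0 in 2D to nearly 1 by 16D), which matches the intuition that walls without holes become much harder to build in higher dimensions.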
Good idea! I thought of this one: https://energyhistory.yale.edu/horse-and-mule-population-statistics/
Interesting insight. Sadly there isn’t much to be done against the beliefs of someone who is certain that God will save us.
Maybe the following: assuming the frame of a believer, the signs that AGI is a dangerous technology seem obvious on closer inspection. If God exists, we should assume that this is an intentional test he has placed in front of us. God has given us all the signs. God helps those who help themselves.
Maybe if it happens early, there is a chance that it manages to become an intelligent computer virus but is not intelligent enough to scale its capabilities further or to produce effective schemes likely to result in our complete destruction. I know I am grasping at straws at this point, but maybe it isn’t absolutely hopeless.
The result could be corrupted infrastructure and a cultural shock strong enough for people to burn down OpenAI’s headquarters (metaphorically speaking) and for AI-accelerating research to be internationally sanctioned.
In the past I have thought a lot about “early catastrophe scenarios”, and while I am not convinced, it seems to me that these might be the most survivable ones.
One very problematic aspect of this view that I would like to point out is that, in a sense, most ‘more aligned’ AGIs at an otherwise equal capability level are effectively ‘more tied down’ versions, so we should expect them to have a lower effective power level than a less aligned AGI with a shorter list of priorities.
If we imagine both as competing players in a strategy game, it seems that the latter has to follow fewer rules.
I think that’s not an implausible assumption.
However, this could mean that some of the things I described might still be too difficult for it to pull off successfully, so in the case of an early breakout, dealing with it might be slightly less hopeless.
Update: Because I want to include this helpful new paragraph in my article and I am unable to reach Will, I am now adding it anyway (it seems to me that this is in the spirit of what he intended). @Will: please message me if you object.
Lovely; may I add this to the article if I credit you as the author?
I think a significant part of the problem is not the LLM’s trouble distinguishing truth from fiction; rather, it is convincing it through your prompt that the output you want is the former and not the latter.
#parrotGang
My argument does not depend on the AI being able to survive inside a bot net. I mentioned several alternatives.
On How Yudkowsky Is Perceived by the Public
Over recent months I have gathered some experience as an AI safety activist. One of my takeaways is that many people I talk to do not understand Yudkowsky’s arguments very well.
I think this is mainly for two reasons:
1. A lot of his reasoning requires a kind of “mathematical intuition” that most people do not have. In my experience it is possible to make correct and convincing arguments that are easier to understand, or to invest more effort into explaining some of the more difficult ones.
2. He is used to a LessWrong lingo that sometimes gets in the way of communicating with the public.
Still, I am very grateful that he continues to address the public, and I believe it is probably a net positive. Over recent months the public AI-safety discourse has begun to snowball into something bigger, other charismatic people keep picking up the torch, and I think his contribution to these developments has probably been substantial.