Despite the source and tone, these comments make sense to me. It looks suspicious when an organization tries to change its name to a more euphemistic version of its old one, and for most people AI refers to narrow AI and not singularity-causing AI.
I don’t see this as an attempt to mislead. The Technological Singularity is currently a halo-tainted, sensationalist term inextricably tied to Kurzweil. The mission of SI is to mitigate the potential x-risk from a recursively self-improving intelligence, which is rather different from what Vinge and Kurzweil had in mind, and rather more mundane. While I am not sold on everything SI says or does, I can see how a name that better reflects what SI is actually about could be useful.
In which way? Recall the EY vs Hanson foom debate. The foom is a singularity brought about by an AI in a basement on commodity hardware, taking a couple of weeks to become extremely superhuman, and all the ‘work’ done here assumes something so strongly superhuman that you don’t need to be concerned with algorithmic complexity or anything else; only the goals matter (and the goals are somehow physical, like the number of paperclips or the number of staples). The idea is taken from Vinge, by the way.
If anything, the name Singularity Institute is already too broad, as the organization is only concerned with a particularly extreme form of technological singularity.
There’s a crucial difference between the strongly anti-social process of changing the string to maximize donations and a reasonably good-willed name change. In a reasonable name change, first you redefine the goals and come up with some plan, then you make a name that reflects the goals (and what you are actually doing). E.g. you change your mind and decide to dedicate some of the work to something like self-driving car safety; then, in light of this broader focus, you come up with the new name “AI safety institute” or something similar. You keep what you’re actually planning to do in sight and you try to summarize it. In the anti-social process, you sit and model society’s response, you look at how society has responded, and you try to come up with the best string, typically hiding any specifics behind euphemisms because, ultimately, being specific lets people know more about you quicker.
They sit together in a circle and think: okay, we lost a donor because of the name, so let’s change the name, let’s come up with a good name. Never mind descriptiveness, never mind not trying to mislead, never mind that the cause of the loss was the name being descriptive. That’s called swindling people out of their money. Especially if you go ahead and try to interfere with how it is to be evaluated, to eliminate the possibility that ‘if the researchers are cranks we won’t get money, because the researchers will demonstrate themselves to be cranks’. If anyone asks me whether it’s worth donating there, I’ll say no, it’s just a bunch of sociopaths who sat in a circle and thought about how to improve their appearance, but haven’t done anything technical that they could have failed at if they lacked technical ability, haven’t even sat down and worked on something technical to improve their appearance. I won’t even say ‘it’s probably cranks’. It’s beyond honest crankery now.
edit: or maybe it is actually a good thing. Call yourselves the “Centre for AI safety”; then it is easily demonstrated that you don’t work on self-driving car safety or anything of that kind, ergo, a bunch of fraudsters.
You currently have 290 posts on LessWrong and zero (0) total karma.
This is a poor way to accomplish your goal.
Negative total karma scores are displayed as 0.
Yes, I know; he’s −51 for the last 30 days.
I don’t care about the opinion of the bunch that is here on LW. Also, that goal was specific to that particular thread. At this point I am expressing my opinion of this whole anti-social activity of sitting, looking at how a string was processed, and making another string so as to maximize donations (and the general enterprise of looking at “why people think we’re cranks” and changing just the appearance). Centre for AI safety, huh. No one there has ever done anything that doesn’t rely on the extreme singularity scenario (FOOM), and yet it’s a centre for AI safety, something that, going by the name, ought to work on the safety of self-driving cars. (You may not care about my opinion, which is totally fine.)
I suppose it’s too much to ask that a moderator get involved with someone who is clearly here to vent rather than provide constructive criticism.
And do you think this “activity of sitting, looking at how a string was processed, and making another string as to maximize donations” works to increase donations?
I dunno if it works; it ought to work if you are rational, but it can easily backfire in many ways. It is unfriendly to society at large in much the same way a paperclip maximizer is unfriendly, sans the power.
Can a moderator please deal with private_messaging, who is clearly here to vent rather than provide constructive criticism?
Others: please do not feed the trolls.