I was going to respond with a biting “well then what the heck is the point of LW?” post, but I think you got the point:
I don’t think the philosophy is going to stay afloat for very long if its practitioners aren’t able to follow the technical details of what people are actually doing in the domain they’d like to philosophize about.
Frankly, without a willingness to educate oneself about implementation details, the philosophizing is pointless. Maybe this is a wake-up call for me to go find a better community :\
EDIT: Who created the StarDestroy AI mini-FAQ? Do we know their real-world identity?
I was hoping for more of a “study technical AI details and post about them here” approach, but whatever works. If you do find a better community, post a note here somewhere.
My goal is to enact a positive singularity. To that end I’m not convinced of the instrumentality of educating people on the interwebs, given other things I could be doing.
I had thought that a community with a tight focus on ‘friendly AGI’ would be interested in learning, and discussing how such an AGI might actually be constructed, or otherwise getting involved in some way. If not, I don’t think it’s worth my time to correct this mistake.
Michael Wilson, looks like.
Oh really? :-D