Having only read the headline, I came to this thread with the intention of saying that I agree with much of what he said, up to and potentially including withholding further funds from SI.
But then I read the post and found it’s asking a different but related question, paraphrased as, “Why doesn’t SI just lie down and die now that everyone knows none of their arguments have a basis in reality?”, which I’m inclined to disagree with.
No, what I complained about is the lack of work on SI’s part to actually check whether it is correct, knowing that a negative result would mean it has to dissolve. Big difference. SI should play Russian roulette now (with reality and logic as the revolver), since it is sure the bullet is not in the chamber, and accept that it may die if it turns out to have been wrong.
So you think they should work on papers, posts, and formal arguments?
I think they should work more on ‘dissolving if their work is counter-productive’, i.e. incorporate some self-evaluation/feedback which, if consistently negative, would lead to not asking for any more money. Not doing that makes them a scam scheme, plain and simple. (I do not care that you truly believe there is an invisible dragon in your garage if you never tried to, say, spread flour to see it, or otherwise check. Especially if you’re the one repackaging that dragon for popular consumption.)
What SI activity constitutes the ‘spreading flour’ step in your analogy?
I’m speaking of the kinds of feedback Holden spoke of. In this case, the belief in their own capabilities is the dragon.
Yes, I understand the analogy and how it applies to SI, except for the ‘spreading flour’ step where they would test those beliefs. What actions should they take to perform the test?
Well, for example, Eliezer could try to actually invent something technical, most likely fail (most people aren’t very good at inventing), and then cut down his confidence in his predictions about AI (and especially in his intuitions, because a dangerous AI is an incredibly clever inventor of improvements to itself, and you’d better be a good inventor yourself or your intuitions from internal self-observation aren’t worth much). On a more meta level, they can sit and think: how do we make sure we aren’t mistaken about AI? Where could our intuitions be coming from? Are we doing something useful, or have we created a system of irreducible abstractions? And so on. This should have been done well before Holden’s post.
edit: i.e. essentially, SI is doing a lot of symbol-manipulation-type activity to try to think about AI. Those symbols may represent irreducible, flawed concepts, in which case manipulating them won’t be of any use.