The “dopamine drip” argument does not itself make that assumption, even if some ways of presenting it do.
Loosemore hasn’t designed actually-intelligent AIs, any more than Yudkowsky has. In fact, I don’t see any sign that he’s designed AIs of any sort, any more than Yudkowsky has. Both of them are armchair theorists with abstract ideas about how AI ought or ought not to work. Am I missing something? Has Loosemore produced any actual things that could reasonably be called AIs?
No one was dismissing Loosemore’s argument without addressing it. Yudkowsky dismissed Loosemore after having argued with him about AI for years.
I don’t know what your last paragraph means. I mean, connotationally it’s clear enough: it means “boo, Yudkowsky and his pals are dilettantes who don’t know anything and haven’t done anything valuable”. But beyond that I can’t make enough sense of it to engage with it.
“If pure armchair reasoning works …”—what does that actually mean? Any sort of reasoning can work or not work. Reasoning that’s done from an armchair (so to speak) has some characteristic failure modes, but it doesn’t always fail.
“Why would it work?”—what does that actually mean? It works if Yudkowsky’s argument is sound, and you can’t tell that by looking at whether he’s sitting in an armchair; it depends on whether its (explicit and implicit) premises are true and whether the logic holds. Loosemore says there’s an implicit premise along the lines of “AI systems will have such-and-such structure”, which he claims is false. I say no one really knows much about the structure of actual human-level-or-better AI, because no one is close to building one yet; I don’t see where Yudkowsky’s argument actually assumes what Loosemore says it does; and Loosemore’s counterargument amounts to “any human-or-better AI will have to work the way I want it to work, and that’s just obvious”, which isn’t obvious.
“There’s never been a proof of that”—a proof of what, exactly? A proof that armchair reasoning works? (Again, what would that even mean? Some armchair reasoning works, some doesn’t.)
“Just a reluctance to discuss it”—seems to me there’s been a fair bit of discussion of Loosemore’s claims on LW, including in the very discussion where Yudkowsky called him an idiot. And, as I understand it, there was a fair bit of discussion between Yudkowsky and Loosemore earlier, but by the time of the LW discussion Yudkowsky had decided Loosemore wasn’t worth arguing with. This doesn’t look to me like a “reluctance to discuss” in any useful sense. Yudkowsky discussed Loosemore’s ideas with Loosemore for a while and got fed up with doing so. Other LW people discussed Loosemore’s ideas (with Loosemore and, I think, with one another) and didn’t get particularly fed up. What exactly is the problem here, other than that Yudkowsky was rude?