I think the key points (or misunderstandings) of the post can be seen in these quotes:
OK, so what about connecting an IBM Watson like understanding of the world to a Roomba or a Baxter? No one is really trying as the technical difficulties are enormous, poorly understood, and the benefits are not yet known.
and
Expecting more computation to just magically get to intentional intelligences, who understand the world is similarly unlikely. And, there is a further category error that we may be making here. That is the intellectual shortcut that says computation and brains are the same thing. Maybe, but perhaps not.
These quotes seem to indicate that Brooks doesn’t look past ‘linear’ scaling and sees composition effects as far away.
I say relax everybody. If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. And they probably won’t really be aware of us in any serious way. Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time.
Apparently he extrapolates his own specialty into the future.