I have found Haase’s thesis online. Would it be irresponsible of me to post the link here? (It is not actually hard to find.)
ETA: How concerned should we be that DARPA is going full steam ahead for strong AI? Perhaps not very much, given the failure of at least two of their projects along these lines:
High Yield Cognitive Systems. The Wikipedia article (itself defunct) includes the grandiose claim that it failed because human-level AI was not ambitious enough.
Biologically-Inspired Cognitive Architectures. Abandoned.
Physical intelligence. Current.
There are a number of DARPA and IARPA projects we pay attention to, but I’d largely agree that their approaches and basic organization makes them much less worrying.
They tend towards large, bureaucratically hamstrung projects like PAL, which the last time I looked included work and funding for teams at seven different universities. Or they suffer from extremely narrow focus, like their intelligent communication initiatives, which went from adaptive routing via deep introspection of multimedia communication and intelligent networks to just software radios and error correction.
They’re worth keeping an eye on, mostly because they have the money to fund any number of approaches, often over long periods. But the biggest danger isn’t their funded, stated goals; it’s the possibility of someone going off-target and working on generic AI in the hope of increasing their funding or scope at the next evaluation, which could be a year or more away.