And when your surpassing creations find the answers you asked for, you can’t understand their analysis and you can’t verify their answers. You have to take their word on faith —
— Or you use information theory to flatten it for you, to squash the tesseract into two dimensions and the Klein bottle into three, to simplify reality and pray to whatever Gods survived the millennium that your honorable twisting of the truth hasn’t ruptured any of its load-bearing pylons. …
I’ve never convinced myself that we made the right choice. I can cite the usual justifications in my sleep, talk endlessly about the rotational topology of information and the irrelevance of semantic comprehension. But after all the words, I’m still not sure. I don’t know if anyone else is, either. Maybe it’s just some grand consensual con, marks and players all in league. We won’t admit that our creations are beyond us...
Maybe the Singularity happened years ago. We just don’t want to admit we were left behind.
-- Siri Keeton explains what a “synthesist” does in Blindsight by Peter Watts, pages 35–37
Blindsight is an amazingly Less Wrong book, with much discussion of epistemology and cognitive failures, starting with the title of the book. It is some of the hardest science fiction in existence, with a 22-page “Notes and References” section walking through 144 citations for the underlying science.
Pushing a related quote to a comment…
Pushing discussion to another comment…
This being Less Wrong, this might be the point where you bring up P vs. NP and the fact that solutions are often much easier to verify than to compute. But easier does not necessarily mean easy, or even within human cognitive capabilities. And if it does in whatever example comes to mind, just keep pushing to harder problems, until we need not only tools to solve the problem but also meta-tools to tell us what our tools are telling us. And you can keep pushing that meta. (Did I mention that Blindsight is a very Less Wrong book?)
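The verify-versus-compute asymmetry is easy to see in a toy NP problem like subset sum. Here is a minimal illustrative sketch (my own hypothetical example, not from the book): checking a claimed answer takes one linear pass, while finding one by brute force means searching exponentially many subsets.

```python
from itertools import combinations

def verify_certificate(nums, target, indices):
    """Verifying a claimed solution: one pass over the chosen indices, O(n)."""
    return sum(nums[i] for i in indices) == target

def solve(nums, target):
    """Finding a solution: brute force over all 2^n subsets of indices."""
    for r in range(len(nums) + 1):
        for idx in combinations(range(len(nums)), r):
            if verify_certificate(nums, target, idx):
                return idx
    return None

nums = [3, 34, 4, 12, 5, 2]
witness = solve(nums, 9)                     # exponential search: finds (2, 4), i.e. 4 + 5
print(verify_certificate(nums, 9, witness))  # cheap independent check: True
```

The point of the quote, though, is that for hard enough problems even the verification step can outgrow us, and then we are back to trusting the tool that did the checking.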
We trust our tools because we trust the process we used to develop our tools, and we trust the previous generation of tools used to develop those tools and processes, and we trust… At some point, you look at the edifice of knowledge and realize your life depends on a lot of interdependencies, and that can be scary.
And then I trust Google Maps to get me most places, because I know it has a much better direction sense than me and it knows things like construction and traffic conditions.
“If you could second-guess a vampire, you wouldn’t need a vampire.”
-- an aphorism in Blindsight by Peter Watts, page 227
In Blindsight, a “vampire” is a predatory, sociopathic genius built through genetic engineering. They have human brain mass but use it differently: all the brain power we spend on self-awareness is channeled into raw processing power instead. The mission leader in Blindsight is a vampire, because he is more intelligent and able to make dispassionate decisions; but how do you check whether your vampire is right, or even still on your side? Like Quirrelmort, they are always playing at least one level higher than you.
The synthesist quote is the first time Blindsight brings up the problem of what to do when you build smarter-than-human AI. The vampire quote approaches it from a different angle, with a smarter-than-human biological AI. Vampires present a trade-off: they cannot rewrite their source code, so they cannot have a hard takeoff, but you know they are less than friendly AI.
(If you know what is wrong with the above, please ROT13 your spoilers.)