Humans may create human-level artificial intelligence in this century.1
This sentence says so little. “Many experts believe humans may create human-level artificial intelligence as early as this century.”
Shortly thereafter, we may see an “intelligence explosion” — a chain of events by which human-level AI
Consider defining “AI” here, at its first usage.
As Chalmers (2010) notes, the singularity is of great practical interest:
This is the first time in this snippet that a singularity is mentioned, so it should be defined (unless the entire book is about the subject, etc.). Also, “a singularity” sounds better than “the singularity.”
“As Chalmers (2010) notes, a singularity caused by an intelligence explosion would be of great practical interest:
“the singularity is of great practical interest,” is dark arts; better: “the singularity would be of great practical interest,” or “‘the singularity’ is of great practical interest.”
long-standing philosophical puzzles
This is dark arts. “Puzzles” implicitly have solutions (not that I doubt these problems have them).
“In science, the development of AI will require progress in several of mankind’s grandest scientific projects, including reverse-engineering the brain (Schierwagen 2011)”
No one knows what science doesn’t know. Maybe it won’t. This is well put: “while the development of AI safety mechanisms may require progress...”
“In science, experts believe the development of AI would require progress in several of mankind’s grandest scientific projects, including reverse-engineering the brain (Schierwagen 2011)”
and developing artificial minds (Nilsson 2010)
This seems like a tautology. What is meant specifically here?
and the cognitive science of human values
or the cognitive science of human values
most science would be done
This is a complicated, unspecific claim. Would it be correct to say “most design of scientific experiments,” or something specific?
Our aim, then, is not to provide detailed arguments but only to sketch the issues involved, pointing the reader to authors who have analyzed each component in more detail.
Unwieldy.
our discussion of them must be permitted to begin
“our discussion of them begins”
Because the singularity is associated
“Because ‘the singularity’ is associated”
In doing so,
“Were we to do so,”
For example
It seems like “example” refers back to the fallacy, as if you were giving an example of committing the fallacy.
“Therefore, among other things”
rather than decelerate
By not saying “rather than decelerate or remain unchanged,” or similar, you weakly imply you are assuming it will decelerate. If that implication is deliberate, it is not clear that the intent is to invoke LCPW on yourself.
is one of them.
“is one such convergent outcome.”
are not objections to the singularity.
“a singularity.”
human-level AI (hereafter, “AI”)
“human-level general artificial intelligence (hereafter, ‘AI’)”
consider which actions we can take now to shape our future.
I don’t like this because of its literal meaninglessness. I can see why others might not mind. Most actions shape the relevant future in some way, what you really mean is “consider which actions we should take now to shape our future,” or “consider how actions taken now would shape our future.”
concerning whether we should expect the creation of AI within a century.
“concerning when we should expect the first creation of AI.”
is not an AI but merely a “narrow AI”
“is not an AI but merely a ‘narrow artificial intelligence,’” recall “human-level AI (hereafter, ‘AI’)”
By “AI,” we refer to
There should probably be only one defining sentence. At the very least this should be changed to “By ‘human-level artificial intelligence,’ we refer to,” since you already defined AI in terms of human-level artificial intelligence.
There are many types of AI.
“There are many different proposed methods of creating AIs.”
Because whole brain emulation will rely mostly on scaling up existing technologies like microscopy and large-scale cortical simulation, WBE may be largely an “engineering” problem, and thus more predictable than other kinds of AI.
Only the date after which it can be expected to have occurred without new technologies is much more predictable. The earliest it can happen isn’t too much more so.
“Because whole brain emulation seems possible by scaling up existing technologies like microscopy and large-scale cortical simulation, WBE may be largely an ‘engineering’ problem, and thus the latest date by which it can be expected to have occurred may be more predictable than for other kinds of AI.”
We do not know what it takes to build de novo AI. Because of this, we do not know what groundwork will be needed to understand general intelligence
This is only true for transparent AI. Recall we may create opaque AI before understanding general intelligence; if we did, we still might not know what groundwork would be needed.
overconfident of our predictions
“overconfident in” is more common and sounds slightly better.
So if you have a gut feeling about when digital intelligence will arrive, it is probably wrong.
I don’t like this sentence. Perhaps:
“In line with these studies, experts in fields related to AI disagree about when it might first be created [citations]. So if you have a strong opinion, even an expert and informed one, about when digital intelligence will or won’t arrive, bear in mind that most people with similar expertise and conviction are wrong.”
Should you plan as though...confident...Or is your estimate
These are not parallel. “Estimate” and “confident” are usually used to mean a raw, system 1 mental output.
You either will or will not...encourage WBE development...The outcomes of these choices will depend, among other things, on whether AI is created in the near future.
It is confusing to discuss the outcome of encouraging something as depending on whether or not it is created. Perhaps something like: “The future value of your current choices will depend, among other things, on whether or not AI is created in the near future. In the face of uncertainty, you must choose whether or not to encourage WBE development, support AI risk reduction, etc. Even personal decisions, such as how much to save for retirement, depend on what you expect the future will look like.”
If we can’t use our intuitions for prediction or defer to experts how might one estimate the time until AI?
You went from “we” to “one” within that sentence.
~~~~~~~~~~~~~~~
Why the focus on this century? Is this century a theme uniting the book this will be a chapter in?