I predict that:
- consciousness will turn out to be explainable with manageable complexity (by 2040, 90%)
- the explanation will match observed behavior and allow decent predictions (e.g., which diseases or drugs will affect it)
- many people will dispute this, coming up with edge cases and/or placing extra demands on what consciousness is supposed to be (but of course that wouldn't change the predictive power of the theory)
- the theory will allow engineers to build systems that are conscious in a recognizable way (by 2050, 85%)
- many people will dispute this and claim those systems are zombies
- some of the big systems will be ruled moral persons by at least a few courts (60%)
- the engineers will optimize the systems, allowing smaller and smaller systems to be conscious in this sense, to the point where they do little else besides being conscious (70%)
- people will do all kinds of crazy stuff with this, maybe embedding minimal such systems in devices to prevent them from being turned off.