STEM+ AI will exist by the year 2035 but not by the year 2100 (human scientists will improve significantly thanks to coherent blended volition, aka ⿻Plurality).
Are you saying that STEM+ AI won’t exist in 2100 because by then human scientists will have become super good, such that the bar for STEM+ AI (“better at STEM research than the best human scientists”) will have gone up?
If this is your view, it sounds extremely wild to me; it seems like humans would basically just slow the AIs down. This seems maybe plausible if it were mandated by law, i.e. “You aren’t allowed to build powerful STEM+ AI, although you are allowed to build human/AI cyborgs”.
Yes, that, and a further focus on assistive AI systems that excel at connecting humans — I believe this is a natural outcome of the original CBV idea.
Why do you think the persistent medical issues (brain swelling, increased risks of various forms of early death) from brain implants will be solved pre-singularity (i.e., before ASI)? Gene hacks exhibit the same issues. To me these problems look unsolvable, in that getting from “it’s safe 90-99% of the time, 1% uh oh” to “it’s always safe; no matter what goes wrong, we can fix it” requires superintelligent medicine, because you’re dealing with billions or more permutations of patient genetics and rare cascading events.
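(To make the scale objection concrete, a back-of-the-envelope sketch; the failure rate and deployment size below are purely illustrative assumptions, not measured implant statistics:)

```python
# Back-of-the-envelope: why "safe 99% of the time" fails at population scale.
# Both numbers are illustrative assumptions, not measured implant statistics.
p_fail = 0.01             # assumed per-patient chance of a serious complication
patients = 1_000_000_000  # assumed billion-scale deployment

print(f"Expected serious complications: {p_fail * patients:,.0f}")  # 10,000,000

# The chance that *every* patient is fine, (1 - p)^N, is astronomically small:
print((1 - p_fail) ** patients)  # underflows to 0.0
```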
Safer than implants is to connect at scale “telepathically”, leveraging only full sensory bandwidth and much better coordination arrangements. That is the ↗️ direction of the depth-breadth spectrum here.
How do you propose the hardware for this would work? I thought you needed wires into the outer regions of the brain, with enough resolution to send/receive from ~1-10 target axons at a time.
Something like a lightweight version of the off-the-shelf Vision Pro will do. Just as nonverbal cues can transmit more effectively with codec avatars, post-symbolic communication can approach telepathy with good-enough mental models facilitated by AI (not necessarily ASI).
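(For a sense of why “full sensory bandwidth” leaves so much headroom over symbolic channels, a rough comparison; both figures are order-of-magnitude literature estimates, assumed here purely for illustration:)

```python
# Order-of-magnitude gap between symbolic and full-sensory channels.
# Both figures are rough published estimates, assumed here for illustration:
speech_bps = 39          # ~39 bit/s estimated information rate of speech
retina_bps = 10_000_000  # ~1e7 bit/s estimated output of the human retina

print(f"Sensory vs. symbolic bandwidth: ~{retina_bps / speech_bps:,.0f}x")
# ~256,410x of headroom for post-symbolic communication to exploit.
```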
That sounds kinda like in-person meetings. And you have the same issue as with those: revealing information you didn’t intend to disclose, plus the issues that arise when the parties’ incentives aren’t aligned.
Yes. The basic assumption (of my current day job) is that good-enough contextual integrity and continuous incentive alignment are solvable well within the slow takeoff we are currently in.
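(A toy sketch of what a contextual-integrity check could look like, in the spirit of Nissenbaum’s framework; the fields and norms below are illustrative, not any real product’s API:)

```python
# Toy contextual-integrity check: an information flow is permitted only if
# it matches a norm registered for its social context. The fields and norms
# here are illustrative assumptions, not a real system's policy set.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str      # role of the party sending the information
    recipient: str   # role of the party receiving it
    info_type: str   # kind of information being transmitted
    context: str     # social context governing the norm

NORMS = {  # (sender, recipient, info_type, context) tuples deemed appropriate
    ("patient", "doctor", "health_record", "clinic"),
    ("colleague", "colleague", "project_notes", "workplace"),
}

def flow_permitted(f: Flow) -> bool:
    return (f.sender, f.recipient, f.info_type, f.context) in NORMS

print(flow_permitted(Flow("patient", "doctor", "health_record", "clinic")))      # True
print(flow_permitted(Flow("patient", "advertiser", "health_record", "clinic")))  # False
```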
Fair. You think the takeoff is rate-limited by compute, which is being produced at a slowly accelerating rate? (Nvidia has increased H100 run rate severalfold, AMD dropped their competitor today, etc.)
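(A minimal sketch of what “slowly accelerating” could mean, versus plain exponential growth; the rates are made-up parameters, not actual fab or run-rate data:)

```python
# Illustrative contrast between fixed-exponential compute growth and a
# "slowly accelerating" rate, where the growth rate itself drifts up yearly.
# All parameters are made up, not actual fab or run-rate data.
years = 10
fixed_rate = 0.50  # assumed 50%/yr growth, held constant
accel = 0.05       # assumed +5 percentage points added to the rate per year

fixed = accelerating = 1.0
rate = fixed_rate
for _ in range(years):
    fixed *= 1 + fixed_rate
    accelerating *= 1 + rate
    rate += accel

print(f"Fixed 50%/yr over {years}y:        {fixed:.0f}x")         # ~58x
print(f"Slowly accelerating over {years}y: {accelerating:.0f}x")  # ~225x
```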