Safer than implants is to connect at scale “telepathically,” leveraging only full sensory bandwidth and much better coordination arrangements. That is the ↗️ direction on the depth-breadth spectrum here.
How do you propose the hardware that does this works? I thought you needed wires to the outer regions of the brain with enough resolution to send/receive from ~1-10 target axons at a time.
Something like a lightweight version of the off-the-shelf Vision Pro will do. Just as nonverbal cues transmit more effectively with codec avatars, post-symbolic communication can approach telepathy with good-enough mental models facilitated by AI (not necessarily ASI).
That sounds kinda like in-person meetings. And you have the same issues as with those: revealing information you didn’t intend to disclose, and the problems that arise when the parties’ incentives aren’t aligned.
Yes. The basic assumption (of my current day job) is that good-enough contextual integrity and continuous incentive alignment are solvable well within the slow takeoff we are currently in.
Fair. You think the takeoff is rate-limited by compute, which is being produced at a slowly accelerating rate? (Nvidia has increased its H100 run rate severalfold, AMD dropped their competitor today, etc.)