I was just curious and wanted to give you the occasion to expand your viewpoint. I didn’t downvote your comment btw.
In what ways?
My initial reaction to their arrival was “now this is dumb”. It just felt too different from the rest, and too unlikely to be taken seriously. But in hindsight, the suddenness and unlikelihood of their arrival work well with the final twist. It’s a nice dark comedic ending, and it puts the story in a larger perspective.
I think the bigger difference between humans and chimps is the high prosociality of humans. This is what allowed humans to evolve complex cultures that now carry a large part of our knowledge and intuitions. And the lack of that prosociality is the biggest obstacle to teaching chimps math.
I think I already replied to this when I wrote:
I think all the methods that aim at forcing the Gatekeeper to disconnect are against the spirit of the experiment.
I just don’t see how, in a real-life situation, disconnecting would equate to freeing the AI. The rule is artificially added to prevent cheap strategies from the Gatekeeper. In return, there’s nothing wrong with adding rules to prevent cheap strategies from the AI.
But econ growth does not necessarily mean better lives on average if there are also more humans to feed and shelter. In the current context, if you want more ideas, you’d have a better ROI by investing in education.
Unless humanity destroys itself first, something like Horizon Worlds will inevitably become a massive success. A digital world is better than the physical world because it lets us override the laws of physics. In a digital world, we can duplicate items at will, cover massive distances instantaneously, make crime literally impossible, and much, much more. A digital world is to the real world as Microsoft Word is to a sheet of paper. The digital version has too many advantages to count.
Either there will be limitations or there won’t. No limitations means that you can never be sure that someone in front of you is paying attention to you; your appearance indicates nothing but your whim of the moment; you cannot be useful to others by providing something that they can’t get by themselves (art? AIs can make art). My first impression is that it will be very hard to build trust and intimacy in this environment. I expect loneliness and depression to rise as this technology is adopted.
But there will probably be limitations. Except that while in our world the limitations are imposed impartially by nature, in the Metaverse they will be decided by a private company and will probably enforce a plutocratic class system.
I see a flaw in the Tuxedage ruleset. The Gatekeeper has to stay engaged throughout the experiment, but the AI doesn’t. So the AI can bore the Gatekeeper to death by replying at random intervals. If I had to stare at a blank screen for 30 minutes waiting for a reply, I would concede.
Alternatively, the AI could just drown the Gatekeeper under a flurry of insults, graphic descriptions of violent/sexual nature, vacuous gossip, or a mix of these for the whole duration of the experiment. I think all the methods that aim at forcing the Gatekeeper to disconnect are against the spirit of the experiment.
I also see that the “AI player” provides all elements of the background. But the AI can also lie. There should be a way to separate the words of the AI player, who establishes true facts about the setting, from the words of the AI, who is allowed to lie.
I’m interested, conditional on these issues being solved.
It comes with a cultural relativism claim: the morality of a culture isn’t wrong, just in conflict with your own morals. And this is also probably right.
How can this work? Cultures change. So which is morally right, the culture before the change, or the culture after the change?
I guess a reply could be “Before the change, the culture before the change is right. After the change, the culture after the change is right.” But in this view, “being morally right” carries no information. We cannot assess whether a culture deserves to be changed based on this view.
Thanks everyone :)
Initially, I was expecting a “no”, but being denied a reply is arguably a stronger rejection experience.
Finally, Willy finished his makeshift guide rope and lowered it to the rescuers.
Finally, Toni finished his makeshift guide rope and lowered it to the rescuers.
So, Evie Cotrell, could you help me practice being rejected?
I can’t help but notice that if, for you, “nothing else could have happened than what happened”, then your definition of “could have happened” is so narrow as to become trivial.
Rather, I think that by “X could have happened in situation Y”, laypeople mean something like: “even with the knowledge of hindsight, in a situation that looks identical to situation Y in the parameters that matter, I could not exclude X happening”.