Here’s a riddle: A woman falls in love with a man at her mother’s funeral, but forgets to get his contact info and can’t get it from any of her acquaintances. How could she find him again? The answer is to kill her father, in the hope that the man will come to the funeral.
It reminds me of [security mindset](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html), in which thinking like an attacker exposes leaky abstractions and unfounded assumptions, something that is also characteristic of being agentic and “just doing things.”
In fact, Claude 3 Opus is still available.
Is Judaism not also based around disputation of texts?
I pretty much agree with this. I just posted this as a way of illustrating how simulacrum stages could be generalized to be about more than just signalling and language. In a way, even stocks are stage 4 since they cash out in currency, so something can be one stage in one respect and a different stage in another.
Simulacrum stages as various kinds of assets:
Stage 1: Most assets. Apples are valued for their taste and nutrition. Stocks of apple-farming companies are claims to profit which comes from providing apples, which are valued. Shares in prediction markets eventually cash out.
Stage 2: Assets held because the holder expects others to believe, perhaps erroneously, that their stage 1 value has increased. Includes Ponzi schemes and Enron stock/options.
Stage 3: Manifold Market stocks and some memecoins, which are associated with a thing, person, or concept that serves as a Schelling point for determining their value. One only expects their value to correlate with the associated concept because everyone else thinks it will correlate and buys and sells accordingly.
Stage 4: Cryptocurrency and assets of pure speculation, where the price rises only because everyone expects it to rise and buys accordingly, and mutatis mutandis for falling. Also includes fiat currency, which is valuable because everyone agrees to use it as a medium of exchange.
Game theory of simulacrum stages:
If everyone else is truthful, lying wins.
If everyone else is lying, not taking them seriously wins.
If everyone else is not taking anyone else seriously, there is no need to pretend.
I made a Manifold market for how many pre-orders there will be!
I’m generally confused by the notion that Buddhism entails suppressing one’s emotions. Stoicism maybe, but Buddhism?
Buddhism is about what to do if one has no option but to feel one’s emotions.
How is that mutually exclusive with (1)?
Am I correct in assuming you don’t think one should give the money in the counterfactual mugging?
I don’t know what the first part of your comment is trying to say. I agree that counterfactual mugging isn’t a thing that happens. That’s why it’s called a thought experiment.
I’m not quite sure what the last paragraph is trying to say either. It sounds somewhat similar to a counter-argument I came up with (which I think is pretty decisive), but I can’t be certain what you actually meant. In any case, there is the obvious counter-counter-argument that in the counterfactual mugging, the agents in the heads branch and the tails branch are not quite identical either: one has seen the coin land on heads and the other has seen it land on tails.
I came up with an argument for alignment by default.
In the counterfactual mugging scenario, a rational agent gives the money, even though they never see themselves benefitting from it. Before the coin flip, the agent would want to self-modify into someone who gives the money in order to maximize expected value; therefore the only reflectively stable option is to give the money.
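To make the expected-value step concrete, here is a minimal sketch; the $100 payment and $10,000 reward are the payoffs from the usual statement of the thought experiment, not numbers given above, so treat them as assumptions.

```python
# Counterfactual mugging, evaluated before the coin flip.
# Assumed payoffs (standard formulation): on tails you are asked to pay $100;
# on heads you receive $10,000 only if you are the kind of agent that pays.

def expected_value(pays_on_tails: bool) -> float:
    p_heads = 0.5
    heads_payoff = 10_000 if pays_on_tails else 0
    tails_payoff = -100 if pays_on_tails else 0
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

print(expected_value(pays_on_tails=True))   # 4950.0
print(expected_value(pays_on_tails=False))  # 0.0
```

Ex ante, the paying policy dominates, which is why the agent would self-modify toward it.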
Now imagine that instead of a coin flip, it’s being born as one of two people: Alice, who values not being murdered at 100 utils, and Bob, who would get 1 util from murdering Alice. As with the counterfactual mugging, before you’re born you’d rationally want to self-modify to not murder Alice, in order to maximize expected value.
What you end up with is basically morality (or at least it is the only rational choice regardless of your morality), so we should expect sufficiently intelligent agents to act morally.
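The same ex ante calculation for the Alice/Bob version, assuming a 50/50 chance of being born as either (the utilities are the ones given above):

```python
# Compare the two policies before knowing whether you will be Alice or Bob.
# Utilities from the example: Alice gets 100 utils from not being murdered,
# Bob gets 1 util from murdering her.

def expected_utils(policy_murders: bool) -> float:
    p_alice = 0.5  # assumed even odds of being born as either person
    alice_utils = 0 if policy_murders else 100
    bob_utils = 1 if policy_murders else 0
    return p_alice * alice_utils + (1 - p_alice) * bob_utils

print(expected_utils(policy_murders=False))  # 50.0
print(expected_utils(policy_murders=True))   # 0.5
```

Under these numbers the non-murder policy wins by a wide margin, which is the sense in which the ex ante view recovers something that looks like morality.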
Sounds like synesthesia
I fear that, while it might be a good idea to discourage LSD, it would make things even worse to discourage transitioning.
Highly Advanced Epistemology 101?
Probably doesn’t change much, but janus’ Claude-generated comment was the first mention of Claude acting like a base model on LW.
It ought to be a top-level post on the EA forum as well.
Well, that’s because it’s meant to be quantifying over linear equations: the variables aren’t meant to be replaced, but the coefficients are.
i is often used as an index in math, similar to how it is used as an index in for loops.
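For instance, the same i plays the same role in a summation written out with indices and in the corresponding loop; a toy illustration (the list xs is made up):

```python
xs = [3, 1, 4, 1, 5]  # made-up example values

# "x_1 + x_2 + ... + x_n" with index i, written as a for loop:
total = 0
for i in range(len(xs)):
    total += xs[i]

print(total)  # 14
```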
Confused about the disagreements. Are they about the AI output itself, or just the general idea of an AI risk chatbot?