I would guess this is somewhat similar to having a network of friends, though a polycule is bound to be smaller. And I can totally imagine being emotionally, romantically, and sexually attached to one set of partners while sharing opinions with a slightly different set.
ProgramCrafter
Inner Optimization Mechanisms in Neural Nets
I believe the Focus Your Uncertainty essay from the Sequences touches on this topic: at the very least, math is useful for splitting a limited amount of resources.
Testing status: I’ve only dated once, because I’m moving to another city to enter university.
The girl I dated was quite pretty but not the most beautiful around. Luckily, I learnt early on that she had read HP:MoR, so I didn’t even try to hyperbolize and say she was the most beautiful (both of us would understand that it’s false); instead, I smiled at appropriate moments.
Another non-verbal sign is not dismissing parts of the dialogue. When my girlfriend suggested a few animes to watch and I doubted I would like them, I still visibly wrote them down, while avoiding promising that I would actually watch them. (I ended up liking one and said so afterwards!)
I have quite a specific perspective on talking, because I notice that I try to understand others’ perspectives and internal belief structures when they don’t understand something. Roughly once a month, one of my classmates would ask a strange-looking question, and the teacher would answer something similar but not the question (like “Why does this approximation work?” - “Here’s how you do it...” - “I’ve understood how to calculate it, but why is it the answer?”), and afterwards I try to patch the underlying belief structure.
Speaking of next steps, I’d love to see a transformer trained to manipulate those states (given a target state and the interactor’s tokens, it would emit its own tokens to interleave)! I believe this would look even cooler, and may be useful for detecting whether an AI has started to manipulate someone.
I’d say that it doesn’t carve reality at the same places as my understanding does. I neither upvoted nor downvoted the post, and had to consciously remember that I have that option at all.
I think that language usage can be represented as a vector in a basis of two modes:
“The Fiat”: words really have meanings, and the goal of communication is to transmit information (including requests, promises, etc.!),
“Non-Fiat”: you simply attempt to say a phrase that makes other people do something that furthers your goals, like identifying with a social group (see Belief as Attire) or making non-genuine promises.
(Note 1: if someone asked me what mode I commonly use, I would think. Think hard.)
(Note 2: I’ve found a whole tag about motivations which produce words: https://www.lesswrong.com/tag/simulacrum-levels! I had lost track of it for some time before writing this comment.)
In life, I try to use fewer hyperboles, replacing them with non-verbal signs that don’t carry the implication of either “the most beautiful” or “more beautiful than everyone around”.
Maybe vehicles would need to carry some shaped charges to cut a hole in the tube in case of emergency.
That would likely create sparks, and once the tube has been cut open, the hydrogen is going to explode.
*preferably not the last state, but one where the person felt normal.
I believe that’s right! Though, if a person can be reconstructed from N bits of information, and the dead body retains K << N of them, then we need to save the remaining N-K bits (or maybe all N, for robustness) somewhere else.
It’s an interesting question how many bits can actually be inferred from a person’s social-network trace.
Continuing to turn posts into songs! I believe I’m getting a bit better, mainly at rapid lyrics-writing; I would appreciate pointers on how to improve further.
https://suno.com/song/ef734c80-bce6-4825-9906-fc226c1ea5b4 (based on post Don’t teach people how to reach the top of a hill)
https://suno.com/song/c5e21df5-4df7-4481-bbe3-d0b7c1227896 (based on post Effectively Handling Disagreements—Introducing a New Workshop)
Also, if someone is against me creating a musical form of their post, please say so! I don’t know beforehand which texts will seem easily convertible to me.
This is especially concerning if we, as good Bayesians, refuse to assign a zero probability to any event, including zero utility ones.
I feel that since people don’t ultimately care about money, all-nonzero probabilities will make all events have nonzero utility as well.
Let’s solve this problem without referring to particular existing cases.
To start, we assume that the utility of A (the thief) monotonically decreases with time served; the utility of B monotonically increases with it, and increases by another constant if A gives up the lollipops.
Let’s graph what choices A has when A does not give up the lollipops.
We may notice that in this case B will simply throw A in jail for life. Well, what happens if A is willing to cooperate a bit?
The coordination result will not be below or to the left of the Pareto frontier, since otherwise it would be possible to do better than that;
The coordination result will not be below or to the left of the no-coordination result, since otherwise one or both parties would be acting irrationally.
After applying these conditions, only a piece of the curve “A gives up the lollipops” remains. The exact bargaining point can then be found using ROSE values, but we can already conclude that A will likely be convicted for a long time, though not for life.
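The reasoning above can be sketched with a toy numeric model. The linear utility functions, all the constants, and the use of the Nash bargaining product (a simpler stand-in for ROSE values) are my own illustrative assumptions, not anything the argument specifies:

```python
# Toy model of the A/B bargaining situation described above.
# Assumptions (mine, for illustration): A's utility decreases linearly with
# years served and drops by a constant if A gives up the lollipops;
# B's utility increases with years served and gains a constant from them.
# The Nash bargaining product stands in for ROSE values as a selection rule.

MAX_YEARS = 50  # "jail for life" in this toy model

def utility_A(years, gives_up):
    return -years - (5 if gives_up else 0)

def utility_B(years, gives_up):
    return years + (20 if gives_up else 0)

# No-coordination point: A keeps the lollipops, B jails A for life.
d_A = utility_A(MAX_YEARS, False)
d_B = utility_B(MAX_YEARS, False)

# Keep only outcomes at least as good for both parties as no coordination.
candidates = [
    (years, gives_up)
    for years in range(MAX_YEARS + 1)
    for gives_up in (False, True)
    if utility_A(years, gives_up) >= d_A and utility_B(years, gives_up) >= d_B
]

# Pick the candidate maximizing the product of gains over the disagreement point.
best = max(
    candidates,
    key=lambda c: (utility_A(*c) - d_A) * (utility_B(*c) - d_B),
)
years, gives_up = best
print(years, gives_up)
```

With these numbers the model lands on A giving up the lollipops and serving a long but not lifelong sentence, matching the qualitative conclusion above.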
LessWrong’s AI-generated album was surprisingly nice and, even more importantly, pointed me to the song generator! (I had tried to find one a year ago and failed.)
So I’ve decided to try my hand at the quantum mechanics sequence. Here’s what I have so far: https://app.suno.ai/playlist/81b44910-a9df-43ce-9160-b062e5b080f8/. (10 songs generated, 3 selected; unfortunately not the best quality.)
I came across a poll about exchanging probability estimates with another rationalist: https://manifold.markets/1941159478/you-think-something-is-30-likely-bu?r=QW5U.
You think something is 30% likely but a friend thinks 70%. To what does that change your opinion?
I feel like there can be specially-constructed problems where the resulting probability is 0, but I haven’t been able to construct an example. Are there any?
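One natural pooling rule (my assumption here; the poll doesn’t fix one) treats each person’s estimate as independent evidence relative to a shared prior and multiplies odds. Under that rule, 30% and 70% around a 50% prior cancel out to 50%, and the pooled probability reaches 0 only when some input is exactly 0:

```python
def pool_log_odds(p1, p2, prior=0.5):
    """Pool two probability estimates by multiplying their odds,
    treating each as independent evidence relative to a shared prior.
    (The independence assumption is mine, not the poll's.)"""
    def odds(p):
        return p / (1 - p)
    o = odds(p1) * odds(p2) / odds(prior)
    return o / (1 + o)

# 30% and 70% are symmetric around the 50% prior, so they cancel:
print(pool_log_odds(0.30, 0.70))  # ~0.5 (up to float error)

# The pooled probability is 0 only if some input is exactly 0:
print(pool_log_odds(0.0, 0.70))  # 0.0
```

So under this particular rule the answer is “only with a hard 0 input”; other pooling rules (e.g. straight averaging) behave differently.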
User-inclination-guessing algorithms: registering a goal
On what basis can Alice assume
Not actually assume, but it’s certainly Bayesian evidence (should Bob have tried, he would likely have responded in another way).
Also, :smile: your own comment is fairly strong evidence that you haven’t yet read the Sequences (by the way, I recommend doing that). For instance, you could consider different ways of thinking and answer questions 1-4 from their perspectives, and that would be evidence on which way is better; though reality is still the ultimate judge of how each situation turns out.
I don’t think there is anything I can do educationally to better ensure they thrive as adults other than making sure I teach them practical/physical build and repair skills
I think one more thing could be useful; I’d call it “structural rise”: across many spheres of society, large projects are created by combining small parts, and the ways to combine them and to test robustness (for programs), stability (for organisations), beauty (for music), etc. seem pretty common across most areas, so I guess they can be learned separately.
I suppose a 3D whiteboard could be useful, as it allows connecting more relevant subjects to each node (re: the four-colour problem: countries on a plane can be coloured with 4 colours so that touching ones get different colours, while in space there is no such limit).
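To illustrate the contrast: every complete graph K_n can be embedded in 3D space without crossings, and K_n needs n colours, so no fixed palette suffices there. A minimal greedy-colouring sketch (my own illustration, not from the comment):

```python
def greedy_coloring(adjacency):
    """Assign each node the smallest colour not used by its already-coloured neighbours."""
    colors = {}
    for node in adjacency:
        used = {colors[nbr] for nbr in adjacency[node] if nbr in colors}
        color = 0
        while color in used:
            color += 1
        colors[node] = color
    return colors

def complete_graph(n):
    # Adjacency list of K_n: every node linked to every other node.
    return {i: [j for j in range(n) if j != i] for i in range(n)}

# K5 is not planar, but it embeds in 3D without crossings,
# and it needs 5 colours: no four-colour theorem in space.
colors_k5 = greedy_coloring(complete_graph(5))
print(len(set(colors_k5.values())))  # 5
```

The same run with `complete_graph(n)` needs n colours for any n, which is why no 3D analogue of the four-colour theorem exists.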
the best you can do is what you think is personally good
Only insofar as you’re an ideal optimizing agent with consistent values and full knowledge; otherwise, actions based on your own reasoning may end up worse than following social heuristics.
That there are no bugs when it comes to values. That you should care about exactly what you want to care about. That if you want to team up and save the world from AI or poverty or mortality, you can, but you don’t have to.
Locally invalid. Values can be terminal (what you care about) and instrumental, and for most people saving the world is actually instrumental.
There’s value in giving explicit permission to confused newcomers to not get trapped in moral chains, because it’s really easy to hurt yourself doing that.
I think that’s true, since memes can be harmful, but there is also value in reminding people that if more of them worked to ~~save~~ improve the world on average, things would be better, and often a simpler way is to do that yourself instead of pushing the meme+responsibility onto others.

Save the world if you want to, but please don’t if you don’t want to.
I’d continue that with “but please don’t destroy the world whichever option you choose, since that would interfere with my terminal goals and I’d care about your non-existence”.
I’d like to mention the explanation that ByteDance does not consider US dollars to have enough value. Given that China can’t use them to lobby for cancelling sanctions, for instance, US dollars aren’t equivalent to unspecialized optimization power for them, and might have little value.