Personal website: https://outsidetheasylum.blog/ Feedback about me: https://www.admonymous.co/isaacking
Isaac King
Can DALL-E understand simple geometry?
Ah, so mortality almost always trends downwards except when it jumps species, at which point there can be a discontinuous jump upwards. That makes sense, thank you.
Why is it assumed that diseases evolve towards lower mortality? Every new disease is an evolved form of an old disease, so if that trend were true we’d expect no disease to ever have noticeable mortality.
Judging by a quick look at Twitter, this is going to be politically polarized right off the bat, with large swaths of the population immediately refusing vaccines or NPIs. So I think whether this turns into a serious pandemic is going to depend largely on the infectiousness of Monkeypox and not all that much else.
I don’t think that’s what’s happening in the situations I’m thinking about, but I’m not sure. Do you have an example dialogue that demonstrates someone taking a belief literally when it obviously wasn’t intended that way?
Do you think that conveying my motivation for the question would significantly lower the frequency of miscommunications? If so, why?
I tend to avoid that kind of thing because I don’t want it to bias the response. If I explain my motivations, then their response is more likely to be one that’s trying to affect my behavior rather than convey the most accurate answer. I don’t want to be manipulated in that way, so I try to ask questions that people are more likely to answer literally.
From the “interpretation” section of the link I provided:
Truthfulness should be the absolute norm for those who trust in Christ. Our simple yes or no should be completely binding since deception is never an option for us. If an oath is required to convince someone of our honesty or intent to be faithful, it suggests we may not be known for telling the truth in other circumstances.
It’s likely that the taking of oaths had become a way of manipulating people or allowing wiggle room to get out of some kinds of contracts. James is definite: For those in Christ, dishonesty is never an option.
[Question] Looking for someone to run an online seminar on human learning
My Approach to Non-Literal Communication
I travel frequently for my job, and spend >50% of my time away from home. Can any of the existing cryonics organizations handle someone who has about an equal chance of dying in any of the ~200 largest cities in the US and Canada?
What’s the conceptual difference between “running a search” and “applying a bunch of rules”? Whatever rules the cat AI is applying to the image must be implemented by some step-by-step algorithm, and it seems to me like that could probably be represented as running a search over some space. Similarly, you could abstract away the step-by-step understanding of how breadth-first search works and say that the maze AI is applying the rule of “return the shortest path to the red door”.
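To make that concrete, here’s a minimal sketch (the grid encoding and names are mine, purely for illustration) of the maze AI’s procedure written out as breadth-first search; described from the outside, the same code is just “applying the rule” of returning the shortest path to a red door.

```python
from collections import deque

def shortest_path_to_red_door(maze, start):
    """Breadth-first search over grid cells.

    `maze` maps (row, col) -> "wall", "open", or "red_door" (a made-up
    encoding for this example). Returns the shortest path from `start`
    to a red door, or None if no red door is reachable.
    """
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        row, col = path[-1]
        if maze.get((row, col)) == "red_door":
            return path  # viewed as a rule: "return the shortest path to a red door"
        for step in ((row + 1, col), (row - 1, col), (row, col + 1), (row, col - 1)):
            if maze.get(step, "wall") != "wall" and step not in visited:
                visited.add(step)
                queue.append(path + [step])
    return None
```

Whether we call this “running a search” or “applying a rule” seems to be a matter of description, not of mechanism.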
How could an algorithm know Bob’s hypothesis is more complex?
I think this is supposed to be Alice’s hypothesis?
I’m having trouble understanding how the maze example is different from the cat example. The maze AI was trained on a set of mazes that had a red door along the shortest path, so it learned to go to those red doors. When it was deployed on a different set of mazes, the goal it had learned didn’t match up with the goal its programmers wanted it to have. This seems like the same type of out-of-distribution behavior that you illustrated with the AI that learned to look for white animals rather than cats.
You presented the maze AI as different from the cat AI because it had an outer goal of “find the shortest path through the maze” and implemented that goal by iterating the inner goal of “breadth-first search for a red door”. The inner goal is aligned with the outer goal for all training mazes, but not for the real mazes. But couldn’t you frame the cat AI the same way? Maybe it has an outer goal of “check for a cat” and it implements that with an inner goal of “divide the image into a set of shapes that each contain only colors within [margin] of the average color. If there is at least one shape that’s within [margin] of white and has [shape] return yes, otherwise return no.”
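For instance, that hypothetical inner rule might look something like the following sketch (the margin, size thresholds, and shape test are all invented for the example; I’m not claiming any real classifier works this way):

```python
import numpy as np

WHITE = np.array([255, 255, 255])

def looks_like_cat(image, color_margin=30, min_fraction=0.05, max_fraction=0.6):
    """Answer "cat" iff the image contains a near-white region of plausible size.

    `image` is an (H, W, 3) uint8 array. The shape test from the comment is
    reduced here to a crude size check, just to show the kind of rule that
    could satisfy the outer goal on the training set while coming apart
    from it on other images.
    """
    near_white = np.all(np.abs(image.astype(int) - WHITE) <= color_margin, axis=-1)
    fraction = near_white.mean()
    return bool(min_fraction <= fraction <= max_fraction)
```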
How is the maze AI fundamentally different from the cat AI? Why is the inner/outer alignment model of thinking about an AI system more useful than thinking about it as a single optimizer that was trained on a flawed distribution?
it might contain over 101000000 candidates
This seems like an oddly specific number; is it supposed to be 10^1,000,000?
If so, why is it such a small space? If the model accepts 24-bit, 1000x1000 pixel images and has to label them all as “cat” or “no cat”, there should be 2^(2^24,000,000) possible models.
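For reference, here’s the back-of-the-envelope arithmetic I’m using (my own calculation, assuming the relevant space is the set of all possible labelings of all possible images):

```python
import math

bits_per_image = 24 * 1000 * 1000   # 24-bit color, 1000x1000 pixels
# There are 2**24,000,000 distinct images, hence 2**(2**24,000,000) possible
# ways to label every image as "cat" or "no cat".
log2_of_10_to_the_million = 1_000_000 * math.log2(10)

print(f"log2(10^1,000,000) ≈ {log2_of_10_to_the_million:,.0f}")       # about 3.3 million
print(f"log2(number of possible labelings) = 2^{bits_per_image:,}")   # i.e. 2^24,000,000
```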
I don’t know if this answers your question, but they have a technical guide here.
I didn’t know this was a thing. Is there a post that explains why it isn’t turned on by default? I looked around but couldn’t find anything about agreement voting from less than 10 years ago, and none of the older discussions I did find directly addressed that question anyway.
And are there any other types of voting that are turned off by default?
While friendly competition can be good in many contexts, I don’t think this is one of them. The holiday is about a dedicated team who were willing to die together for their cause. I don’t think competing to see who can go the longest without food would really be in the spirit of the holiday. I suspect it would also lead to bad feelings, having to police for cheating, etc.
The framing wasn’t an intentional choice, I wasn’t considering that aspect when I made the comment. I haven’t been privy to any of the off-LW conflict about it, so it wasn’t something that I was primed to look out for. I am not suggesting that there should be a community-wide standard (or that there shouldn’t be). I intended it as “here’s an idea that people may find interesting.”
Thoughts on having part of the holiday be “have tasty food easily accessible (perhaps within sight range) during the fast”?
Pros:
It’s in keeping with the original story.
It can help us see the dangers of having instant gratification available, and let us practice our ability to resist short-term urges for long-term benefits.
If the goal of rationalist holidays is to help us feel like a community of our own, then this could help people feel more “special”. Many religions have holidays that call for a fast, but as far as I know none of them expect people to tempt themselves.
Cons:
It makes the fast harder. If people are used to their self-control strategy being “don’t tempt myself”, this will be new to them, and if they end up breaking their fast, they’d likely feel demoralized.
I don’t find the argument you provide for this point at all compelling; your example mechanism relies entirely on human infrastructure! Stick an AGI with a visual and audio display in the middle of the wilderness with no humans around and I wouldn’t expect it to be able to do anything meaningful with the animals that wander by before it breaks down. Let alone interstellar space.