Economist.
Thanks for giving a useful example.
For most people, I guess it would be better to delete the phrase “I’m such a fool” from the evaluation, in order to avoid self-blame that hardens into a self-image.
The “Snake cult of consciousness” theory sounds extremely fascinating. At the same time, it also sounds like the theories claiming the pyramids were built by aliens. For laypeople, it is hard to distinguish between important insights and clever nonsense.
Thank you very much. Why would liability for harms caused by AIs discourage publishing the weights of the most powerful models?
Okay, maybe I should rephrase my question: what is the typical AI safety policy they would enact if they could advise the president, the parliament, and other real-world institutions?
https://laneless.substack.com/p/the-copenhagen-interpretation-of-ethics Isn’t this the Substack of the original author?
By now there are several AI policy organizations. However, I am unsure which AI safety policy any of them would enforce if they had unlimited power. Is there a summary of that?
The Underreaction to OpenAI
I don’t really understand why Substack became so popular compared to, e.g., WordPress. Is Substack writing easier to monetize?
So your timelines are the same as in 2018?
Thanks for the article recommendations.
Did you take such things into account when you made the decision, or decisions?
Almost all the blogs in the world seem to have switched to Substack, so I’m wondering whether I’m the only one whose browser is very slow at loading and displaying comments on Substack blogs. Or is this a Firefox problem?
I think the “stable totalitarianism” scenario is less science-fictional than the annihilation scenario, because it only requires an extremely totalitarian state (something that already exists or has existed) enhanced by AI. It is possible that this would come along with random torture. That would be possible with a misguided AI as well.
I don’t fully understand the implications of your argument for why unpredictable things should not be frightening. In general, there is a difference between understanding and creating. The weather is unpredictable, but we did not create it; where we did and do create it, we indeed seem to be too careless. For human brains, we at least know that preferences are mostly not too crazy, and if they are, capabilities are not superhuman. With respect to the immune system, understanding may not be very deep, but intervention is mostly limited by understanding, and where that is not true, we may be in trouble.
Do you think there could be an amount of suffering at the end of a life that would outweigh 20 good years? (Including that this end could take very long.)
Thanks. What are the things that AI will, in 10, 20 or 30 years, still have “trouble with”, and what are the “relevant skills” to train your kids in?
The post’s starting point is “how fast AI is advancing and all the uncertainty associated with that (unemployment, potential international conflict, x-risk, etc.)”. You don’t need concrete high-p-of-doom timelines for that, or even to expect AGI at all. Expecting AGI is not necessary for “potential international conflict”, for example.
Could you please briefly describe the median future you expect?
A minor point regarding the EU’s institutions:
The European Parliament does not have “population-proportional membership from each country”. Rather, “the seats are distributed according to ‘degressive proportionality’, i.e., the larger the state, the more citizens are represented per MEP. As a result, Maltese and Luxembourgish voters have roughly 10x more influence per voter than citizens of the six largest countries” (https://en.wikipedia.org/wiki/European_Parliament); a rough back-of-the-envelope check of that ratio is sketched below.
The Council of the EU does not have “one vote per country” either; its rules usually prescribe qualified-majority voting (a more complicated rule weighting both states and population) and sometimes unanimity.
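To make the quoted ratio concrete, here is a minimal sketch of the arithmetic; the seat counts and population figures are rough values I am assuming for illustration, not numbers taken from the Wikipedia passage:

```python
# Back-of-the-envelope check of the "roughly 10x more influence per voter"
# claim. Seat counts and populations are approximate assumptions used for
# illustration only (not taken from the quoted source).

seats = {"Malta": 6, "Luxembourg": 6, "Germany": 96}
population = {"Malta": 540_000, "Luxembourg": 660_000, "Germany": 83_000_000}

for country in seats:
    citizens_per_mep = population[country] / seats[country]
    print(f"{country}: ~{citizens_per_mep:,.0f} citizens per MEP")

# Malta: ~90,000 citizens per MEP vs. Germany: ~864,583 citizens per MEP,
# a ratio of roughly 9.6, i.e. on the order of the quoted 10x.
```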
I completed the survey!
I’d still like to ask those questions (or a similar set of questions) somewhere. If someone has an idea where and how that could make sense, feel free to answer that as a comment to mine.
What is that reason you are referring to?