Sherrinford
If you have a source on the Roman Empire, I'd be interested, both in plain descriptions of trends and in rigorous causal analysis. I've heard somewhere that the population growth rate in the Roman Empire declined below replacement level, which doesn't seem to fit with the claims about the causes of population growth-rate decline that I have heard throughout my life.
With respect to what you write here and what you wrote earlier, in particular "and have solutions to some problems you wanted to solve, but could not solve them before, novel mental visualization of math novel to you, novel insights, and an entirely new set of unsolved problems for the next day, and all of your key achievements of the night surviving into subsequent days)", it seems to me that you are describing a situation in which there is simultaneously a machine that can seemingly overcome all computational, cognitive and physical limits, and that will also empower you to overcome all computational, cognitive and physical limits.
The machine is completely different from all machines that humanity has invented: while a telescope, for example, enables us to see the surface of the moon, we do not depend on the goodwill of the telescope, and a telescope could not explore and understand the moon without us.
Maybe my imagination of such a new kind of post-singularity machine leaps too far, but I just don't see a role for you in "solving problems" in this world. The machine may give you a set of problems or exercises to solve, and maybe you can be happy when you solve them, like when you complete a level of a computer game.
The other experiences you describe seem like "science and philosophy at a rave/trance party", except that if you are serious about the omnipotence of the AGI, it is probably more like reading a science book or playing with a toy lab set at a rave/trance party, because any new insight you could come up with, the AGI would have had a lot earlier.
So in a way, it confirms my intuition that people who are positive about AGI seem to expect a world that is similar to being on (certain) drugs all of the time. But maybe I misunderstand that.
I'm surprised about the "disagree" vote to my comment. How do you judge the truth of the cited statement based on the post? (I'd say linked webpages do not count: the webpages are not part of the article, and the article does not list them as evidence.)
Thanks, mishka. That is a very interesting perspective! It does in some sense not feel “real” to me, but I’ll admit that that is possibly the case due to some bias or limited imagination on my side.
However, I’d also still be interested in your answer to the questions “How do you prepare? What do you expect your typical day to be like in 2050?”
repeatedly caught publishing false information, conspiracy theories and hoaxes, [undue weight] for opinions
So, is this true or not? I cannot judge this based on your post.
That is often the case, but not always, so it may count as a bit of evidence, but not very strong evidence. Otherwise it would be easy to automatically delete bots on Twitter and similar platforms.
That’s interesting, because
b) Wouldn't an LLM end it in a rhyme precisely because that is what a user would expect it to do? Therefore, I read not letting it end in a rhyme as saying "don't annoy me, now I am going to make fun of you!"
a) If my reading of b) is correct, then the account DID poke fun at the other user.
So, in a way, your reply confirms my rabbit/duck interpretation of the situation, and I assume people will have many more rabbit/duck situations in the future.
Of course you are right that the account suspension is evidence.
Noah Smith writes about
“1) AI flooding social media with slop, and 2) foreign governments flooding English-language social media with disinformation. Well, if you take a look at the screenshot at the top of this post, you’ll see the intersection of the two!”
Check the screenshot in his post and tell me whether you see a rabbit or a duck.
I see a person called A. Mason writing on Twitter and ironically subverting the assumption that she is a bot: she answers with the requested poem but lets it end with a sentence about Biden that confirms her original statement and doesn't rhyme.
Of course, this could also be an AI being so smart that it can create exactly that impression. This would be the start of the disintegration of social reality.
[Question] Pondering how good or bad things will be in the AGI future
Thanks! I thought the previously usual sorting was not just “latest” but also took a post’s karma into account. I probably misunderstood that.
Can I somehow get the old sorting algorithm for posts back? My lesswrong homepage is flooded with very old posts.
I wonder whether more people from those areas take part in the survey. They can assume that there are many participants from the same area, often of the same age and in the same jobs, which means they can be confident that their entries will remain anonymous.
I assume we are either lost in translation (which means I cannot phrase my thoughts clearly or am unable to put myself in your shoes) or you do not want to think about the question for some reason. I think I have to give up here. Nonetheless, thank you very much for the answer.
Of course, preferences are shaped by your social environment, but I assume that in any given situation you could still state a preference on the basis of which you would then enter into an exchange with the other relevant people?
Thanks. I don’t understand the sentence “Note that my comments about my responses to this probability are different from actual responses to having a baby because the scenrio is very differenz.” Would you be willing to elaborate?
This comment is just to note that I’d still be happy about an answer.
While I don’t have much to elaborate, maybe the following headline captures the relevant mood: https://unchartedterritories.tomaspueyo.com/p/what-would-you-do-if-you-had-8-years
I did not intend the word 'prepper' to be derogatory, but to be a word for 'classical' preparedness skills.
While I understand your risk assessment, and it may be true that increasing societal risk makes such prepper skills more valuable, I think it neglects the problem that 'digital' skills, both for job qualifications and for disaster situations, may also become more valuable than before. Since a day still has only 24 hours, it is not clear how the composition of the 'life preparedness curriculum' should differ from, for example, growing up 20 years ago.
I try to summarize your position:
You think that with a relevant probability, major catastrophic events will happen that lead to situations in which traditional non-digital “prepper” skills are relevant,
and therefore, parents or families should invest a larger share of their own and their children’s time and resources into learning such skills,
compared to a world that was not “on the eve of AI”.
Right?
I don’t understand your point, is it:
a) Life always ends with death, and many people believe that if their life ends with death, they don't want to live at all; or
b) Giving birth always gives “joy to yourself and the newborn” while also causing “suffering of other newborns”. (If so, why?)