I don’t think they have passed it in the full sense. Before LLMs, there was the 5-minute Turing test, and some chatbots were passing it. I think 5 minutes is not enough. I bet that if you give me 10 hours with any currently existing LLM and a human, communicating only via text, I will be able to figure out which is which (assuming both try hard to appear human). I don’t think an LLM can yet come up with a consistent, non-contradictory life story. It would be an interesting experiment :)
Do you mean similarity at the outer level (e.g. the Turing test) or at the inner level (e.g. the neural network structure should resemble brain structure)?
If the first, would it mean that when an AI passes the Turing test it is sentient?
If the second, what are the criteria for similarity? Full brain emulation or something less complicated?
Are you working with a SOTA model? Here, mathematicians report a quite different story https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/
I guess “good at” was improper wording. I did not mean that they never produce nonsense; I meant that they can sometimes produce a correct solution. It is like a person who cannot run 100 meters in 10 seconds every day: even if they manage it in 5% of attempts, that is already impressive and shows it is possible in principle. And I guess “Ph.D. level” sounded as if they can write a Ph.D. thesis from scratch. I just meant that there are short, well-formulated problems that would take a Ph.D. student a few hours, if not a few days, which current LLMs can solve in a non-negligible fraction of cases.
Can you expand your argument for why LLMs will not reach AGI? What exactly is the fundamental obstacle they will never pass? So far they are successfully completing longer and longer (for humans) tasks https://benjamintodd.substack.com/p/the-most-important-graph-in-ai-right
Nor can I see why, in a few generations, LLMs won’t be able to run a company, as you suggested. Moreover, I don’t see why it is necessary to get to AGI. LLMs are already good at solving complicated, Ph.D.-level mathematical problems, and this keeps improving. Essentially, we just need an LLM version of an AI researcher. To create ASI you don’t need a billion Sam Altmans, you need a billion Ilya Sutskevers. Is there any reason to assume an LLM will never be able to become an excellent AI researcher?
I agree, they have a really bad life, but Eliezer seems to be talking here about those who work 60 hours/week to ensure their kids will go to a good school. A slightly different problem.
And regarding homeless people, there are different cases. In some, UBI will indeed help. But, unfortunately, in many cases the person has mental health problems or an addiction, and simply giving them money may not help.
I feel that one of the key elements of the problem is misplaced anxiety. If an ancient farmer stopped working hard, he would not get enough food, so his whole family would die. In modern Western society, the risk of dying from not working is nearly zero. (You are far more likely to die from exhausting yourself by working too hard.) When someone works too hard, it is usually not from fear of dying too early, or of their kids dying. It is fear of failure, of being the underdog, of not doing what you are supposed to, and plenty of other constructs that ancient people simply never got to; first they needed to survive. In this sense, we are far better off than even one hundred years ago.
Can UBI eliminate this fear? Maybe it can partially help, but people will still likely work hard to secure their future and the future of their children. Maybe making psychotherapy (to address the fear itself) more available for those with low income is a better solution. I understand that it would require training far more specialists than we have now. However, some people report a benefit from talking with GPT as a therapist https://x.com/Kat__Woods/status/1644021980948201473 , maybe it can help.
What is the application deadline? I did not find it in the post. Thank you!
Yes, absolutely! We will open the application for mentees later.
So far nothing, was distracted by other stuff in my life. Yes, let’s chat! frombranestobrains@gmail.com
After the rest of the USA is destroyed, a very unstable situation (especially taking into account how many people have guns) is quite likely. In my opinion, countries (and remote parts of countries) that will not be under attack at all are much better off.
Thank you for your research! First of all, I don’t expect the non-human parameter to give a clear power-law, since we need to add humans as well. Of course, close to singularity the impact of humans will be very small, but maybe we are not that close yet. Now for the details:
Compute:
1. Yes, Moore’s law was a fairly steady exponential for quite a while, but we indeed should multiply it.
2. The graph covers only a five-year period, and it shows revenue rather than the number of chips produced. Five years is too short for any conclusions, and I am not sure the fluctuations in revenue are driven mainly by the amount produced rather than by market price.
Data storage:
Yes, I saw that one before; it seems more like they just drew a nice picture rather than plotted real data.
General remarks:
I agree with the point that the appearance of AGI can be largely random. I can see two mechanisms that could potentially make it less random. First, we may need a lot of computational resources, data storage, etc. to create it, and as soon as a lab or company reaches the threshold, it happens easily with already existing algorithms. Second, we may need a lot of digitized data to train AGI, so the transition again happens only once we have that much data.
Lastly, notice that the creation of AGI is not yet a singularity in the mathematical sense. It will certainly accelerate our progress, but not to infinity, so if the data predict, for example, a singularity in 2030, it will likely mean AGI earlier than that.
How trustworthy would this prediction be? It depends on the amount of data and the noise. If we have just 10-20 data points scattered all over the graph, so that you can connect the dots any way you like, then not very. If, instead, we are lucky and the control parameter happens to be something easily measurable (something for which you can get just-in-time statistics, like the number of papers on arXiv right now, so we can get really a lot of data points) and the parameter continues to change as the theory predicts, it would be a quite strong argument for the timeline.
It is not very likely that the control parameter will be that easily measurable and will obey a power law that well. I think it is a very high risk, very high gain project (very high gain because, if the prediction turns out to be very clear, it will be possible to persuade more people that the problem is important).
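As a minimal sketch of what such a fit could look like (the model, the choice of control parameter, and all the numbers below are hypothetical, purely for illustration): one could model the control parameter as a finite-time power law x(t) = A (t_c - t)^(-alpha) and fit it to yearly observations, watching whether the estimated critical year t_c stays stable as new points arrive.

```python
# Sketch: fit a finite-time power law x(t) = A * (t_c - t)**(-alpha)
# to yearly measurements of some control parameter.
# All data below are made up, purely for illustration.
import numpy as np
from scipy.optimize import curve_fit

def finite_time_power_law(t, A, t_c, alpha):
    # Diverges as t approaches the critical time t_c from below.
    return A * (t_c - t) ** (-alpha)

# Hypothetical yearly observations of the control parameter.
years = np.array([2005.0, 2008.0, 2011.0, 2014.0, 2017.0, 2020.0, 2023.0])
values = np.array([1.0, 1.15, 1.3, 1.6, 1.9, 2.5, 3.6])

# Keep t_c above the last observation so the power law stays defined.
bounds = ([0.0, years[-1] + 0.1, 0.0], [np.inf, np.inf, np.inf])
p0 = [values[-1] * 10.0, years[-1] + 10.0, 1.0]  # rough initial guesses

params, cov = curve_fit(finite_time_power_law, years, values, p0=p0, bounds=bounds)
A_fit, t_c_fit, alpha_fit = params
t_c_err = np.sqrt(cov[1, 1])

print(f"Estimated critical year: {t_c_fit:.1f} +/- {t_c_err:.1f}")
print(f"Exponent alpha: {alpha_fit:.2f}")
```

If the fitted t_c keeps drifting as data accumulate, the prediction is probably not trustworthy yet; if it converges, that would strengthen the timeline argument.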
You are making a good point. Indeed, a system that would reward authors and experts would be quite complicated, so I was thinking about it on a purely volunteer basis (so in the initial stages it is non-profit). Then, if a group of people willing to work on the project has formed, they may turn it into a business project. If the original author of the idea is in the project, he may get something; otherwise not: the idea is already donated, no donations back. I will update the initial post to clarify this point.
As for your idea, I am totally not an expert in this field. Hopefully, we will find experts for all our ideas (I also have a couple).
Thank you very much, it does!
I think your answer is worth publishing as a separate post. It will be relevant for everyone who teaches.
It would be very interesting to look at the results of this experiment in more detail.
Yes, maybe I did not explain what I mean very well; however, gjm (see the comments below) seems to get it. The point is not that CFAR is very much like Lifespring (though I may have sounded like that). The point is that there are certain techniques (team spirit, deep emotional connections, etc.) that are likely to be used in such workshops and that will almost certainly make participants love the workshop, the organizers, and the other participants, while their effect on the participant’s life can be significantly weaker than the emotional change of mind. These techniques work considerably worse for online workshops, so this was one of the reasons I tried to understand why CFAR does not hold online workshops. Another reason was resentment towards CFAR for not doing it, since it would be much more convenient for me.
Are there any proven benefits of meditation retreats compared with regular meditation?
Ok, your point makes sense.
Basically, I am trying to figure out for myself whether going to the workshop would be beneficial for me. I do believe that CFAR is not simply trying to get as much money as possible. However, I am concerned that people are strongly biased towards liking the workshop afterwards, not because it really helps, but because of psychological mechanisms akin to Lifespring. I am not saying that CFAR is doing this intentionally; it could have arisen on its own. Maybe these mechanisms are even beneficial to whatever CFAR is doing, but they definitely make evaluation harder.
“When I was talking to Valentine (head of curriculum design at the time) a while ago he said that the spirit is the most important thing about the workshop.”
Now, this already sounds a little bit disturbing and reminiscent of Lifespring. Of course, the spirit is important, but I thought the workshop was going to arm us with instruments we can use in real life, not only in an emotional state of comradeship with like-minded rationalists.
I can understand your point, but I am not persuaded yet. Let me clarify why. During the year and a half of COVID, in-person workshops were not possible. During this time, there were people who would have strongly benefited from the workshop, and for whom it would have been helpful right then (for example, they were making a career choice). Some of them could arrange a private place for the duration of the workshop. It seems that for them, an online workshop during this period would certainly have been more beneficial than no workshop at all. Moreover, conducting at least one online workshop would have been a good experiment that would give useful information. It is totally not obvious to me why the prior that “an online workshop is useless or harmful, taking into account the opportunity cost” is so high that this experiment should not be conducted.
Yes, I hope someone from CFAR can maybe explain it better to me.
It is a good justification for this behavior, but it does not seem to be the most rational choice. Indeed, one could specify that a participant of the online workshop must have a private space (their own bedroom, an office, a hotel room, a remote spot in a park, whatever fits). I am pretty sure there is a significant number of people who would prefer an online workshop to the offline one (especially when all offline ones are canceled due to COVID), and who have or can find a private space for the duration of the workshop. Saying we are not doing it because some people lack privacy is like a restaurant refusing to serve meat to anyone because some of its customers are vegans. Of course, an online workshop is not for everyone, but there are people for whom it would work.
Thank you very much for catching the mistake! I checked, you are completely right.