Darwin still applies. Models (and memes within models) that work well and are popular are more likely to replicate: the companies that make them gain more resources and choose to use similar mechanisms and data to train the next generation of models.
Gradient descent and all that are just extra steps.
If the economy is so easily understood, then why do we have high inflation, a cost-of-living crisis, and rising inequality?
What is not understood is why these things are happening and how we can change them so that ordinary people are better off.
The fact that some people have some coherent theories for some aspects of the economy is not equivalent to us understanding the economy.
I feel like the entire framing here is the problem. You cannot see “The Thing” because you are looking at it from a perspective where The Thing isn’t apparent.
What is The Thing? It is having a partnership that you are both committed to. At its best this partnership becomes an aspect of your self, and your partner. The frame to see this in is that the partnership is an entity in its own right and is a part of the “I” that each partner identifies with. In this frame the question “what am I getting out of this relationship” is no longer entirely focussed on the individual “I” but also on the partnership “I”. When you frame the question in terms of what the individual “I” gets out of it then you are entirely missing the point and are unable to see the real value proposition.
If we dip our toe back into the individualistic frame: you could describe the value of relationships being in the partial dissolution of self in the partnership, which not only feels amazing but also gives a deeper level of meaning to your life.
I would say that a part of compassion, and empathy, is to recognise that those narratives are indeed valid, or else that there is some valid reason that people are as they are. Also, not everyone shares the moral value of optimising themselves or making themselves good at something. Disgust implies judgement, which implies a lack of compassion.
Since you seem to be motivated to make yourself better, which I agree is a good motivation, why don’t you challenge yourself to increase your compassion and humility?
Do you have compassion for yourself? What are you bad at that you are unable to make yourself good at? Do you feel disgust for yourself in those situations? Compassion begins with humility, which is something that you might want to work on.
I’m not sure I agree that it is easy for humans to robustly understand proofs. I think it takes a great deal of training to get humans to that point.
There’s the argument that increasing access to information creates competition for attention, which drives language to be more concise and readable, e.g. https://www.nature.com/articles/s44271-024-00117-1
In a post-scarcity world you probably want a lot of personal freedom.
Fun read. So, so many possible covariates. The causal web is very complicated here. Birth order affects lots and lots of other things, which can also affect the chance you become a cardinal. There are also many factors that affect both the number of children in a family and the chance those children become cardinals.
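As a toy illustration of that last point (a hypothetical sketch with made-up numbers, not an analysis of the actual data): if family size drives both a child’s birth order and the chance anyone in that family becomes a cardinal, a birth-order “effect” shows up even though birth order is causally inert.

```python
import random

# Hypothetical confounder sketch: family size affects both birth order
# and the chance of becoming a cardinal; birth order itself does nothing.
random.seed(0)

records = []
for _ in range(100_000):
    n_children = random.randint(1, 10)           # family size (made up)
    birth_order = random.randint(1, n_children)  # uniform within the family
    # Assume larger families are likelier to produce a cardinal
    # (e.g. more devout); birth_order never enters the probability.
    is_cardinal = random.random() < 0.001 * n_children
    records.append((birth_order, is_cardinal))

first_born = [c for b, c in records if b == 1]
later_born = [c for b, c in records if b > 1]
print(f"first-born cardinal rate: {sum(first_born) / len(first_born):.4f}")
print(f"later-born cardinal rate: {sum(later_born) / len(later_born):.4f}")
```

Later-borns come out roughly twice as likely to become cardinals, purely because they disproportionately come from large families. That is exactly the kind of covariate structure that makes this causal web hard to untangle.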
I have a meta-view on this that you might think falls into the bucket of “feels intuitive based on the progress so far”. To counter that, this isn’t pure intuition. As a side note, I don’t believe that intuitions should be dismissed; they should be at least a part of our belief-updating process.
I can’t tell you the fine details of what will happen, and I’m suspicious of anyone who can, because a) this is a very complex system and b) no-one really knows how LLMs work, how human cognition works, or what is required for an intelligence takeoff.
However, I can say that for the last decade or so, most predictions of AI progress have been on consistently longer timescales than what has actually happened. Things are happening more quickly than the experts believed they would. Things are accelerating.
I also believe that there are many paths to AGI, and that, given the amount of resources currently being put into the search, one of those paths will be found sooner rather than later.
The intelligence takeoff is already happening.
I agree with your general point about efficiency vs rationality, but I don’t see the direct connection to the article. Can you explain? It seems to me that a representation along correlated values is more efficient, but I don’t see how it is any less rational.
I would describe this as a human-AI system. You are doing at least some of the cognitive work through the scaffolding you put in place via prompt engineering etc., and that scaffolding doesn’t generalise to novel types of problems.
You seem to make a strong assumption that consciousness emerges from matter. This is uncertain. The mind body problem is not solved.
It is so difficult to know whether this is genuine or if our collective imagination is being projected onto what an AI is.
If it were genuine, I might expect it to be more alien. But then what could it say that would be coherent (as it’s trained to be) and also be alien enough to convince me it’s genuine?
You said that you are not interested in exploring the meaning behind the Green Knight. I think that it’s very important. In particular, your translation to the Old West changes the challenge in important ways. I don’t claim to know the meaning behind the Green Knight. But I believe that there is something significant in the fact that the knights were so obsessed with courage and honour, and the Green Knight put a challenge to them that they couldn’t turn down given their code. Gawain stepped forward partly to protect Arthur. That changes the game. I asked ChatGPT to describe the differences; here are some parts of the answer:
Moral and Ethical Framework: “Sir Gawain and the Green Knight” operates within a chivalric code that values honor, bravery, and integrity. Gawain’s acceptance of the challenge is a testament to his adherence to these ideals. In contrast, the Old West scenario lacks a clear moral framework, presenting a more ambiguous ethical dilemma that revolves around survival and personal pride rather than chivalric honor.
Social and Cultural Context: “Sir Gawain and the Green Knight” is deeply embedded in medieval Arthurian literature, reflecting the societal values and ideals of the time. The Old West scenario reflects a different set of cultural values, emphasizing individualism and the ability to face death bravely.
And with a bit more prompting:
If I were in a position similar to Sir Gawain, operating under the chivalric codes and values of the Arthurian legend, accepting the challenge could be seen as a necessary act to uphold honor and valor, integral to the identity of a knight. However, stepping out of the narrative and considering the challenge from a modern perspective, with contemporary ethical standards and personal values, my response would differ.
It’s useful in that it is a model that describes certain phenomena. I believe it is correct given the caveat that all models are approximations.
I did a physics undergraduate degree a long time ago. I can’t remember specifically but I’m sure the equation was derived and experimental evidence was explained. I have strong faith that matter converts to energy because it explains radiation, fission reactors and atomic weapons. I’ve seen videos of atomic bombs going off. I’ve seen evidence of radioactivity with my own eyes in a lab. I know of many technologies that rely on radioactivity to work—smoke alarms, Geiger counters, carbon dating, etc.
I have faith in the scientific process, and that many people have verified the equation and the phenomena. If the equation were not correct, then showing that would be a huge piece of work that would make the career of the scientist who did it. I’m sure many have tried.
Overall, the equation is a part of a whole network of beliefs. If the equation were incorrect then that would mean that my world model was very wrong in many uncorrelated ways. I find that unlikely.
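For concreteness, the equation in question is mass-energy equivalence, and a rough back-of-the-envelope number (standard textbook values, nothing from the original thread) shows why it sits so naturally in that network of beliefs:

```latex
% Mass-energy equivalence, converting one gram of matter
% entirely to energy, with c = 3 x 10^8 m/s:
\[
  E = mc^{2}
    \approx (10^{-3}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m/s})^{2}
    = 9 \times 10^{13}\,\mathrm{J}
\]
```

That is on the order of twenty kilotons of TNT from a single gram of matter, roughly the yield of an early atomic bomb; fission weapons convert only a small fraction of their fuel mass, but the scale is consistent with what those videos show.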
Well, I agree it is a strawman argument. Following the same lines as your argument, I would say the counterargument is that we don’t really care whether a weak model is fully aligned or not. Is my calculator aligned? Is a random number generator aligned? Is my robotic vacuum cleaner aligned? It’s not really a meaningful question.
Alignment is a bigger problem with stronger models. The required degree of alignment is much higher. So even if we accept your strawman argument it doesn’t matter.
I found this a useful framing. I’ve thought quite a lot about the offence versus defence dominance angle, and to me it seems almost impossible that we can trust that defence will be dominant. As you said, defence has to be dominant on every single attack vector, both known and unknown.
That is an important point because I hear some people argue that to protect against offensive AGI we need defensive AGI.
I’m tempted to combine the intelligence dominance and starting costs into a single dimension, and then reframe the question as “at what point would a dominant friendly AGI need to intervene to prevent a hostile AGI from killing everyone?”. The pivotal act view is that you need to intervene before a hostile AGI even emerges. It might be that we can intervene slightly later: before a hostile AGI has enough resources to cause much harm, but after we can tell whether it is hostile or friendly.
Thank you for the great comments! I think I can sum up a lot of that as “the situation is way more complicated and high dimensional and life will find a way”. Yes I agree.
I think what I had in mind was an AI system that is supervising all other AIs (or AI components) and preventing them from undergoing natural selection. A kind of immune system. I don’t see any reason why that would be naturally selected for in the short-term in a way that also ensures human survival. So it would have to be built on purpose. In that model, the level of abstraction that would need to be copied faithfully would be the high-level goal to prevent runaway natural selection.
It would be difficult to build for all the reasons that you highlight. If there is an immunity/self-replicating arms race then you might ordinarily expect the self-replication to win because it only has to win once while the immune system has to win every time. But if the immune response had enough oversight and understanding of the system then it could potentially prevent the self-replication from ever getting started. I guess that comes down to whether a future AI can predict or control future innovations of itself indefinitely.
Love it! Agentic AI creates another transmission pathway: through the md files etc. that tell agents how to use LLMs. These are perhaps quicker routes than replication via model training.