Unfortunately, if OpenAI the company is destroyed, all that happens is that all of its employees get hired by Microsoft, they change the lettering on the office building, and sama’s title changes from CEO to whatever high-level manager position he’ll occupy within Microsoft.
Razied
Hmm, but here the set of possible world states would be the domain of the function we’re optimising, not the function itself. Like, the No-Free-Lunch theorem states (from Wikipedia):
Theorem 1: Given a finite set V and a finite set S of real numbers, assume that f : V → S is chosen at random according to the uniform distribution on the set S^V of all possible functions from V to S. For the problem of optimizing f over the set V, no algorithm performs better than blind search.
Here V is the set of possible world arrangements, which is admittedly much smaller than the set of all possible data structures, but the theorem still holds because we’re averaging over all possible value functions on this set of worlds, a set which is not physically restricted by anything. I’d be very interested if you can find Byrnes’ writeup.
Obviously LLMs memorize some things; the easy example is that the pretraining dataset of GPT-4 probably contained lots of cryptographically hashed strings, which are impossible to infer from the overall patterns of language. Predicting those accurately absolutely requires memorization; there’s literally no other way unless the LLM can invert a cryptographic hash, which is computationally infeasible. Then there are in-between things like Barack Obama’s age, which might be partly inferable from other language (a president is probably not 10 yrs old or 230), but within the plausible range you still just need to memorize it.
There is no optimization pressure from “evolution” at all. Evolution isn’t tending toward anything. Thinking otherwise is an illusion.
Can you think of any physical process at all where you’d say that there is in fact optimization pressure? Of course at the base layer it’s all just quantum fields changing under unitary evolution with a given Hamiltonian, but you can still identify subparts of the system that are isomorphic with a process we’d call “optimization”. Evolution doesn’t have a single time-independent objective it’s optimizing, but it does seem to me that it’s basically doing optimization on a slowly time-changing objective.
Why would you want to take such a child and force them to ‘emotionally develop’ with dumber children their own age?
Because you primarily make friends in school with people in your grade, and if you skip too many grades, the physical difference between the gifted kid and other kids will prevent them from building a social circle based on physical play, and probably make any sort of dating much harder.
Predicting the ratio at t=20s is hopeless. The only sort of thing you can predict is the variance in the ratio over time: the ratio as a function of time is r(t) = 1/2 + ε(t), where ε(t) has typical size σ ~ 1/√N. Here the large number of atoms lets you predict σ, but the exact number after 20 seconds is chaotic. To get an exact answer for how much initial perturbation still leads to a predictable state, you’d need to compute the Lyapunov exponents of an interacting classical gas system, and I haven’t been able to find a paper that does this within 2 min of searching. (Note that if the atoms are non-interacting the problem stops being chaotic, of course, since they’re just bouncing around off the walls of the box)
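To illustrate the 1/√N scaling numerically, here’s a minimal sketch that ignores the dynamics entirely and just models each atom’s side of the box as an independent fair coin (the function name and parameters are my own illustrative choices):

```python
import random
import statistics

def left_fraction_std(n_atoms, n_samples=2000, seed=0):
    """Standard deviation of the fraction of atoms in the left half
    of the box, modeling each atom's side as an independent coin flip."""
    rng = random.Random(seed)
    fracs = [sum(rng.random() < 0.5 for _ in range(n_atoms)) / n_atoms
             for _ in range(n_samples)]
    return statistics.pstdev(fracs)

# Theory says std ≈ 0.5 / sqrt(N): quadrupling N roughly halves the fluctuations.
print(left_fraction_std(100))   # ≈ 0.05
print(left_fraction_std(400))   # ≈ 0.025
```

This only captures the equilibrium fluctuations, of course; nothing here touches the chaotic dynamics or the Lyapunov exponents.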
I’ll try to say the point some other way: you define “goal-complete” in the following way:
By way of definition: An AI whose input is an arbitrary goal, which outputs actions to effectively steer the future toward that goal, is goal-complete.
Suppose you give me a specification of a goal as a function from a state space to a binary output. Is the AI which just tries out uniformly random actions in perpetuity until it hits one of the goal states “goal-complete”? After all, no matter the goal specification this AI will eventually hit it, though it might take a very long time.
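That random-action agent is almost trivial to write down; a minimal sketch (the toy random-walk world and the ±10 goal set are purely illustrative choices of mine):

```python
import random

def blind_search(goal, actions, state, step, seed=0, max_steps=10**6):
    """Take uniformly random actions until `goal(state)` is true.

    If the goal set is reachable, this eventually hits it -- it just
    might take astronomically long.
    """
    rng = random.Random(seed)
    for t in range(max_steps):
        if goal(state):
            return state, t
        state = step(state, rng.choice(actions))
    return None, max_steps

# Toy world: a random walk on the integers, with "goal" = reaching ±10.
final, steps = blind_search(goal=lambda s: abs(s) == 10,
                            actions=[-1, +1],
                            state=0,
                            step=lambda s, a: s + a)
print(final, steps)
```

By the letter of the definition, this thing takes arbitrary goals and eventually steers the future into them, which is exactly why “effectively” has to carry all the weight.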
I think the interesting thing you’re trying to point at is contained in what it means to “effectively” steer the future, not in goal-arbitrariness.
E.g. I claim humans are goal-complete General Intelligences because you can give us any goal-specification and we’ll very often be able to steer the future closer toward it.
If you’re thinking of “goals” as easily specified natural-language things, then I agree with you, but the point is that Turing-completeness is a rigorously defined concept, and if you want the same level of rigour for “goal-completeness”, then most goals will be of the form “atom 1 is at location x, atom 2 is at location y, …” for all atoms in the universe. And when averaged across all such goals, literally just acting randomly performs as well as a human or a monkey trying their best to achieve the goal.
Goal-completeness doesn’t make much sense as a rigorous concept because of No-Free-Lunch theorems in optimisation. A goal is essentially a specification of a function to optimise, and all optimisation algorithms perform equally well (or rather poorly) when averaged across all functions.
There is no system that can take in an arbitrary goal specification (which is, say, a subset of the state space of the universe) and achieve that goal on average better than any other such system. My stupid random action generator is equally as bad as the superintelligence when averaged across all goals. Most goals are incredibly noisy, the ones that we care about form a tiny subset of the space of all goals, and any progress in AI we make is really about biasing our models to be good on the goals we care about.
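The averaging claim can even be checked exhaustively on a toy instance (the 3-point domain, the 3 values, and both strategies are arbitrary choices of mine): averaged over all 27 possible objective functions, an adaptive strategy that looks at its observations does exactly as well as a blind fixed one.

```python
from itertools import product

VALUES = (0, 1, 2)   # the finite codomain
DOMAIN = (0, 1, 2)   # the finite domain; a "function" is just a 3-tuple of values

def fixed_search(f):
    # Blind strategy: always query points 0 and 1, ignoring what it sees.
    return max(f[0], f[1])

def adaptive_search(f):
    # "Clever" strategy: query point 0, then pick the second query based on it.
    second = 1 if f[0] >= 1 else 2
    return max(f[0], f[second])

# Average the best value found over ALL possible objective functions.
all_functions = list(product(VALUES, repeat=len(DOMAIN)))
avg_fixed = sum(fixed_search(f) for f in all_functions) / len(all_functions)
avg_adaptive = sum(adaptive_search(f) for f in all_functions) / len(all_functions)

print(avg_fixed, avg_adaptive)  # both exactly 13/9: no strategy wins on average
```

Any cleverness in the adaptive strategy is exactly cancelled when you average over every possible assignment of values, which is the whole content of the No-Free-Lunch theorem.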
Zvi, you continue to be literally the best news aggregator on the planet for the stuff that I actually care about. Really, thanks a lot for doing this, it’s incredibly valuable to me.
Wouldn’t lowering IGF-1 also lead to really shitty quality of life, from lower muscle mass and much longer recovery times from injury?
The proteins themselves are held together primarily by covalent bonds, but a quick Google search says that the forces in the lipid layer surrounding cells are primarily non-covalent, and the forces between cells also seem to be non-covalent. Aren’t those the forces we should be worrying about?
It seems like Eliezer is saying “the human body is a sand-castle, what if we made it a pure crystal block?”, and you’re responding with “but individual grains of sand are very strong!”
But perhaps the bigger reason is that I find SIA intuitively extremely obvious. It’s just what you get when you apply Bayesian reasoning to the fact that you exist.
Correct, except for the fact that you’re failing to consider the possibility that you might not exist at all...
My entire uncertainty in anthropic reasoning is bound up in the degree to which an “observer” is at all a coherent concept.
And my guess is that is how Hamas see and bill themselves.
And your guess would be completely, hopelessly wrong. There is an actual document called “The Covenant of Hamas”, written in 1988 and updated in 2017, which you can read here. It starts with:
> Praise be to Allah, the Lord of all worlds. May the peace and blessings of Allah be upon Muhammad, the Master of Messengers and the Leader of the mujahidin, and upon his household and all his companions.
… so, uh, not a good start for the “not religious” thing. It continues:
> 1. The Islamic Resistance Movement “Hamas” is a Palestinian Islamic national liberation and resistance movement. Its goal is to liberate Palestine and confront the Zionist project. Its frame of reference is Islam, which determines its principles, objectives and means.
In the document they really seem to want to clarify at every opportunity that yes, indeed they are religious at the most basic level, and that religion impacts every single aspect of their decision-making. I strongly recommend that everyone here read the whole thing, just to see what it really means to take your religion seriously.
The 2017 version has been cleaned up, but in the 1988 covenant you also had this gem:
> The Day of Judgment will not come about until Moslems fight Jews and kill them. Then, the Jews will hide behind rocks and trees, and the rocks and trees will cry out: ‘O Moslem, there is a Jew hiding behind me, come and kill him.’ (Article 7)
> The HAMAS regards itself the spearhead and the vanguard of the circle of struggle against World Zionism… Islamic groups all over the Arab world should also do the same, since they are best equipped for their future role in the fight against the warmongering Jews. (Article 32)
It is important that Gazans won’t feel like their culture is being erased.
A new education curriculum is developed which fuses western education, progressive values and Muslim tradition while discouraging political violence.
These two things are incompatible. Their culture is the entire problem. To get a sense of the sheer vastness of the gap, consider the fact that Arabs read on average 6 pages per year. It would take a superintelligence to somehow convince the Palestinians to embrace western thought and values while not feeling like their culture is being erased.
Oh, true! I was going to reply that since probability is just a function of a physical system, and the physical system is continuous, then probability is continuous… but if you change an integer variable in C from 35 to 5343 or whatever, there’s no real sense in which the variable goes through all intermediate values, even if the laws of physics are continuous.
If he’s ever attended an event which started out with less than a 28% chance of orgy, which then went on to have an orgy, then that statement is false by the Intermediate Value Theorem, since there would have been an instant in time where the probability of the event crossed 28%.
The most basic rationalist precept is to not forcibly impose your values onto another mind.
What? How does that make any sense at all? The most basic precept of rationality is to take actions which achieve future world states that rank highly under your preference ordering. Being less wrong, more right, being bayesian, saving the world, not imposing your values on others, etc. are all deductions that follow from that most basic principle: Act and Think Such That You Win.
Wait, do lesswrongers not know about semaglutide and tirzepatide yet? Why would anyone do something as extreme as bariatric surgery when tirzepatide patients lose pretty much the same amount of weight after a year as with the surgery?
Unfortunately the entire complexity has just been pushed one level down into the definition of “simple”. The L2 norm can’t really be what we mean by simple, because scaling the weights in a layer by some constant A > 0, and the weights in the next layer by 1/A, leaves the output of the network invariant, assuming ReLU activations, yet you can obtain arbitrarily high L2 norms just by choosing A high enough.
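A quick sketch of the rescaling argument on a toy two-layer ReLU network (the shapes and the factor A are arbitrary choices of mine): the computed function is unchanged, because relu(A·z) = A·relu(z) for A > 0, while the L2 norm explodes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer ReLU network: x -> W2 @ relu(W1 @ x)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

def forward(w1, w2, v):
    return w2 @ np.maximum(w1 @ v, 0.0)

A = 1000.0
W1s, W2s = A * W1, W2 / A   # scale one layer up, the next one down

out, out_scaled = forward(W1, W2, x), forward(W1s, W2s, x)
l2 = np.sum(W1**2) + np.sum(W2**2)
l2_scaled = np.sum(W1s**2) + np.sum(W2s**2)

print(np.allclose(out, out_scaled))  # True: same function...
print(l2_scaled > 1000 * l2)         # True: ...with a vastly larger L2 norm
```

So any L2-based notion of simplicity assigns wildly different complexities to networks that compute literally the same function.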