ChatGPT 5.4 Thinking is very strong. Is anyone else impressed by this thing? No one (i.e., people in the streets, co-workers at my non-tech job, people at bars) has yet said to me “AGI is here.” Am I missing something?
eye96458
I would caution against saying “parties aren’t real” for at least two reasons. First, it more-or-less invites definitional wars, which are rarely productive. Second, when we think about explanatory and predictive theories, whether something is “real” (however you define it) is often irrelevant. What matters more is whether the concept is sufficiently clear / standardized / “objective” to measure something and thus serve as some replicable part of a theory.
I think this (implied) mode of reasoning can be pretty useful. For example:
Sally: A ghost just turned my television on again!
Tom: Ghosts aren’t real, so that’s not what happened.
But I’m like 75% sure that American political parties do exist (i.e., the correct ontology of the universe includes political parties alongside electrons, minds, and trees). I’d like to hear @Elizabeth’s argument against this.
Quantum mechanics is pretty well established, and we may suppose that it describes everything (at least, in low gravitational fields). Given that, pointing at a thing and saying “quantum mechanics!” adds no new information.
Are you making this argument?
P1: Quantum mechanics is well established.
P2: Quantum mechanics describes everything in low gravitational fields.
C1: So, calling a thing a “quantum system” doesn’t convey any information.
First of all, if everything is mathematically equivalent to an EU maximizer, then saying that something is an EU maximizer no longer represents meaningful knowledge, since it no longer distinguishes between fiction and reality.
I’m confused about your claim. For example, I can model (nearly?) everything with quantum mechanics, so then does calling something a quantum mechanical system not confer meaningful knowledge?
Except his name is George. He has a personality. He once had parents, maybe a school, maybe hopes and dreams. He is not detritus, but a person. Something terrible has gone wrong in his life, and we are of the opinion that it was his own fault. Karma. Just deserts.
I’m fascinated by the bolded claim. Are you asserting that there was a part of his life that was terrible AND that it, the terrible part, has gone wrong? Please clarify.
There is also this (incredibly well known?) website where (among other things) you can try to stay alive on a trip to Mars.
edit: And there is also No Vehicles in the Park.
Does the preference forming process count as thinking? If so, then I suspect that my desire to communicate that I am deep/unique/interesting to my peers is a major force in my preference for fringe and unpopular musical artists over Beyonce/Justin Bieber/Taylor Swift/etc. It’s not the only factor, but it is a significant one AFAICT.
And I’ve also noticed that if I’m in a social context and I’m considering whether or not to use a narcotic (e.g., alcohol), then I’m extremely concerned about what the other people around me will think about my abstaining (e.g., I may want to avoid communicating that I disapprove of narcotic use or that I’m not fun). In this case I’m just straightforwardly thinking about whether or not to take some action.
Are these examples of the sort of thing you are interested in? Or maybe I am misunderstanding what is meant by the terms “thinking” and “signalling”.
I think the way LLMs work might not be well described as having key internal gears or having an at-all illuminating python code sketch.
What motivates your believing that?
Would anyone like to have a conversation where we can intentionally practice pursuit of truth? (e.g., ensure that we can pass each other’s ITTs, avoid strawmanning, look for cruxes, etc.)
I’m open to considering a wide range of propositions and questions, for example:
What speech, if any, should be prohibited in high schools?
Why don’t universities do more explicit rationality training?
Is death a harm?
Under what conditions are centrally planned economies better than market economies?
Is monarchy superior to democracy?
I’d define “genuine safety role” as “any qualified person will increase safety faster than capabilities in the role”. I put ~0 likelihood that OAI has such a position. The best you could hope for is being a marginal support for a safety-based coup (which has already been attempted, and failed).
“~0 likelihood” means that you are nearly certain that OAI does not have such a position (i.e., your usage of “likelihood” has the same meaning as “degree of certainty” or “strength of belief”)? I’m being pedantic because I’m not a probability expert and AFAIK “likelihood” has some technical usage in probability.
If you’re up for answering more questions like this: how likely do you believe it is that OAI has a position where at least 90% of people who are both (A) qualified skill-wise (e.g., ML and interpretability experts) and (B) believe that AIXR is a serious problem would increase safety faster than capabilities in that position?
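(For what it’s worth, here is a minimal sketch of the technical sense of “likelihood” I had in mind: the probability of fixed, observed data viewed as a function of an unknown parameter. The coin-flip numbers are purely illustrative assumptions of mine, not anything from this thread.)

```python
# Technical sense of "likelihood": probability of fixed observed data
# (here, 7 heads in 10 coin flips) as a function of the coin's bias p.
from math import comb

def likelihood(p, heads=7, flips=10):
    """Binomial likelihood of `heads` in `flips` given bias `p`."""
    return comb(flips, heads) * p**heads * (1 - p) ** (flips - heads)

print(likelihood(0.5))  # likelihood of the data under a fair coin
print(likelihood(0.7))  # higher: 0.7 is the maximum-likelihood estimate
```

This is distinct from the everyday sense of “likelihood” as a subjective degree of belief, which is what I take “~0 likelihood” to mean above.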
There’s a different question of “could a strategic person advance net safety by working at OpenAI, more so than any other option?”. I believe people like that exist, but they don’t need 80k to tell them about OpenAI.
This is a good point and you mentioning it updates me towards believing that you are more motivated by (1) finding out what’s true regarding AIXR and (2) reducing AIXR, than something like (3) shit talking OAI.
I asked a related question a few months ago, i.e., if one becomes doom-pilled while working as an executive at an AI lab and one strongly values survival, what should one do?
Can I request tabooing the phrase “genuine safety role” in favor of more detailed description of the work that’s done?
I suspect that would provide some value, but did you mean to respond to @Elizabeth?
I was just trying to use the term as a synonym for “actual safety role” as @Elizabeth used it in her original comment.
There’s broad disagreement about which kinds of research are (or should count as) “AI safety”, and what’s required for that to succeed.
This part of your comment seems accurate to me, but I’m not a domain expert.
Can you clarify what you mean by “completely unjustified”? For example, if OpenAI says “This role is a safety role.”, then in your opinion, what is the probability that the role is a genuine safety role?
I don’t think science is a good framework for non-scientific things. If you wrap spirituality in science, you kill whatever substance you had by reducing it to something mundane and mechanical.
I find it somewhat difficult to understand exactly what you mean here and in the rest of the comment. Could you maybe define the terms “science”, “spirituality” and “non-scientific things” as you are using them here?
What you seek is joy, fulfillment and wisdom, so why not aim at that directly? Using science to fix the problems that science caused feels a bit like putting out a fire using fire. Let me also warn you that meta-science is worse than science. The more degrees of separation to reality, the worse you’re off mentally.
Are you recommending here that people should not use science in their attempts to pursue joy, fulfillment and wisdom?
And when you say “The more degrees of separation to reality, …”, what is the thing that you are talking about that is being separated from reality?
This is most homeless! Most people who are homeless are not homeless long. The majority, the vast majority, are on the come up. Never forget it.
I hadn’t realized that was the case. Do you have any good data on this?
I think East Asian islands have a combination of 1 and 2. In Taiwan, the 30-40 year boom saw most people getting a piece of the pie. Few are desperate enough to resort to violent crimes. Does this seem reasonable?
It looks to me like here you are saying “Reducing the number of impoverished people causes a reduction in violent crime.” I believe this proposition is at least plausible. But isn’t it a quite different claim from “Reducing the amount of wealth disparity causes a reduction in violent crime.”?
Specifically, the number of impoverished people and the amount of wealth disparity are not the same thing (although empirically they may have some common relationship in the contemporary world). Consider two possible societies of 100 people:
(A) Each person has a net worth of $500.
(B) Half the people have a net worth of $75,000 and the other half have a net worth of $3,000,000.
Notice, (B) has more wealth disparity than (A), but it also has fewer impoverished people than (A). And I would expect (B) to have less violent crime than (A).
Does this seem correct to you?
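To make concrete how disparity and poverty can come apart, here is a minimal sketch of the two societies above (the $10,000 poverty line and the use of the Gini coefficient as the disparity measure are my own illustrative assumptions):

```python
# Illustrative sketch: poverty headcount and wealth disparity can move in
# opposite directions. The poverty line and Gini measure are assumptions.

def gini(wealths):
    """Gini coefficient via mean absolute difference (0 = equal)."""
    n = len(wealths)
    mean = sum(wealths) / n
    mad = sum(abs(x - y) for x in wealths for y in wealths) / (n * n)
    return mad / (2 * mean)

POVERTY_LINE = 10_000  # illustrative threshold

society_a = [500] * 100                      # everyone equal, everyone poor
society_b = [75_000] * 50 + [3_000_000] * 50  # unequal, no one poor

for name, society in [("A", society_a), ("B", society_b)]:
    poor = sum(1 for w in society if w < POVERTY_LINE)
    print(f"Society {name}: Gini = {gini(society):.2f}, impoverished = {poor}")
```

Under these assumptions, (B) scores far higher on disparity (Gini ≈ 0.48 vs. 0.00) while having zero impoverished people, which is the distinction I’m pointing at.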
Did Taylor have any techniques for trying to increase the number of Type 2 disagreements and decrease the number of Type 1 disagreements among his staff?
You should consider attending law school, I guess.
Sure, that’s one option, but it requires a lot of time.
There’s a LARGE body of contract and debt-collection law and precedent, and relatedly, inheritance and probate law.
I have no doubt that this is true. Are you aware of a good short introduction?
It’s worth reading your credit card or mortgage agreement to get a sense of it.
I agree with you, but I’ve already done this.
Sorry that I wasn’t clear.
I want to know which laws and judicial precedents are most relevant to the situation that you are describing.
Again, this is a general point. One can bring in additional details to support the claim that the existing outcome is optimal or to support the claim that it is not optimal. But that was the point of my comment. We cannot just start with market outcome and claim success.
You’ve convinced me that my initial comment was mistaken in another way. Specifically, if I haven’t specified an objective (e.g., fewer than 150 incidents of people shitting in San Francisco streets each year, or, every point in San Francisco being within .25 miles of at least 4 free-to-use bathrooms), then it is meaningless to suggest that it is currently being satisfied. So, insofar as I suggested that an objective involving bathrooms was likely being satisfied (specifically, I suggested that we don’t need more bathrooms, but relative to what objective?) without actually specifying that objective, my comment was meaningless.
(Maybe I made this mistake because in my thinking I failed to distinguish between the market equilibrium and objectives.)
If the lens of public goods is not helpful, then perhaps look at positive externalities. The two are fairly closely related with regard to the question you’re asking about. Tyler Cowen’s blurb (scroll down a little) on Public Goods and Externalities notes how markets will underproduce goods with positive external effects.
Thanks for the link. Is it the case that people not shitting in the street is a positive externality?
And when you say “underproduce”, do you mean relative to the market equilibrium for bathrooms or some objective involving bathrooms?
Is AGI here yet?