That argument doesn’t work well on its own terms: we have extinguished far fewer species than we have not.
So humans are “aligned” if humans have any kind of values? That’s not how alignment is usually used.
“The Orthogonality Thesis asserts that there can exist arbitrarily intelligent agents pursuing any kind of goal.”
The Orthogonality Thesis is often used in a way that “smuggles in” the idea that an AI will necessarily have a stable goal, even though goals can be very varied. But similar reasoning shows that any combination of goal (in)stability and goallessness is possible as well: mindspace contains agents with fixed goals, agents with randomly drifting goals, agents with corrigible (externally controllable) goals, as well as non-agentive minds with no goals.
We must always start with the simplest possible explanations for the phenomena that surround us.
Why?
The fewer components, abstractions, or entities required for a hypothesis, the better the hypothesis.
Why?
(Not doubting Occam’s razor, pointing out that it needs an explanation).
There is more than one way to correctly describe reality.
That goes against the law of non-contradiction: if the two ways are different, they cannot both be correct.
Newton’s theory was nominally refuted by Einstein’s relativity, but this did not stop it from working.
“Working” means making correct predictions, not describing reality.
However, Stephen Hawking suggests instead that we consider them all true: that a theory accurately describes the fundamental nature of things is of less importance to us than that it gives us reliable mechanisms for interacting with reality.
How important something is depends on one’s values.
“All models are wrong, but some of them are useful.”
...is the opposite of “There is more than one way to correctly describe reality”, unless you start changing the meanings of “works”/“useful” versus “true”/“describes reality”.
PS. Nothing to say about induction?
It’s not two things, risk versus safety, it’s three things: existential risk versus sub-existential risk versus no risk. Sub-existential risk is the most likely on priors.
That would be a philosophical problem...
Truth and Feelings can be reconciled, so long as you are not extreme about either: if your true beliefs are hurtful, you can keep them to yourself. Your worldview can be kept separate from your persona. The problem is when you bring a third thing—the thing known as sincerity or tactlessness, depending on whether or not you believe in it—into the picture. If you feel obliged to tell the truth, you are going to hurt feelings.
This used to be well known, but is becoming unknown because of an increasing tendency to use words like “truth” and “honesty” in a way that encompasses offering unsolicited opinions in addition to avoiding lying. If you can’t make a verbal distinction, it’s hard to make a conceptual one.
He visibly cared about other people being in touch with reality. “I’ve informed a number of male college students that they have large, clearly detectable body odors. In every single case so far, they say nobody has ever told them that before,” he wrote. (I can testify that this is true: while sharing a car ride with Anna Salamon in 2011, he told me I had B.O.)[21]
Well, that goes beyond having true beliefs and only making true statements.
“Read the sequences....just the sequences”
Something a better, future version of rationalism could do is build bridges and facilitate communication between these little bubbles. The answer-to-everything approach has been tried too many times.
Either the Tao can influence the world in the present, in which case the conditioners can never *really* prevent it from reasserting itself; or it can’t, in which case how did we first find it anyway; or it controlled the beginning as first cause, in which case whatever happens anywhere ever is what it intended; or it intended something different but it’s not very good at its job.
Or it influences the world in proportion to how much it is recognised, and how much you influence the world is proportional to how much you recognise it. The Tao that controls you is not the Tao: the Tao you control is not the Tao. The Tao that does everything is not the Tao; the Tao that does nothing is not the Tao.
Footnotes three and four are the sources behind today’s understanding of consciousness as including “any kind of cognition....” as well as “awareness”.
The Wikipedia quote doesn’t show that independence is necessary for consciousness, and your arguments from the behaviour of the LLM don’t show that there is any awareness, or anything beyond forms of cognition.
I think of cognition as involving the process of reasoning.
The question is the relationship between cognition and consciousness, not reasoning. Your quotes show that, at best, cognition is necessary but insufficient for consciousness.
If I google “define Independent,” the first definition that comes up is “free from outside control; not depending on another’s authority.”
Independence in an absolute sense might be impossible: any deterministic system can be controlled if you know how it works and can set the initial conditions.
Right now, my computer is running programs, but that is based on programming from someone else’s cognition. The key here is that, if we dissect Chat-GPT4, I don’t believe you would find Python/Java/C++ or any known programming language that a programmer used in order to tell GPT4 how to solve the particular problems I gave it in the four sessions (from my original post and my own reply/addendum to my original post).
That seems to be the heart of the issue. No, its responses are not explicitly programmed in. Yes, its responses show the ability to learn and synthesise. Which means...minimally...that it actually is an AI, not a glorified search engine. That’s what AI is supposed to do.
The question is whether there is a slope from:
* Shows learning and synthesis in cognition
* Has independent cognition
* Is conscious
* (Has personhood?....should be a citizen...?)
If you think that learning and synthesis in cognition are sufficient for consciousness, you are effectively assuming that all AIs are conscious. But, historically, Artificial Consciousness has been regarded as a much higher bar than artificial intelligence.
most definitions of consciousness indicate that—if a being has independent cognition (i.e. a stream of consciousness)--then the being is conscious.
I don’t think that’s true. For instance, none of the definitions given in the LW wiki give that definition. And the whole argument rests on that claim—which rests on the meaning of “independent”. What is “independent”, anyway?
I assume it isn’t always like a bell curve, because smaller and poorer societies can’t afford the deadweight of useless knowledge.
How do we use Bayes to find kinds of truth other than predictiveness?
And really the conditions of the OP are actively contrary to good decision-making, e.g. that you don’t know your particular conception of the good (??) or that you’re essentially self-interested. . .
Well, they’re inimical to good personal self-interested decision-making, but why would that matter? Do you think justice and self-interested rationality are the same? If they are different, what’s the problem? Rawls’s theory would not necessarily predict the behaviour of a self-interested agent, but it’s not supposed to. It’s a normative theory: acting justly is how people should behave, not how they invariably do. If they have their own theories of ethics, well, they are theories and not necessarily correct. Mere disagreement between the front-of-the-veil and behind-the-veil versions of a person doesn’t tell you much.
There’s no reason to think, generally, that people disagree with John Rawls only because of their social position or psychological quirks.
They might have a well-constructed case against him; he might have a well-constructed case against them.
If it only takes five minutes, it is like buying a loaf of bread.
Does the world seem better to young people who are unable to afford housing?
Most knowledge is useless. Many people have heads filled with sport results and entertainment trivia. 50 years ago, people used to fix their own cars and make their own clothes.
There’s a powerful argument for smaller populations you didn’t mention at all: it would mean that there are more inelastic resources to go round. More land, so less of a housing crisis, and fossil fuels that last longer. Note that while high-population worlds have their own advantage, in being able to supply products that depend on economies of scale, those products are things like advanced semiconductors, which are something of a luxury compared to land and energy.
He doesn’t give The Answer. That’s one of the problems. I’ve read the sequences, and I don’t think his approach is that good. The other problem is that doing high-cost things at random, in the hope that they will pay off, is very inefficient.
If you have a meta-belief that none of your beliefs are certain, does that make all your beliefs celiefs?