FWIW, users can at least highlight text in a post to disagree with.
Interesting! Graziano’s Attention Schema Theory is also basically the same: he proposes that consciousness is to be found in our models of our own attention, and that these models evolved to help control attention. To be clear, though, it’s not the mere fact of modelling or controlling attention, but that attention is modelled in a way that makes it seem mysterious or unphysical, and that’s what explains our intuitions about phenomenal consciousness.[1]
- ^
In the attention schema theory (AST), having an automatically constructed self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. The reason why such a self-model evolved in the brains of complex animals is that it serves the useful role of modeling, and thus helping to control, the powerful and subtle process of attention, by which the brain seizes on and deeply processes information.
Suppose the machine has a much richer model of attention. Somehow, attention is depicted by the model as a Moray eel darting around the world. Maybe the machine already had need for a depiction of Moray eels, and it coapted that model for monitoring its own attention. Now we plug in the speech engine. Does the machine claim to have consciousness? No. It claims to have an external Moray eel.
Suppose the machine has no attention, and no attention schema either. But it does have a self-model, and the self-model richly depicts a subtle, powerful, nonphysical essence, with all the properties we humans attribute to consciousness. Now we plug in the speech engine. Does the machine claim to have consciousness? Yes. The machine knows only what it knows. It is constrained by its own internal information.
AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.
Also, oysters and mussels can have a decent amount of (presumably heme) iron, and they seem unlikely to be significantly sentient; either way, your effects on wild arthropods are more important in your diet choices. I’m vegan except for bivalves.
Since consciousness seems useful for all these different species, in a convergent-evolution pattern even across very different brain architectures (mammals vs birds), then I believe we should expect it to be useful in our hominid-simulator-trained model. If so, we should be able to measure this difference to a next-token-predictor trained on an equivalent number of tokens of a dataset of, for instance, math problems.
What do you mean by difference here? Increase in performance due to consciousness? Or differences in functions?
I’m not sure we could measure this difference. It seems very likely to me that consciousness evolved before, say, language and complex agency. But complex language and complex agency might not require consciousness, and may capture all of the benefits that would be captured by consciousness, so consciousness wouldn’t result in greater performance.
However, it could be that:
- humans do not consistently have complex language and complex agency, and humans with agency are fallible as agents, so consciousness in most humans is still useful to us as a species (or to our genes);
- building complex language and complex agency on top of consciousness is the locally cheapest way to build them, so consciousness would still be useful to us; or
- we reached a local maximum in terms of genetic fitness, or evolutionary pressures are too weak on us now, and it’s not really possible to evolve away consciousness while preserving complex language and complex agency. So consciousness isn’t useful to us, but can’t be practically gotten rid of without loss in fitness.
Some other possibilities:
- The adaptive value of consciousness is really just to give us certain motivations, e.g. finding our internal processing mysterious, nonphysical or interesting makes it seem special to us, and this makes us
  - value sensations for their own sake, so seek sensations and engage in sensory play, which may help us learn more about ourselves or the world (according to Nicholas Humphrey, as discussed here, here and here),
  - value our lives more and work harder to prevent early death, and/or
  - develop spiritual or moral beliefs and adaptive associated practices.
- Consciousness is just the illusion of the phenomenality of what’s introspectively accessible to us. Furthermore, we might incorrectly believe in its phenomenality just because much of the processing we have introspective access to is wired in and its causes are not introspectively accessible, but instead cognitively impenetrable. The full illusion could be a special case of humans incorrectly using supernatural explanations for unexplained but interesting and subjectively important or profound phenomena.
Sorry for the late response.
If people change their own preferences by repetition and practice, then they usually have a preference to do that. So it can be in their own best interests, for preferences they already have.
I could have a preference to change your preferences, and that could matter in the same way, but I don’t think I should say it’s in your best interests (at least not for the thought experiment in this post). It could be in my best interests, or for whatever other goal I have (possibly altruistic).
In my view, identity preservation is vague and degreed, a matter of how much you inherit from your past “self”, specifically how much of your memories and other dispositions.
Someone could fail to report a unique precise prior (and one that’s consistent with their other beliefs and priors across contexts) for any of the following reasons, which seem worth distinguishing:
1. There is no unique precise prior that can represent their state of knowledge.
2. There is a unique precise prior that represents their state of knowledge, but they don’t have or use it, even approximately.
3. There is a unique precise prior that represents their state of knowledge, but, in practice, they can only report (precise or imprecise) approximations of it (not just computing more decimal places of a real number; which things go into the prior could also differ between approximations). Hypothetically, in the limit of resources spent on computing its values, the approximations would converge to this unique precise prior.
I’d be inclined to treat all three cases like imprecise probabilities, e.g. I wouldn’t permanently commit to a prior I wrote down to the exclusion of all other priors over the same events/possibilities.
Harsanyi’s theorem has also been generalized in various ways without the rationality axioms; see McCarthy et al., 2020 https://doi.org/10.1016/j.jmateco.2020.01.001. But it still assumes something similar to but weaker than the independence axiom, which in my view is hard to motivate separately.
Why do you believe AMD and Google make better hardware than Nvidia?
If utilities are bounded below, you can just shift them up to make them positive. But the geometric expected utility order is not preserved under shifts.
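To illustrate with made-up numbers (a minimal sketch, not from the original discussion), taking the geometric expected utility of a finite lottery to be exp(E[log u]): adding the same constant to every outcome can flip which of two lotteries ranks higher.

```python
import math

def geometric_expectation(outcomes, probs, shift=0.0):
    """Geometric expected utility exp(E[log(u + shift)]) of a finite lottery."""
    return math.exp(sum(p * math.log(u + shift) for u, p in zip(outcomes, probs)))

# Lottery A: a sure 4. Lottery B: 50/50 between 1 and 9
# (higher arithmetic mean than A, but lower geometric mean).
A = ([4.0], [1.0])
B = ([1.0, 9.0], [0.5, 0.5])

print(geometric_expectation(*A), geometric_expectation(*B))
# 4.0 vs 3.0, so A ranks above B.

print(geometric_expectation(*A, shift=100.0), geometric_expectation(*B, shift=100.0))
# ~104.0 vs ~104.9, so the ranking flips after shifting all outcomes up by 100.
```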
Violating the Continuity Axiom is bad because it allows you to be money pumped.
Violations of continuity aren’t really vulnerable to proper/standard money pumps. The author calls it “arbitrarily close to pure exploitation” but that’s not pure exploitation. It’s only really compelling if you assume a weaker version of continuity in the first place, but you can just deny that.
I think transitivity (+ independence of irrelevant alternatives) and countable independence (or the countable sure-thing principle) are enough to avoid money pumps, and I expect they give a kind of expected utility maximization form (combining McCarthy et al., 2019 and Russell & Isaacs, 2021).
Against the requirement of completeness (or the specific money pump argument for it by Gustafsson in your link), see Thornley here.
To be clear, countable independence implies your utilities are “bounded” in a sense, but possibly lexicographic. See Russell & Isaacs, 2021.
Even if we instead assume that by ‘unconditional’, people mean something like ‘resilient to most conditions that might come up for a pair of humans’, my impression is that this is still too rare to warrant being the main point on the love-conditionality scale that we recognize.
I wouldn’t be surprised if this isn’t that rare for parents’ love for their children. Barring their children doing horrible things (which is rare), I’d guess most parents would love their children unconditionally, or at least claim to. Most would tolerate bad but not horrible acts. And many would still love children who do horrible things. Partly this could be out of their sense of responsibility as a parent or attachment to the past.
I suspect such unconditional love between romantic partners and friends is rarer, though, and a concept of mid-conditional love like yours could be more useful there.
Maybe I’m out of the loop regarding the great loves going on around me, but my guess is that love is extremely rarely unconditional. Or at least if it is, then it is either very broadly applied or somewhat confused or strange: if you love me unconditionally, presumably you love everything else as well, since it is only conditions that separate me from the worms.
I would think totally unconditional love for a specific individual is allowed to be conditional on facts necessary to preserve their personal identity, which could be vague/fuzzy. If your partner asks you if you’d still love them if they were a worm and you do love them totally unconditionally, the answer should be yes, assuming they could really be a worm, at least logically. This wouldn’t require you to love all worms. But you could also deny the hypothesis if they couldn’t be a worm, even logically, in case a worm can’t inherit their identity from a human.
That being said, I’d also guess that love is very rarely totally unconditional in this way. I think very few would continue to love someone who tortures them and others they care about. I wouldn’t be surprised if many people (>0.1%, maybe even >1% of people) would continue to love someone after that person turned into a worm, assuming they believed their partner’s identity would be preserved.
It’s conceivable that how the characters/words are used across English and Alienese has a strong enough correspondence that you could guess matching words much better than chance. But I’m not confident that you’d have high accuracy.
Consider encryption. If you encrypted messages by mapping the same character to the same character each time, e.g. ‘d’ always gets mapped to ‘6’, then this can be broken with decent accuracy by comparing frequency statistics of characters in your messages with the frequency statistics of characters in the English language.
If you mapped whole words to strings instead of character to character, you could use frequency statistics for whole words in the English language.
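As a minimal sketch of that kind of character-frequency attack (my own illustration; the letter-frequency ordering is only approximate):

```python
from collections import Counter

# Rough ordering of English letters from most to least frequent.
ENGLISH_BY_FREQUENCY = "etaoinshrdlcumwfgypbvkjxqz"

def guess_substitution(ciphertext: str) -> dict:
    """Guess a cipher->plain mapping by pairing the most frequent ciphertext
    characters with the most frequent letters of English."""
    counts = Counter(ch for ch in ciphertext.lower() if not ch.isspace())
    ranked = [ch for ch, _ in counts.most_common()]
    return dict(zip(ranked, ENGLISH_BY_FREQUENCY))
```

On long enough texts this recovers many of the common letters, and word-level frequency matching works analogously with a ranked list of common English words.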
Then, between languages, this mostly gets way harder, but you might be able to make some informed guesses, based on:
- how often you expect certain concepts to be referred to (frequency statistics, although even between human languages, there are probably very important differences)
- guesses about extremely common words like ‘a’, ‘the’, ‘of’
- possible grammars
- similar words being written similarly, like verb tenses of the same verb, noun and verb forms of the same word, etc.
- (EDIT) fine-grained associations between words, e.g. if a given word is used in a random sentence, how often another given word is used in that same sentence. Do this for all ordered pairs of words (a minimal counting sketch follows this list).
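Such pairwise associations could be estimated with simple co-occurrence counts; a minimal sketch (my own illustration, with a toy corpus):

```python
from collections import Counter
from itertools import permutations

def cooccurrence_counts(sentences):
    """Count, for every ordered pair of words, how many sentences contain both."""
    counts = Counter()
    for sentence in sentences:
        words = set(sentence.lower().split())
        counts.update(permutations(words, 2))  # all ordered pairs within the sentence
    return counts

corpus = ["the eel darts around", "the eel hides", "attention darts around the scene"]
print(cooccurrence_counts(corpus)[("eel", "the")])  # 2
```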
An AI might use similar facts, and many more about much more fine-grained and specific uses of words and their associations, to make guesses, but I’m not sure an LLM token predictor trained mostly just on the two languages in particular would do a good job.
EDIT: Unsupervised machine translation, as Steven Byrnes pointed out, seems to be on a better track.
Also, I would add that LLMs trained without perception of things other than text don’t really understand language. The meanings of the words aren’t grounded, and I imagine it could be possible to swap some in a way that would mostly preserve the associations (nearly isomorphic), but I’m not sure.
The reason SGD doesn’t overfit large neural networks is probably the various measures specifically intended to prevent overfitting, like weight penalties, dropout, early stopping, data augmentation and noise on inputs, and learning rates large enough to prevent convergence. If you didn’t do those, running SGD to parameter convergence would probably cause overfitting. Furthermore, we test networks on validation datasets on which they weren’t trained, and throw out the networks that don’t generalize well to the validation set and start over (with new hyperparameters, architectures or parameter initializations). These measures bias us away from producing, and especially deploying, overfit networks.
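A minimal sketch of a few of those measures (my own illustration, using scikit-learn; the dataset and hyperparameters are made up): an L2 weight penalty, early stopping against a held-out split, and discarding settings that validate poorly.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

best_model, best_score = None, -1.0
for alpha in (1e-4, 1e-2, 1.0):  # L2 weight-penalty strengths to try
    model = MLPClassifier(
        hidden_layer_sizes=(64,),
        alpha=alpha,             # weight penalty
        early_stopping=True,     # stop when the internal validation score stops improving
        validation_fraction=0.1,
        max_iter=500,
        random_state=0,
    ).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:       # keep only the settings that generalize to the held-out set
        best_model, best_score = model, score
```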
Similarly, we might expect scheming without specific measures to prevent it. What could those measures look like? Catching scheming during training (or validation), and either heavily penalizing it, or fully throwing away the network and starting over? We could also validate out-of-training-distribution. Would networks whose caught scheming has been heavily penalized or networks selected for not scheming during training (and validation) generalize to avoid all (or all x-risky) scheming? I don’t know, but it seems more likely than counting arguments would suggest.
Thanks!
I would say experiments, introspection and consideration of cases in humans have pretty convincingly established the dissociation between the types of welfare (e.g. see my section on it, although I didn’t go into a lot of detail), but they are highly interrelated and often or even typically build on each other like you suggest.
I’d add that the fact that they sometimes dissociate seems morally important, because it makes it more ambiguous what’s best for someone if multiple types seem to matter, and there are possible beings with some types but not others.
If someone wants to establish probabilities, they should be more systematic, and, for example, use reference classes. It seems to me that there’s been little of this for AI risk arguments in the community, but more in the past few years.
Maybe reference classes are kinds of analogies, but more systematic and so less prone to motivated selection? If so, then it seems hard to forecast without “analogies” of some kind. Still, reference classes are better. On the other hand, even with reference classes, we have the problem of deciding which reference class to use or how to weigh them or make other adjustments, and that can still be subject to motivated reasoning in the same way.
We can try to be systematic about our search and consideration of reference classes, and make estimates across a range of reference classes or weights to them. Do sensitivity analysis. Zach Freitas-Groff seems to have done something like this in AGI Catastrophe and Takeover: Some Reference Class-Based Priors, for which he won a prize from Open Phil’s AI Worldviews Contest.
Of course, we don’t need to use direct reference classes for AI risk or AI misalignment. We can break the problem down.
There’s also a decent amount of call option volume+interest at strike prices of $17.5, $20, $22.5 and $25 (same links as the comment I’m replying to), which suggests to me that the market is expecting lower upside on a successful merger than you do. The current price is about $15.8/share, so $17.5 is only about +11% and $25 is only about +58%.
There’s also of course some volume+interest for call options at higher strike prices: $27.5, $30, $32.5.
I think this also suggests the market-implied odds calculations giving ~40% to successful merger are wrong, because the expected upside is overestimated. The market-implied odds are higher.
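As a rough illustration (my own numbers, risk-neutral and ignoring time value, using the $6 break price for Spirit from the analysis quoted below), with price = p*upside + (1 - p)*break, the implied probability depends heavily on the assumed upside:

```python
def implied_probability(price, upside, break_price):
    """Risk-neutral implied probability of success from price = p*upside + (1-p)*break."""
    return (price - break_price) / (upside - break_price)

price = 15.8       # approximate current share price
break_price = 6.0  # assumed break price for Spirit (from the quoted analysis)

print(implied_probability(price, 30.5, break_price))  # ~0.40 if you assume ~$30.5 upside
print(implied_probability(price, 25.0, break_price))  # ~0.52 with a more modest $25 upside
```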
From https://archive.ph/SbuXU, for calculating the market-implied odds:
Author’s analysis—assumed break price of $5 for Hawaiian and $6 for Spirit.
also:
Without a merger, Spirit may be financially distressed based on recent operating results. There’s some risk that Spirit can’t continue as a going concern without a merger.
Even if JetBlue prevails in court, there is some risk that the deal is recut as the offer was made in a much more favorable environment for airlines, though clauses in the merger agreement may prevent this.
So maybe you’re overestimating the upside?
From https://archive.ph/rmZOX:
In my opinion, Spirit Airlines, Inc. equity is undervalued at around $15, but you’re signing up for tremendous volatility over the coming months. The equity can get trashed under $5 or you can get the entire upside.
Unless I’m misreading, it looks like there’s a bunch of volume+interest in put options with strike prices of around $5, but little volume+interest in options with lower strike prices (some at $2.50, but much less): $5.5 for January 5th, $5 for January 19th, $5 for February 16th. There’s much more volume+interest for put options in general for February 16th. So if we take those seriously and I’m not misunderstanding, the market expects a chance it’ll drop below $5 per share, i.e. a drop of at least ~70%.
There’s more volume+interest in put options with strike prices of $7.50 and even more for $10 for February 16th.
Would some version of this still work if you have imprecise credences about the signs (and magnitudes) of considerations you’ll come up with, rather than 50-50 (or some other ratio)?
Even if the probabilities aren’t 50-50 but are still precise, we could adjust the donation amounts to match the probabilities and maintain the expected amount donated at $1000.
But if the probabilities are imprecise, I don’t think we can (precisely) maintain the expected donation amounts. We could pick donation amounts such that $1000, <$1000 and >$1000 are all possible expected donation amounts under our set of credences (representor).
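A minimal numerical sketch of both points (my own illustration; the payoff structure and numbers are assumed, not from the original post):

```python
def expected_donation(p, amount_if_positive, amount_if_negative):
    """Expected donation if the consideration turns out positive with probability p."""
    return p * amount_if_positive + (1 - p) * amount_if_negative

# Precise but non-50-50 probability: choose amounts so the expectation is exactly $1000.
p = 0.75
a = 800.0
b = (1000 - p * a) / (1 - p)  # = 1600
assert abs(expected_donation(p, a, b) - 1000) < 1e-9

# Imprecise probability, e.g. a representor spanning p in [0.6, 0.9]: the same
# amounts now give a range of expected donations straddling $1000.
print([expected_donation(q, a, b) for q in (0.6, 0.75, 0.9)])  # [1120.0, 1000.0, 880.0]
```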