CS PhD student
Abhimanyu Pallavi Sudhir
Meaningful things are those the universe possesses a semantics for
Betting on what is un-falsifiable and un-verifiable
[Question] Political Roko’s basilisk
Disagree. Daria considers the colour of the sky an important issue because it is socially important, not because it is of actual cognitive importance. Ferris recognizes that it doesn't truly change much about his beliefs, since their society doesn't have any actual scientific theories predicting the colour of the sky (if it did, the alliances would not be formed around uncorrelated issues like taxes and marriage), and bothers with things he finds genuinely more important.
[Question] Godel in second-order logic?
[Question] A way to beat superrational/EDT agents?
current LLMs vs dangerous AIs
Most current “alignment research” with LLMs seems indistinguishable from “capabilities research”. Both are just “getting the AI to be better at what we want it to do”, and there isn’t really a critical difference between the two.
Alignment in the original sense was defined oppositionally to the AI's own nefarious objectives, which LLMs don't have, so alignment research with LLMs is probably moot.
Something related I wrote in my MATS application:

I think the most important alignment failure modes occur when deploying an LLM as part of an agent, i.e. a program that autonomously runs a limited-context chain of thought from LLM predictions, maintains long-term storage, and calls functions (such as search over that storage, self-prompting, and habit modification) either based on LLM-generated function calls or as cron jobs/hooks.

These kinds of alignment failures (1) only become truly serious when the agent is somehow objective-driven, or equivalently has feelings, which current LLMs have not been trained to be (I think that would need some kind of online learning, or learning to self-modify), and (2) can only be solved when the agent is objective-driven.
[Question] Utility functions without a maximum
It has extremely high immediate value: it solves IP rights entirely.
It’s the barbed wire for IP rights
That’s syntax, not semantics.
Oh right, lol, good point.
quick thoughts on LLM psychology
LLMs cannot be directly anthropomorphized, though something like "a program that continuously calls an LLM to generate a rolling chain of thought, dumps memory into a relational database, can call from a library of functions which includes dumping to and recalling from that database, and receives inputs that are added to the LLM context" is much more agent-like.
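For concreteness, here is a minimal sketch of that kind of wrapper. The `call_llm` stub stands in for whatever completion API is actually used, and the DUMP/RECALL conventions are invented here purely for illustration; none of this is from the original note.

```python
import sqlite3

def call_llm(prompt: str) -> str:
    # stand-in for an actual LLM completion call
    raise NotImplementedError("plug in a real LLM API here")

def run_agent(user_input: str, steps: int = 10) -> str:
    db = sqlite3.connect(":memory:")                 # relational long-term storage
    db.execute("CREATE TABLE memory (step INTEGER, note TEXT)")
    context = user_input                             # inputs are added to the LLM context
    for step in range(steps):
        thought = call_llm(context)                  # one link in the rolling chain of thought
        db.execute("INSERT INTO memory VALUES (?, ?)", (step, thought))  # dump to storage
        recalled = ""
        if thought.startswith("RECALL:"):            # LLM-generated function call
            query = thought[len("RECALL:"):].strip()
            rows = db.execute("SELECT note FROM memory WHERE note LIKE ?",
                              (f"%{query}%",)).fetchall()
            recalled = "\n".join(r[0] for r in rows)
        # keep only a limited window of context for the next call
        context = (context + "\n" + thought + "\n" + recalled)[-4000:]
    return context
```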
Humans evolved feelings as signals of cost and benefit — because we can respond to those signals in our behaviour.
These feelings add up to a “utility function”, something that is only instrumentally useful to the training process. I.e. you can think of a utility function as itself a heuristic taught by the reward function.
LLMs certainly do need cost-benefit signals about features of text. But I think their feelings/utility functions are limited to just that.
E.g. LLMs do not experience the feeling of “mental effort”. They do not find some questions harder than others, because the energy cost of cognition is not a useful signal to them during the training process (I don’t think regularization counts for this either).
LLMs also do not experience “annoyance”. They don’t have the ability to ignore or obliterate a user they’re annoyed with, so annoyance is not a useful signal to them.
Ok, but aren't LLMs capable of simulating annoyance? E.g. if annoying questions are followed by annoyed responses in the dataset, couldn't LLMs learn to experience some model of annoyance so as to correctly reproduce the verbal effects of annoyance in their responses?
More precisely, if you just gave an LLM the function
ignore_user()
in its function library, it would run it when "simulating annoyance", even though ignoring the user wasn't useful during training, because it's playing the role.

I don't think this is the same as being annoyed, though. For people, simulating an emotion and feeling it are often similar due to mirror neurons or whatever, but there is no reason to expect this is the case for LLMs.
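As a toy illustration of the point, here is what "having ignore_user() in the function library" could look like: the dispatch convention and names are made up for the example, and the role-played reply is enough to trigger the call.

```python
# ignore_user() is just one entry in a function library; the model's
# role-played reply can trigger it even if it was never useful in training.
def ignore_user() -> str:
    return "<no reply sent to user>"

FUNCTION_LIBRARY = {"ignore_user": ignore_user}

def dispatch(llm_reply: str) -> str:
    # if the (possibly role-played) reply contains a known function call, execute it
    for name, fn in FUNCTION_LIBRARY.items():
        if f"{name}()" in llm_reply:
            return fn()
    return llm_reply

print(dispatch("What a tedious question. ignore_user()"))
```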
conditionalization is not the probabilistic version of implies
P   Q   Q|P   P → Q
T   T   T     T
T   F   F     F
F   T   N/A   T
F   F   N/A   T

Resolution logic for conditionalization:
Q if P else None
Resolution logic for implies:
Q if P else True
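A few lines of Python restating the table make the difference explicit, with `None` playing the role of N/A:

```python
from itertools import product

def conditional(P: bool, Q: bool):
    # Q|P: only resolves when P holds (None plays the role of N/A)
    return Q if P else None

def implies(P: bool, Q: bool):
    # P -> Q: vacuously True when P is false
    return Q if P else True

print("P      Q      Q|P    P->Q")
for P, Q in product([True, False], repeat=2):
    print(P, Q, conditional(P, Q), implies(P, Q))
```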
One can absolutely construct a utility function for the robot. It's a "shooting-blue maximizer". Just because the apparent utility function is wrong doesn't mean there isn't a utility function.
I'm not sure your interpretation of logical positivism matches what the positivists actually say. They don't argue against having a mental model that is metaphysical; they point out that this mental model is simply a "gauge", and that anything physical is invariant under changes of this gauge.
The “Dutch books” example is not restricted to improper priors. I don’t have time to transform this into the language of your problem, but the basically similar two-envelopes problem can arise from the prior distribution:
f(x) = (1/4)·(3/4)^n where x = 2^n for some n ≥ 0, and f(x) = 0 if x cannot be written in this form
Considering this as a prior on the amount of money in an envelope, the expectation of the envelope you didn’t choose is always 8⁄7 of the envelope you did choose.
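A quick sanity check of the 8/7 figure, under what I take to be the intended setup (my assumption, not spelled out above): the envelope pair is (2^n, 2^(n+1)) with probability (1/4)(3/4)^n and you pick one of the two at random.

```python
from fractions import Fraction

# prior weight on the pair (2^n, 2^(n+1)); these sum to 1 over n >= 0
def p(n: int) -> Fraction:
    return Fraction(1, 4) * Fraction(3, 4) ** n

# Given your envelope holds 2^k (k >= 1), it is either the smaller member of
# pair n = k or the larger member of pair n = k - 1.
for k in range(1, 8):
    w_small, w_large = p(k), p(k - 1)
    expected_other = (w_small * 2 ** (k + 1) + w_large * 2 ** (k - 1)) / (w_small + w_large)
    print(k, expected_other / 2 ** k)   # prints 8/7 for every k >= 1
```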
There is no actual mathematical contradiction with this sort of thing, with proper or improper priors, thanks to the timely appearance of infinities. See here for an explanation:
https://thewindingnumber.blogspot.com/2019/12/two-envelopes-problem-beyond-bayes.html
I used to have an idea for a karma/reputation system: repeatedly recalculate karma weighted by the karma of the upvoters and downvoters on a comment (then normalize to avoid hyperinflation) until a fixed point is reached (a rough sketch in code is below).
I feel like this is vaguely somehow related to:
AlphaGoZero
Wealth in markets
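Here is a rough sketch of that recalculation loop, under assumed simplifications (a single vote matrix, non-negative karma, sum-to-one normalization); the function name, clipping rule, and toy data are my choices for illustration, not part of the original idea.

```python
import numpy as np

# votes[i, j] = +1 if user i upvoted user j, -1 for a downvote, 0 otherwise.
# Each round, a user's karma is the vote total weighted by the voters' current
# karma, then renormalized so scores don't inflate; iterate toward a fixed point.
def fixed_point_karma(votes: np.ndarray, iters: int = 100) -> np.ndarray:
    karma = np.ones(votes.shape[0])
    for _ in range(iters):
        new = votes.T @ karma          # weight each vote by the voter's karma
        new = np.maximum(new, 0.0)     # assumption: clip so karma stays non-negative
        total = new.sum()
        if total == 0:
            break
        karma = new / total            # normalize to avoid hyperinflation
    return karma

# toy example: users 0 and 1 upvote each other and both downvote user 2
votes = np.array([
    [0, 1, -1],
    [1, 0, -1],
    [0, 0,  0],
])
print(fixed_point_karma(votes))
```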
I think that the philosophical questions you’re describing actually evaporate and turn out to be meaningless once you think enough about them, because they have a very anthropic flavour.
This seems relevant to calculations of climate change externalities, where the research is almost always based on the direct costs of climate change if no one modified their behaviour, rather than on the cost of building a sea wall or planting trees.