ARC public test set is on GitHub and almost certainly in GPT-4o’s training data.
Your model has trained on the benchmark it’s claiming to beat.
Presumably some subjective experience that’s as foreign to us as humor is to the alien species in the analogy.
As if by magic, I knew generally which side of the political aisle the OP of a post demanding more political discussion here would be on.
I didn’t predict the term “wokeness” would come up just three sentences in, but I should have.
The Universe (which others call the Golden Gate Bridge) is composed of an indefinite and perhaps infinite series of spans...
@Steven Byrnes Hi Steve. You might be interested in the latest interpretability research from Anthropic which seems very relevant to your ideas here:
https://www.anthropic.com/news/mapping-mind-language-model
For example, amplifying the “Golden Gate Bridge” feature gave Claude an identity crisis even Hitchcock couldn’t have imagined: when asked “what is your physical form?”, Claude’s usual kind of answer – “I have no physical form, I am an AI model” – changed to something much odder: “I am the Golden Gate Bridge… my physical form is the iconic bridge itself…”. Altering the feature had made Claude effectively obsessed with the bridge, bringing it up in answer to almost any query—even in situations where it wasn’t at all relevant.
Luckily we can train the AIs to give us answers optimized to sound plausible to humans.
I think Minsky got those two stages the wrong way around.
Complex plans over long time horizons would need to be formed over some nontrivial world model.
When Jan Leike (OAI’s head of alignment) appeared on the AXRP podcast, the host asked how they plan on aligning the automated alignment researcher. Jan didn’t appear to understand the question (which had been the first to occur to me). That doesn’t inspire confidence.
Problems with maximizing optionality are discussed in the comments of this post:
https://www.lesswrong.com/posts/JPHeENwRyXn9YFmXc/empowerment-is-almost-all-we-need
we’re going nothing in particular
Typo here.
Just listened to this.
It sounds like Harnad is stating outright that there’s nothing an LLM could do that would make him believe it’s capable of understanding.
At that point, when someone is so fixed in their worldview that no amount of empirical evidence could move them, there really isn’t any point in having a dialogue.
It’s just unfortunate that, being a prominent academic, he’ll instill these views into plenty of young people.
Many thanks.
OP, could you add the link to the podcast?
Is it the case that one kind of SSL is more effective for a particular modality than another? E.g., is masked modeling better for text-based learning, and noise-based learning more suited to vision?
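For anyone unfamiliar with the distinction being asked about, here’s a minimal sketch of the two corruption schemes; all names, shapes, and the 15% mask rate are illustrative assumptions, not taken from any specific paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Masked modeling (BERT-style, common for text) ---
tokens = np.array([5, 12, 7, 3, 9])        # a toy token sequence
mask = rng.random(tokens.shape) < 0.15     # hide ~15% of positions
corrupted = np.where(mask, -1, tokens)     # -1 stands in for a [MASK] token
# The model would be trained to predict tokens[mask] from `corrupted`.

# --- Noise-based learning (denoising, common for vision) ---
image = rng.random((8, 8))                 # a toy grayscale "image"
noisy = image + 0.1 * rng.standard_normal(image.shape)
# The model would be trained to recover `image` (or the noise) from `noisy`.
```

The difference the question turns on: masked modeling discretely removes whole units (well suited to token sequences), while noise-based objectives perturb continuous values everywhere (well suited to pixels).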
It’s occurred to me that training a future, powerful AI on your brainwave patterns might be the best way for it to build a model of you and your preferences. It seems that it’s incredibly hard, if not impossible, to communicate all your preferences and values in words or code, not least because most of these are unknown to you on a conscious level.
Of course, there might be some extreme negatives to the AI having an internal model of you, but I can’t see a way around it if we’re to achieve “do what I want, not what I literally asked for”.
Near the beginning, Daniel is basically asking Jan how they plan on aligning the automated alignment researcher, and if they can do that, then it seems that there wouldn’t be much left for the AAR to do.
Jan doesn’t seem to comprehend the question, which is not an encouraging sign.
Wouldn’t that also leave them pretty vulnerable?
Considering a running AGI would be overseeing possibly millions of different processes in the real world, resistance to sudden shutdown is actually a good thing. If the AI can see better than its human controllers that sudden cessation of operations would lead to negative outcomes, we should want it to avoid being turned off.
To use Robert Miles’ example, a robot car driver with a big, red, shiny stop button should prevent a child in the vehicle from hitting that button, as the child would not actually be acting in its own long-term interests.