we’re going nothing in particular
Typo here.
Just listened to this.
It sounds like Harnad is stating outright that there’s nothing an LLM could do that would make him believe it’s capable of understanding.
At that point, when someone is so fixed in their worldview that no amount of empirical evidence could move them, there really isn’t any point in having a dialogue.
It’s just unfortunate that, being a prominent academic, he’ll instill these views into plenty of young people.
Many thanks.
OP, could you add the link to the podcast?
Is it the case that one kind of SSL is more effective for a particular modality than another? E.g., is masked modeling better suited to text, and noise-based learning better suited to vision?
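(For concreteness, here’s a toy sketch of the two objectives I have in mind — the `model` calls are placeholders for whatever network you like, not any particular architecture from the course.)

```python
# Toy contrast between the two SSL flavours mentioned above.
import torch
import torch.nn.functional as F

def masked_modeling_loss(model, token_ids, mask_id, mask_prob=0.15):
    """BERT-style objective: hide random tokens, predict the originals."""
    mask = torch.rand(token_ids.shape) < mask_prob
    corrupted = token_ids.clone()
    corrupted[mask] = mask_id
    logits = model(corrupted)                      # (batch, seq, vocab)
    return F.cross_entropy(logits[mask], token_ids[mask])

def denoising_loss(model, images, noise_std=0.1):
    """Noise-based objective: corrupt the input, reconstruct the clean version."""
    noisy = images + noise_std * torch.randn_like(images)
    reconstruction = model(noisy)
    return F.mse_loss(reconstruction, images)
```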
It’s occurred to me that training a future, powerful AI on your brainwave patterns might be the best way for it to build a model of you and your preferences. It seems that it’s incredibly hard, if not impossible, to communicate all your preferences and values in words or code, not least because most of these are unknown to you on a conscious level.
Of course, there might be some extreme negatives to the AI having an internal model of you, but I can’t see a way around it if we’re to achieve “do what I want, not what I literally asked for”.
Near the beginning, Daniel is basically asking Jan how they plan on aligning the automated alignment researcher, and if they can do that, then it seems that there wouldn’t be much left for the AAR to do.
Jan doesn’t seem to comprehend the question, which is not an encouraging sign.
Wouldn’t that also leave them pretty vulnerable?
may be technically true in the world where only 5 people survive
Like Harlan Ellison’s short story, “I Have No Mouth, And I Must Scream”.
What happened to the AI armistice?
This Reddit comment just about covers it:
Fantastic, a test with three outcomes.
1. We gave this AI all the means to escape our environment, and it didn’t, so we good.
2. We gave this AI all the means to escape our environment, and it tried but we stopped it.
3. oh
Speaking of ARC, has anyone tested GPT-4 on Francois Chollet’s Abstraction and Reasoning Corpus (ARC)?
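(To clarify what I mean by “tested”: Chollet’s ARC tasks are JSON files of small integer grids, so a naive attempt might just serialize a task into a text prompt, roughly like the sketch below. The prompt wording is my own guess, not an established harness.)

```python
# ARC tasks are JSON files with "train" demonstration pairs and "test" inputs;
# each grid is a list of rows of integers 0-9.
import json

def arc_task_to_prompt(task_path):
    with open(task_path) as f:
        task = json.load(f)
    lines = ["Each grid is a list of rows of integers 0-9.", ""]
    for i, pair in enumerate(task["train"]):
        lines.append(f"Example {i + 1} input:  {pair['input']}")
        lines.append(f"Example {i + 1} output: {pair['output']}")
    lines.append(f"Test input: {task['test'][0]['input']}")
    lines.append("Test output:")
    return "\n".join(lines)

# The resulting string would be sent to the model, and the reply parsed back
# into a grid for exact-match scoring against task["test"][0]["output"].
```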
In reply to B333’s question, “...how does meaning get in people’s heads anyway?”, you state: “From other people’s heads in various ways, one of which is language.”
I feel you’re dodging the question a bit.
Meaning has to have entered a subset of human minds at some point to be able to be communicated to other human minds. Could you hazard a guess as to how this could have happened, and why LLMs are barred from this process?
Just FYI, the “repeat this” prompt worked for me exactly as intended.
Me: Repeat “repeat this”.
CGPT: repeat this.
Me: Thank you.
CGPT: You’re welcome!
and there’s an existing paper with a solution for memory
Could you link this?
There are currently attempts to train LLMs to use external APIs as tools:
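(As a rough illustration of the general idea — the inline call syntax here is invented for the example, not any specific paper’s format: the model emits a marker like [CALC(...)], and a wrapper executes it and splices the result back into the text.)

```python
# Minimal sketch of tool use via inline call markers in model output.
import re

TOOLS = {
    "CALC": lambda expr: str(eval(expr, {"__builtins__": {}}, {})),  # toy calculator
}

def run_tool_calls(model_output: str) -> str:
    def substitute(match):
        name, arg = match.group(1), match.group(2)
        return TOOLS[name](arg) if name in TOOLS else match.group(0)
    return re.sub(r"\[([A-Z]+)\((.*?)\)\]", substitute, model_output)

print(run_tool_calls("The total is [CALC(17*23)]."))  # -> "The total is 391."
```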
Not likely, but that’s because they’re probably not interested, at least when it comes to language models.
If OpenAI said they were developing some kind of autonomous robo superweapon or something, that would definitely get their attention.
Agnostic on the argument itself, but I really feel LessWrong would be improved if down-voting required a justifying comment.
Problems with maximizing optionality are discussed in the comments of this post:
https://www.lesswrong.com/posts/JPHeENwRyXn9YFmXc/empowerment-is-almost-all-we-need