If a problem seems hard, the representation is probably wrong.
Beginning to really dislike the word agency.
Do you have any inklings of what people should be doing instead of saying “agency”? (at whatever level of abstraction)
There are quite a lot of people trying to deconfuse the concept, but it’s so cursed that I think a clean-sheet approach is often appropriate. Just split out whatever other things one means and ignore agency as a monolithic concept block.
Would love to see some examples of a “hard problem” where the representation was wrong. But maybe I’m not mathy enough to know the examples.
Copernicus, Arabic numerals, Feynman diagrams, the double helix, the periodic table, Cartesian coordinates, Einstein (and, for that matter, Minkowski), information theory (bits), germ theory. These are just the super famous ones.
Let’s take Copernicus: I assume the ‘hard problem’ he solved was the modelling of planets and other astronomical objects, right? A heliocentric model simplified the calculations needed? Also, was it even seen at the time as a problem? As I understand it, the Ptolemaic model, while needlessly complicated, did do a good job of modelling astronomical objects. I’m not familiar enough with the history to know.
How can I apply this to my own problem solving, on an everyday level?
Do you mean for humans or for AIs?
I’m vaguely aware of a lot of(?) discourse about (human) agency, most of which I ignore. But I don’t know that it’s really harder than any other virtue. Being consistently high-integrity might be hard, but it’s not because the representation is wrong.
Are you talking about the difficulty of exhibiting (colloquially) high agency?
That’s my reading due to your comparison to integrity.
I don’t think that’s what OP was talking about, though.
Yes? What was your reading?
I thought the whole thread was about the difficulty of understanding agency, i.e., breaking down the concept of agency into more useful concepts, or just making it better defined.
I don’t think it’s hard to make LLMs “exhibit” agency, or at least it’s hard in very similar ways to the ways it’s hard to make humans do so. On the other hand, discussion of AI risk that anthropomorphizes the AI usually grounds out in some confusion about, e.g., “what part of the AI system is the agent, if the whole thing is just a collection of floating-point ops, and how do we square that frame with the frames we usually use to describe agency?” (Attempts to metaphorically map this to the human setting typically just result in confusion about the human setting, too.)
Thoughts on my last post?
https://www.lesswrong.com/posts/S5thoEmJMhEEuqzmG/we-are-confused-about-agency
Tbh I find the topic exhausting right now and have trouble reading any posts that talk about it
Fair enough, though it’s not trying to deconfuse the concept, it’s more like your post above.