If you asked a bunch of humans, would they make more sense than the AI?
This roughly matches the anecdotal evidence from my bubble. Something like 1 in 5 symptomatic cases get lingering symptoms that impair their ability to work and live, some fraction of those are not back to work after many months, and the long-term symptoms are an almost exact match for chronic fatigue/fibromyalgia. My hope is that, given the sheer number of long COVID cases, there will be more research into what causes it, and it might end up benefiting those with CFS, who now mostly suffer and struggle in silence and isolation.
Abstraction is a compression algorithm for a computationally bounded agent. I don’t see how it is related to a “goal”, except insofar as a goal is just another abstraction, and they all have to work together for the agent to maintain a reasonable level of fidelity in its internal map of the world.
Jody Azzouni wrote a bunch of stuff about it. He talked about whether countries are “real” in his recent podcast interview https://www.preposterousuniverse.com/podcast/2022/01/03/178-jody-azzouni-on-what-is-and-isnt-real/ (if you’d rather read, a transcript link is in there, as well).
That is, there isn’t much to study in “abstract” agency, independent of the substrate it’s implemented on.
Yeah, that’s the question: is agency substrate-independent or not? And if it is, does it help to pick a specific substrate, or would one make more progress by working more abstractly, or maybe both?
I am having trouble understanding the “free energy principle” as anything more than a control system that tries to minimize prediction error. If that’s all it is, there is nothing special about living systems; engineers have been building control systems for a long time. By that definition a Boston Dynamics walking robot is definitely a living system...
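To make the deflationary reading concrete, here is a minimal sketch (my own toy illustration, not any actual FEP model or thermostat firmware) of a controller that does nothing but reduce its prediction error, step by step:

```python
# Toy sketch: a "controller" that updates a scalar prediction to shrink
# its prediction error. The learning rate and setup are illustrative
# assumptions, not taken from any real free-energy-principle model.
def minimize_prediction_error(observations, learning_rate=0.1):
    """Nudge a prediction toward each observation; track squared error."""
    prediction = 0.0
    squared_errors = []
    for obs in observations:
        error = obs - prediction              # prediction error
        prediction += learning_rate * error   # move to reduce future error
        squared_errors.append(error ** 2)
    return prediction, squared_errors

# A constant signal: the prediction converges and error decays.
pred, errs = minimize_prediction_error([1.0] * 50)
```

On the deflationary view, this handful of lines is already “minimizing surprise” about its input, which is exactly why the definition seems to admit thermostats and walking robots.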
But such deflationary notions of agency seem deeply uncomfortable to a lot of people because they violate the very human-centric notion that lots of simple things don’t have “real” agency since we understand their mechanisms, whereas the things that do seem to have agency are complex in a way that keeps us from easily understanding how they work.
Yeah, that seems like a big part of it. I remember posting to that effect some years ago https://www.lesswrong.com/posts/NptifNqFw4wT4MuY8/agency-is-bugs-and-uncertainty
But given that we want to understand “real” agency, not some “mysterious agency” stemming from not understanding the inner workings of some glorified thermostat, would it not make sense to start with something simple?
Right, something like that. A crow is smart, though. That’s why I picked an example of a single-cell organism.
“you can’t understand human intelligence without understanding amoeba intelligence”
That does sound less trivially true, I agree. I am not sure what the difference is exactly…
nor does studying amoebas seem likely to be on the shortest path to AGI.
I don’t see how this follows. Not studying amoebas, per se, but the basic blocks of intelligence starting somewhere around the level of an amoeba, whatever they might turn out to be.
It’s a good point that there are trade-offs, and highly optimized programs, even if they perform a simple function, are hard to understand without “being inside” one. That’s one reason I linked a post about an even simpler and well understood potentially “agentic” system, the Game of Life, though it focuses on a different angle, not “let’s see what it takes to design a simple agent in this game”.
Compare: “You can’t understand digital addition without understanding Mesopotamian clay token accounting”.
Well, if we didn’t understand digital addition and were only observing some strange electrical patterns on a mysterious blinking board, going back to the clay token accounting might not have been a bad idea. And we do not understand agency, so why not go back to basics?
Why not talk about the agency of electrons?
Indeed, why not? Where is the emergence threshold, or a zone? I would think this is where one would want to start understanding the concept of agency.
Good point, introspection is a better term.
A converse statement was discussed over 50 years ago: https://www.jstor.org/stable/2265034
Every discussion of decision theories that is not just “agents with max EV win”, where EV is calculated as the sum of “probability of the outcome times the value of the outcome”, ends up fighting the hypothetical, usually by yelling that in zero-probability worlds someone’s pet DT does better than the competition. A trivial calculation shows that winning agents do not succumb to blackmail, stay silent in the twin PD, one-box in all Newcomb’s variants, and procreate in the miserable-existence case. I don’t know if that’s what FDT does, but hopefully it’s what a naive max-EV calculation suggests.
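The “trivial calculation” above can be spelled out for the blackmail case. All the probabilities and payoffs below are made-up illustrative numbers, chosen only to show the shape of the comparison between a policy that pays and one that credibly refuses:

```python
# Toy EV comparison (all probabilities and values are assumed for
# illustration): EV = sum of probability * value over outcomes.
def expected_value(outcomes):
    """outcomes: iterable of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Policy "pay up": blackmailers expect payment, so blackmail is common.
ev_pay = expected_value([(0.9, -10), (0.1, 0)])

# Policy "always refuse": blackmail becomes unprofitable, so it is rare,
# even though refusing is costly in the rare world where it happens.
ev_refuse = expected_value([(0.05, -100), (0.95, 0)])

# ev_refuse (about -5) beats ev_pay (about -9): the refusing policy wins.
```

The point is that once the policy itself shifts the probabilities of the outcomes, the plain max-EV sum already favors the agent who doesn’t succumb.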
The general approach that has been proven to work is “turn your idea into a potential money maker and be/find a person who can push it through to completion, like Musk/Jobs/Thiel”.
Is an E. coli an agent? Does it have a world-model, and if so, what is it? Does it have a utility function, and if so, what is it? Does it have some other kind of “goal”?
That’s the part I find puzzling in terms of the lack of time devoted to it: how can one talk about agency without figuring out basics like that? Though I personally argued that it might not even be possible, in a post which conjectured that vapor bubbles “maximizing their volume” in a pot of boiling water are not qualitatively different from bacteria swimming up a sugar gradient in search of food.
Consider joining Fetlife and browsing around for something that piques your interest, and maybe finding a group of people who have matching needs.
Interestingly, fewer (non-religious) people argue against a reframing of immortality as “eternal youth with a voluntary check-out option”.