That Monte Carlo Method sounds a lot like dreaming.
Purged Deviator
Okay, this may be one of the most important articles I’ve read here. I already knew about OODA loops and how important they are, but putting names to the different failure modes, which I have seen and experienced thousands of times, gives me the handles with which to grapple them.
The main thing I want to say is thank you, I’m glad I didn’t have to write something like this myself, because I do not know if it would have been nearly as clear & concise or nearly as good!
Archaeologist here, I’ll be taking this comment as permission!
[comment removed by author]
I’ve wondered this a lot too. There is a lot of focus on and discussion about “superintelligent” AGI here, or even human-level AGI, but what about “stupid” AGI? While superintelligent AGI is still out of reach, is there not something to be learned from a hypothetical AGI with the intelligence level of, say, a crow?
“You can’t understand digital addition without understanding Mesopotamian clay token accounting”
That’s sort of exactly correct? If you fully understand digital addition, then there’s going to be something at the core of clay token accounting that you already understand. Complex systems tend to be built on the same concepts as simpler systems that do the same thing. If you fully understand an elevator, then there’s no way that ropes & pulleys can still be a mystery to you, right? And to my knowledge, studying ropes & pulleys was a step on the way to elevators, so it would make sense to me that going “back to basics”, i.e. studying simpler real models, could help us make something we’re still trying to build.
Even if I disagree with you, thank you for posing the example!
An explanation that I’ve seen before of “where agency begins” is when an entity executes OODA loops (Observe, Orient, Decide, Act). I don’t know if OODA loops are a completely accurate map of reality, but they’ve been a useful model so far. If someone were going to explore “where agency begins”, OODA loops might be a good starting point.
I feel like an article about “what agency is” must’ve already been written here, but I don’t remember it. In any case, that article on agency in Conway’s Life sounds like my next stop, thank you for linking it!
I didn’t pick it up from any reputable sources. The white paper on military theory that created the term was written many years ago, and since then I’ve only seen that explanation tossed around informally in various places, not investigated with serious rigor. OODA loops seem to be seldom discussed on this site, which I find kinda weird, but a good full explanation of them can be found here: Training Regime Day 20: OODA Loop
I tried to figure out on my own whether executing an OODA loop was a necessary & sufficient condition for something to be an intelligent agent (part of an effort to determine what the smallest & simplest thing which could still be considered true AGI might be). I found that while executing OODA loops seems necessary for something to have meaningful agency, doing so is not sufficient for something to be an intelligent agent.
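To make that concrete, here’s a toy Python sketch (entirely my own illustration; the thermostat and its function names are hypothetical, not from any source): a thermostat executes something like a degenerate OODA loop, observing, orienting, deciding, and acting every cycle, yet it clearly isn’t an intelligent agent.

```python
# Toy sketch (hypothetical, for illustration only): a thermostat as a
# degenerate OODA loop. It runs the loop, but no one would call it an
# intelligent agent, so executing the loop alone can't be sufficient.

def thermostat_step(room_temp: float, target: float, heater_on: bool) -> bool:
    observation = room_temp                    # Observe: sense the environment
    if heater_on:                              # Orient: interpret the reading,
        too_cold = observation < target + 0.5  # with a little hysteresis so
    else:                                      # the heater doesn't flicker
        too_cold = observation < target - 0.5
    decision = too_cold                        # Decide: heat iff too cold
    return decision                            # Act: the result drives the heater

heater = False
for temp in [18.0, 19.2, 20.1, 20.8, 19.3]:
    heater = thermostat_step(temp, target=20.0, heater_on=heater)
    print(f"temp={temp:.1f} -> heater {'on' if heater else 'off'}")
```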
Thank you for your interest, though! I wish I could just reply with a link, but I don’t think the paper I would link to has been written yet.
What do you disagree about?
I don’t know. Possibly something, probably nothing.
the essence of [addition as addition itself]…
The “essence of cognition” isn’t really available for us to study directly (so far as I know), except as a part of more complex processes. Finding many varied examples may help determine what is the “essence” versus what is just extraneous detail.
While intelligent agency in humans is definitely more interesting than in amoebas, knowing exactly why amoebas aren’t intelligent agents would tell you one detail about why humans are, and may thus tell you a trait that a hypothetical AGI would need to have.
I’m glad you liked my elevator example!
Well, it has helped me understand & overcome some of the specific ways that akrasia affects me, and it has also helped me understand how my own mind works, so I can alter and (hopefully) optimize it.
With priors of 1 or 0, Bayes’ rule stops working permanently. If something is running on real hardware, then it has a limit on its numeric precision. On a system that was never designed to make precise mathematical calculations, one where 8/10 doesn’t feel significantly different from 9/10, or one where “90% chance” feels like “basically guaranteed”, the numeric precision may be so low that it doesn’t take much for a level of certainty to be rounded up to 1 or down to 0.
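A toy calculation shows how permanent that rounding is (this is just my own illustration of a standard Bayes update, not anything from the post):

```python
# Toy illustration: once a probability is rounded to exactly 0 (or 1),
# Bayes' rule can never move it again, no matter how strong the evidence.

def bayes_update(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
    """P(hypothesis | evidence) for evidence with the given likelihoods."""
    numerator = p_e_if_true * prior
    denominator = numerator + p_e_if_false * (1.0 - prior)
    if denominator == 0.0:
        return prior  # the model says this evidence is impossible; belief is stuck
    return numerator / denominator

p = 0.9
for _ in range(3):
    p = bayes_update(p, 0.99, 0.01)  # evidence 99x likelier if hypothesis is true
    print(p)                         # climbs toward, but never reaches, 1.0

print(bayes_update(0.0, 0.99, 0.01))  # stays 0.0 forever: all evidence is ignored
```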
As always, thanks for the post!
The Definition of Good and Evil
Epistemic Status: I feel like I stumbled upon this; it has passed a few filters for correctness; I have not rigorously explored it, and I cannot adequately defend it, but I think that is more my own failing than a failure of the idea.
I have heard it said that “Good and Evil are Social Constructs”, or “Who’s really to say?”, or “Morality is relative”. I do not like those claims at all, and I think they are completely wrong. Since then, I have either found, developed, or come across (I don’t remember how I got this) a model of Good and Evil, which has so far seemed accurate in every situation I have applied it to. I don’t think I’ve seen this model written explicitly anywhere, but I have seen people quibble about the meaning of Good & Evil in many places, so whether this turns out to be useful, or laughably naïve, or utterly obvious to everyone but me, I’d rather not keep it to myself anymore.
The purpose of this, I guess, is this: when the map has become so smudged and smeared that some people question whether it ever corresponded to the territory at all, to figure out what part of the territory this part of the map was supposed to refer to. I will assume that we have all seen or heard examples of things which are Good, things which are Evil, things which are neither, and things which are somewhere in between. An accurate description of Good & Evil should match those experiences the vast majority (all?) of the time.
It seems to me that, among the clusters of things in possibility space, the core of Good is “to help others at one’s own expense”, while the core of Evil is “to harm others for one’s own benefit”.
In my limited attempts at verifying this, the Goodness or Evilness of an action or situation has so far seemed to correlate with the presence, absence, and intensity of these versions of Good & Evil. Situations where one does great harm to others for one’s own gain seem clearly Evil, like executing political opposition. Situations where one helps others at a cost to oneself seem clearly Good, like carrying people out of a burning building. Situations where neither harm nor help is done, and no benefit is gained nor cost expended, seem neither Good nor Evil, such as a rock sitting in the sun, doing nothing. Situations where both harm is done & help is given, and where both a cost is expended and a benefit is gained, seem both Good and Evil, or somewhere in between, such as rescuing an unconscious person from a burning building, and then taking their wallet.
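To make the model concrete, here is a toy sketch (purely my own illustration; the function and the scores are hypothetical): score an action by the help & harm it does to others and the cost & gain to oneself, and the four cases above fall out.

```python
# Toy sketch (hypothetical) of the model: Good = helping others at one's own
# expense; Evil = harming others for one's own benefit.

def classify(help_to_others: float, harm_to_others: float,
             cost_to_self: float, gain_to_self: float) -> str:
    good = help_to_others > 0 and cost_to_self > 0  # help at one's own expense
    evil = harm_to_others > 0 and gain_to_self > 0  # harm for one's own benefit
    if good and evil:
        return "both / somewhere in between"
    if good:
        return "Good"
    if evil:
        return "Evil"
    return "neither"

print(classify(0, 10, 0, 5))  # executing political opposition -> Evil
print(classify(10, 0, 5, 0))  # carrying people out of a burning building -> Good
print(classify(0, 0, 0, 0))   # a rock sitting in the sun -> neither
print(classify(10, 2, 5, 3))  # rescuing someone, then taking their wallet -> both
```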
The correctness of this explanation depends on whether it matches others’ judgements of specific instances of Good or Evil, so I can’t really prove its correctness from my armchair. The only counterexamples I have seen so far involved significant amounts of motivated reasoning (someone who was certain that theft wasn’t wrong when they did it).
I’m sure there are many things wrong with this, but I can’t expect to become better at rationality if I’m not willing to be crappy at it first.
Here’s an analysis by Dr. Robert Malone about the Ukraine biolabs, which I found enlightening:
I glean that “biolab” is actually an extremely vague term, and doesn’t specify the facility’s exact capabilities at all. They could very well have had an innocuous purpose, but Russia would’ve had to treat them as a potential threat to national security, in the same way that Russian or Chinese “biolabs” in Mexico might sound bad to the US, except Russia is even more paranoid.
From what I have previously heard about drones, I am uncertain what training is required to operate them, and what weather limitations determine when they can & cannot fly. I know that being unable to fly in anything other than near-perfect weather conditions has been a problem for drones in the past, and those same limitations do not apply to ground-based vehicles.
I kinda wonder if this is what happened with Eliezer Yudkowsky, especially after he wrote Harry Potter and the Methods of Rationality?
I do agree for the most part. Robotic warfare that can efficiently destroy your opponent’s materiel without directly risking your own materiel & personnel is an extremely dominant strategy, and will probably become the future of warfare. At least of warfare like this, as opposed to police actions.
For self-defense, that’s still a feature, not a bug. It’s generally seen as more evil to do more harm when defending yourself, and in law, defending yourself with lethal force is “justifiable homicide”; it’s specifically called out as something much like an “acceptable evil”. Would it be more or less evil to cause an attacker to change their ways without harming them? Would it be more or less evil to torture an attacker before killing them?
“...by not doing all the Good...” In the model, it’s actually really intentional that “a lack of Good” is not part of the definition of Evil, because it really isn’t the same thing. There are idiosyncrasies in this model which I have not found all of yet. Thank you for pointing them out!
The first paragraph is equivalent to saying that “all good & evil is socially constructed because we live in a society”, and I don’t want to call someone wrong, so let me try to explain...
An accurate model of Good & Evil will hold true, valid, and meaningful among any population of agents: human, animal, artificial, or otherwise. It is not at all dependent on existing in our current, modern society. Populations that do significant amounts of Good amongst each other generally thrive & are resilient (e.g. humans, ants, rats, wolves, cells in any body, many others), even though some individuals may fail or die horribly. Populations which do significant amounts of Evil tend to be less resilient, or destroy themselves (e.g. high-crime areas, cancer cells), even though certain members of those populations may be wildly successful, at least temporarily.
This isn’t even a human-centric model, so it’s not “constructed by society”. It seems to me more likely to be a model that societies have to conform to, in order to exist in a form that is recognizable as a society.
I apologize for being flippant, and thank you for replying, as having to overcome challenges to this helps me figure it out more!
you enjoy peacefully reading a book by yourself, and other people hate this because they hate you and they hate it when you enjoy yourself
The problem with making hypothetical examples is when you make them so unreal as to just be moving words around. Playing music/sound/whatever loud enough to be noise pollution would be similar to the first example. Less severe, but similar. Spreading manure on your lawn so that your entire neighborhood stinks would also be less severe, but similar. But if you’re going to say “reading”, and then have hypothetical people not react to reading the way actual people actually do, then your hypothetical example isn’t going to be meaningful.
As for requiring consciousness, that’s why I was judging actions, not the agents themselves. Agents tend to do both, to some degree.
Oof, be wary of Tim Ferriss, for he is a giant phony. I bought one of his books once, and nearly every single piece of advice in it was a bad generalization from a single study, and all of it was either already well known outside of the book, or ineffective, or just plain wrong. I have had great luck by immediately downgrading the trustworthiness of anything that mentions him, and especially anything that treats him as an authority. I have found the same with NLP. Please don’t join that club.
Tim Ferriss is an utterly amoral agent. His purpose is to fill pages with whatever you will buy, and sell them to you, for money. Beyond that, he does not care, at all. I expect he has read Robert Cialdini’s “Influence”, but only as a guidebook to the most efficient ways to extract money from suckers.
This is just a warning to all readers.