1. What am I missing from church?
(Or, in general, by lacking a religious/spiritual practice I share with others)
For the past few months I’ve been thinking about this question.
I haven’t regularly attended church in over ten years. Given how prevalent it is as part of human existence, and how much I have changed in a decade, it seems like “trying it out” or experimenting is at least somewhat warranted.
I predict that there is a church in my city that is culturally compatible with me.
Compatible means a lot of things, but mostly means that I’m better off with them than without them, and they’re better off with me than without me.
Unpacking that probably will get into a bunch of specifics about beliefs, epistemics, and related topics—which seem pretty germane to rationality.
2. John Vervaeke’s Awakening from the Meaning Crisis is bizarrely excellent.
I don’t quite have handles for everything it is, or exactly why I like it so much, but I’ll try to do it some justice.
It feels like rationality / cognitive tech, in that it cuts at the root of how we think and how we think about how we think.
(I’m less than 20% through the series, but I expect it continues in the way it has been going.)
Maybe it’s partially his speaking style, and partially the topics and discussion, but it reminded me strongly of sermons from childhood.
In particular: they have a timeless quality to them. By “timeless” I mean I think I would take away different lessons from them if I saw them at different points in my life.
In my work & research (and in communicating it), I’ve largely strived to be clear and concise. Designing for layered meaning seems antithetical to clarity.
However I think this “timelessness” is a missing nutrient to me, and has me interested in seeking it out elsewhere.
For the time being I at least have a bunch more lectures in the series to go!
Can LessWrong pull another “crypto” with Illinois?

I have been following the issue with the US state Illinois’ debt with growing horror.

Their bond status has been heavily downgraded. Most states’ bonds are rated “high quality” by the ratings agencies (Moody’s, Standard & Poor’s, Fitch), while Illinois’ are “low quality”. If they get downgraded more they become “junk” bonds, and Illinois loses access to many of the institutional buyers that would otherwise continue to lend.

COVID has increased many states’ costs, for reasons I can go into later, so it seems reasonable to think we’re much closer to a tipping point than we were last year.

As much as I would like to work to make the situation better, I don’t know what to do. In the meantime I’m left thinking about how to “bet my beliefs” and how one could stake a position against Illinois.

Separately, I want to look more into EU debt / restructuring / etc., as it’s probably a good historical example of how this could go. Additionally, the largest entity previously to go bankrupt in the USA was the city of Detroit, which is probably another good example to learn from.
COVID has increased many states’ costs, for reasons I can go into later, so it seems reasonable to think we’re much closer to a tipping point than we were last year. As much as I would like to work to make the situation better, I don’t know what to do. In the meantime I’m left thinking about how to “bet my beliefs” and how one could stake a position against Illinois.
Is the COVID tipping point consideration making you think that the bonds are actually even worse than the “low quality” rating suggests? (Presumably the low ratings are already baked into the bond prices.)
Looking at this more, I think my uncertainty is resolving towards “No”. Some things:
- It’s hard to bet against the bonds themselves, since we’re unlikely to hold them as individuals
- It’s hard to make money on the “this will experience a sharp decline at an uncertain point in the future” kind of prediction (much easier to do this for the “will go up in price” version, which is just buying/going long)
- It’s not clear anyone was able to time this properly for Detroit, which is the closest analog in many ways
- Precise timing would be difficult, much more so while being far away from the state

I’ll continue to track this just because of my family in the state, though.

Point of data: it was 3 years between Detroit’s bonds hitting “junk” status and the city going bankrupt (in the legal-filing sense), which is useful for my intuitions about the speed of these things.
(Note: this might be difficult to follow. Discussing different ways that different people relate to themselves across time is tricky. Feel free to ask for clarifications.)
I’m reading the paper Against Narrativity, which is a piece of analytic philosophy that examines Narrativity in a few forms:
Psychological Narrativity—the idea that “people see or live or experience their lives as a narrative or story of some sort, or at least as a collection of stories.”
Ethical Narrativity—the normative thesis that “experiencing or conceiving one’s life as a narrative is a good thing; a richly [psychologically] Narrative outlook is essential to a well-lived life, to true or full personhood.”
It also names two kinds of self-experience that it takes to be diametrically opposite:
Diachronic—considers the self as something that was there in the further past, and will be there in the further future
Episodic—does not consider the self as something that was there in the further past and something that will be there in the further future
Wow, these seem pretty confusing. It sounds a lot like they just disagree on the definition of the word “self”. I think there is more to it than that; some weak evidence being that I discussed this concept at length with a friend (Diachronic) who had a very different take on Narrativity than myself (Episodic).
I’ll try to sketch what I think “self” means. For almost all nontrivial cognition, it seems like intelligent agents have separate concepts of (or the concept of a separation between) the “agent” and the “environment”. In Vervaeke’s work this is called the Agent-Arena Relationship.
You might say “my body is my self and the rest is the environment,” but is that really how you think of the distinction? Do you not see the clothes you’re currently wearing as part of your “agent”? Tools come to mind as similar extensions of our self. If I’m raking leaves for a long time, I start to sense the agent as the whole “person + rake” system, rather than as a person whose environment includes a rake that is being held.
(In general I think there’s something interesting here in proto-human history about how tool use interacts with our concept of self, and our ability to quickly adapt to thinking of a tool as part of our ‘self’ as a critical proto-cognitive-skill.)
Getting back to Diachronic/Episodic: I think one of the things that’s going on in this divide is that this felt sense of “self” extends forwards and backwards in time differently.
I often feel very uncertain in my understanding or prediction of the moral and ethical natures of my decisions and actions. This probably needs a whole lot more writing on its own, but I’ll sum it up as two ideas having a disproportionate effect on me:
The veil of ignorance, a thought experiment that leads people to favor policies that support populations more broadly (skipping a lot of detail and my thoughts on it for now).
The categorical imperative, which I’ll reduce here to the principle of universalizability—a policy for actions given context is moral if it is one you would endorse universalizing. (This is huge and complex, and there are a lot of finicky details in how context is defined, etc.; skipping that for now.)
Both of these prompt me to take the perspective of someone else, potentially everyone else, in reasoning through my decisions. I think the way I relate to them is very Non-Narrative/Episodic in nature.
(Separately, as I think more about the development of early cognition, the more the ability to take the perspective of someone else seems like a magical superpower)
I think they are not fundamentally or necessarily Non-Narrative/Episodic—I can imagine both of them being considered by someone who is Strongly Narrative and even them imagining a world consisting of a mixture of Diachronic/Episodic/etc.
Priors are hard. Relatedly, choosing between similar explanations of the same evidence is hard.
I really like the concept of the Solomonoff prior, even if the math of it doesn’t apply directly here. Instead I’ll take away just this piece of it:
“Prefer explanations/policies that are simpler-to-execute programs”
A program may be simpler if it has fewer inputs, or fewer outputs. It might be simpler if it requires less memory or less processing.
This works well for choosing policies that are easier to implement or execute, especially as a person with bounded memory/processing/etc.
A simplifying assumption that works very well for dynamic systems is the Markov property.
This property states that all of the information in the system is present in the current state of the system.
One way to look at this is in imagining a bunch of atoms in a moment of time—all of the information in the system is contained in the current positions and velocities of the atoms. (We can ignore or forget all of the trajectories that individual atoms took to get to their current locations)
In practice we usually apply this to systems where it isn’t literally true but is close enough for practical purposes, combined with stuffing some extra information into what “present” means.
(For example, we might define the “present” state of a natural system to include “the past two days of observations”—this still has the Markov property, because this information is finite and fixed as the system proceeds dynamically into the future.)
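A minimal sketch of that last move, using a hypothetical toy process: the next value depends on the last *two* observations, so the single latest observation is not a Markov state—but a fixed, finite window of recent observations is.

```python
# Toy illustration (hypothetical process, not from any real dataset):
# the next value depends on the two most recent observations, so the
# raw latest observation alone is not a Markov state. Defining the
# "present" as a fixed-size window of recent observations restores
# the Markov property.

def step(history):
    """Next value depends on the two most recent observations."""
    return 0.5 * history[-1] + 0.3 * history[-2]

def augmented_state(history, window=2):
    """The Markov state: a fixed, finite window of recent observations."""
    return tuple(history[-window:])

history = [1.0, 2.0]
for _ in range(3):
    history.append(step(history))

# Knowing only the augmented state is enough to predict the next value;
# the rest of the trajectory can be forgotten.
state = augmented_state(history)
next_from_state = 0.5 * state[-1] + 0.3 * state[-2]
assert next_from_state == step(history)
```

The window stays the same size as the system runs forward, which is what keeps the augmented state "finite and fixed" in the sense above.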
I think that these pieces, when assembled, steer me towards becoming Episodic.
When choosing between policies that have the same actions, I prefer the policies that are simpler. (This feels related to the process of distilling principles.)
When considering good policies, I think I consider strongly those policies that I would endorse many people enact. This is aided by these policies being simpler to imagine.
Policies that are not path-dependent (for example, take into account fewer things in a person’s past) are simpler, and therefore easier to imagine.
Path-independent policies are more Episodic, in that they don’t rely heavily on a person’s place in their current Narratives.
I don’t know what to do with all of this.
I think one thing that’s going on is self-fulfilling—where I don’t strongly experience psychological Narratives, and therefore it’s more complex for me to simulate people who do experience this, which via the above mechanism leads to me choosing Episodic policies.
I don’t strongly want to recruit everyone to this method of reasoning. It is an admitted irony of this system that I don’t wish for everyone to use the same mechanism of reasoning as me—maybe just let that signal how uncertain I feel about my whole ability to come to philosophical conclusions on my own.
I expect to write more about this stuff in the near future, including experiments I’ve been doing in my writing to try to move my experience in the Diachronic direction. I’d be happy to hear comments for what folks are interested in.
When choosing between policies that have the same actions, I prefer the policies that are simpler.
Could you elaborate on this? I feel like there’s a tension between “which policy is computationally simpler for me to execute in the moment?” and “which policy is more easily predicted by the agents around me?”, and it’s not obvious which one you should be optimizing for. [Like, predictions about other diachronic people seem more durable / easier to make, and so are easier to calculate and plan around.] Or maybe the ‘simple’ approaches for one metric are generally simple on the other metric.
My feeling is that I don’t have a strong difference between them. In general simpler policies are both easier to execute in the moment and also easier for others to simulate.
The clearest version of this is to, when faced with a decision, decide on an existing principle to apply before acting, or else define a new principle and act on this.
Principles are examples of short policies, which are largely path-independent, which are non-narrative, which are easy to execute, and are straightforward to communicate and be simulated by others.
Thinking more about the singleton risk / global stable totalitarian government risk from Bostrom’s Superintelligence, human factors, and theory of the firm.
Human factors represent human capacities or limits that are unlikely to change in the short term. For example, the number of people one can “know” (for some definition of that term), limits to long-term and working memory, etc.
Theory of the firm tries to answer “why are economies markets but businesses autocracies” and related questions. I’m interested in the subquestion of “what factors give the upper bound on coordination for a single business”, related to “how big can a business be”.
I think this is related to “how big can an autocracy (robustly/stably) be”, which is how it relates to the singleton risk.
Some thoughts this produces for me:
Communication and coordination technology (telephones, email, etc) that increase the upper bounds of coordination for businesses ALSO increase the upper bound on coordination for autocracies/singletons
My belief is that the current max size (in people) of a singleton is much lower than current global population
This weakly suggests that a large global population is a good preventative for a singleton
I don’t think this means we can “war of the cradle” our way out of singleton risk, given how fast tech moves and how slow population moves
I think this does mean that any non-extinction event that dramatically reduces population also dramatically increases singleton risk
I think that it’s possible to get a long-term government aligned with the values of the governed, and “singleton risk” is the risk of an unaligned global government
So I think I’d be interested in tracking two “competing” technologies (for a hand-wavy definition of the term)
communication and coordination technologies—tools which increase the maximum effective size of coordination
soft/human alignment technologies—tools which increase alignment between government and governed
Did Bostrom ever call it singleton risk? My understanding is that it’s not clear that a singleton is more of an x-risk than its negation: a liberal multipolar situation in which many kinds of defecting/crony factions can continuously arise.
I don’t know if he used that phrasing, but he’s definitely talked about the risks (and advantages) posed by singletons.
Future City Idea: an interface for safe AI control of traffic lights

We want a traffic light that:
* Can function autonomously if there is no network connection
* Meets some minimum timing guidelines (for example, green in a particular direction no less than 15 seconds and no more than 30 seconds, etc.)
* Has a secure interface to communicate with city-central control
* Has sensors that allow some feedback for measuring traffic efficiency or throughput

This gives constraints, and I bet an AI system could be trained to optimize efficiency or throughput within the constraints. Additionally, you can narrow the constraints (for example, only choosing 15 or 16 seconds for green) and slowly widen them in order to change flows gradually.

This is the sort of thing Hash would be great for, simulation-wise. There are probably dedicated traffic simulators as well.

At something like a quarter million dollars per traffic light, I think there’s an opportunity here for a startup.

(I don’t know Matt Gentzel’s LW handle, but credit for inspiration to him.)
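A minimal sketch of the safe-interface idea, with hypothetical names and numbers: whatever green duration the AI controller proposes gets clamped to hard guideline bounds, and the allowed window can itself start narrow and widen gradually.

```python
# Hypothetical sketch (names and numbers are illustrative, not a real
# traffic-control API): AI proposals are clamped to a soft window that
# starts narrow and widens slowly toward the hard guideline bounds.

def clamp(value, low, high):
    return max(low, min(high, value))

class SafeGreenInterval:
    """Enforces min/max green-light durations, in seconds."""

    def __init__(self, hard_min=15, hard_max=30):
        self.hard_min = hard_min
        self.hard_max = hard_max
        # Start with a narrow window (e.g. only 15-16 seconds allowed).
        self.soft_min = hard_min
        self.soft_max = hard_min + 1

    def widen(self, step=1):
        """Gradually relax the soft bound toward the hard guideline."""
        self.soft_max = min(self.soft_max + step, self.hard_max)

    def apply(self, proposed_seconds):
        """Clamp an AI-proposed duration into the currently allowed window."""
        return clamp(proposed_seconds, self.soft_min, self.soft_max)

interval = SafeGreenInterval()
print(interval.apply(45))   # clamped to the narrow initial window: 16
interval.widen(step=5)
print(interval.apply(45))   # still capped by the widened soft bound: 21
```

The point of the design is that the clamp runs outside the AI system, so the timing guarantees hold no matter what the optimizer proposes—and the slow widening is what lets flows change gradually.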
I expect that the functioning of traffic lights is regulated in a way that makes it hard for a startup to deploy such a system.