Epistemic status: Neither unique nor surprising, but something I felt like idly cataloguing.
An interesting example of statistical illiteracy in the field: This complaint thread about the shuffling algorithm on Magic: the Gathering Arena, a digital version of the card game. Thousands of unique players seem to be represented here.
MTG players who want to win games have a strong incentive to understand basic statistics. Players like Frank Karsten have been working for years to explain the math behind good deckbuilding. And yet, the “rigged shuffler” is a persistent belief even among reasonably engaged players; I’ve seen quite a few people try to promote it on my stream, which is not at all aimed at beginners.
(The shuffler is, of course, appropriately random, save for some “hand smoothing” in best-of-one matches to increase the chance of a “normal” draw.)
A few quotes from the thread:
How is that no matter how many people are playing the game, or how strong your deck is, or how great your skill level, I bet your winning percentage is 30% or less. This defies the laws of probability.
(No one ever seems to think the shuffler is rigged in their favor.)
As I mentioned in a prior post you never see these problems when they broadcast a live tournament.
(People who play in live tournaments are much better at deckbuilding, leading to fewer bad draws. Still, one recent major tournament was infamously decided by a player’s atrocious draw in the last game of the finals.)
In the real world, land draw will not happens as frequent as every turns for 3 times or more. Or less than 2 to 3 turns, not drawing a land
(Many people have only played MTG as a paper game when they come to Arena. In paper, it’s very common for people to “cheat” when shuffling by sorting their initial deck in a particular way, even with innocuous intent. When people are exposed to true randomness, they often can’t tolerate it.)
Other common conspiracy theories about Arena:
“Rigged matchmaking” (the idea that the developers somehow know which decks will be good against your deck, and ensure that you are matched up against it; again, I never see this theory in reverse)
“Poker hands” (the idea that people get multiple copies of a card more often than would be expected)
“50% bias” (the idea that the game arranges good/bad draws to keep players at a 50% win rate; admirably, these players recognize that they do draw well sometimes, but they don’t understand what it means to be in the middle of a binomial distribution)
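For intuition on that last point, here is a minimal simulation sketch (a true 50% win rate and a 300-game sample are assumed purely for illustration) of how long a losing streak a perfectly fair system still produces:

```python
import random

# Minimal sketch: how long a losing streak should a player with a genuine 50%
# win rate expect over 300 games? (All numbers here are illustrative.)
def longest_losing_streak(n_games=300, p_win=0.5):
    longest = current = 0
    for _ in range(n_games):
        if random.random() < p_win:
            current = 0
        else:
            current += 1
            longest = max(longest, current)
    return longest

samples = [longest_losing_streak() for _ in range(1000)]
print(sum(samples) / len(samples))  # typically around 7-8 consecutive losses
```

Streaks like that feel rigged from the inside, but they are exactly what sitting in the middle of a binomial distribution looks like.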
Players of Battle for Wesnoth often accuse the random number generator of being broken, e.g. when their unit has 3 attacks, each with an independent 70% chance to hit, and all three attacks happen to miss. But the chance of that happening is actually 2.7%, and if a level takes twenty or more turns, with several units attacking each turn, it is likely to happen several times per level.
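A quick back-of-the-envelope check of that comment (only the 70%-per-attack figure comes from it; the turn and unit counts below are assumptions for illustration):

```python
# Chance that a unit with three independent 70%-to-hit attacks misses all three,
# and the chance of seeing at least one such triple-miss over a level, assuming
# roughly 20 turns with 5 such attack sequences per turn.
p_triple_miss = 0.3 ** 3
attack_sequences = 20 * 5
p_at_least_once = 1 - (1 - p_triple_miss) ** attack_sequences
print(p_triple_miss)    # 0.027, i.e. 2.7%
print(p_at_least_once)  # ~0.93, so "impossible" whiffs are nearly guaranteed
```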
2021 Week 2 Review 10 Jan − 16 Jan: Co-Working FTW! Also...Stop Signalling, WTB Quantitative Data and Reasoning!
I recall enjoying last week, but wow did I endorse each day strongly in my shortforms. It is true that last week was substantially better than the week prior to it, and I don’t recall nor have records of bad things or bad feels occurring last week, so I’ll stand by my strong endorsements of the days last week.
Thus, what an excellent week!
Re: Stop signalling:
“I spent most of it going through things, throwing away, organizing, sorting, and packing said things depending on what they were, and got a lot done in preparation for moving because of that. I’m looking forward to finishing up my resume tomorrow and getting feedback on it then finishing up my profile on the job sites I made an account on.” (from 15 Jan shortform)
Without quantitative data backing up what was said above, I effectively signalled for that entire shortform and the truth of that day was obscured. Really I only did 2-3 hours (memory estimation, don’t trust it much) of work and spent the rest of the day occupied with watching television shows or being social. It’s okay to have a lazy day from time to time, but this wasn’t a designated rest or “sabbath” day, plus, I obscured the truth by signalling (one could also say “employing rhetoric”, potentially). I’ve done a bit of signalling in other posts throughout the shortforms, but this one was the worst yet.
Why I care so much about not signalling, being truthful, being quantitative:
These shortforms are not just me howling into the void about my life; I’m trying to improve myself and my life! I write these shortforms so that I have data from each day to reflect on and to use in aiding memory / recall, am somewhat publicly accountable, and keep track of what my explicit goals are and how well I do at making progress toward or achieving them. I need more data (and accurate data!) about my own life and actions so that I can become a more effective person! Signalling and obscuring the truth are antithetical to what I’m trying to do and who I want to be, so I’ll stop that nonsense immediately. Any remaining signalling will be the kind of noise and signalling you either can’t get rid of because we’re tribal animals, us human beings, or will be based on truthful quantitative data and thus an endorsement of particular actions. In short, be truthful and quantitative or be square AND unhelpful to one’s own self.
How I will be obtaining more (and more accurate) data about my actions and my life:
I don’t have an internal clock that’s consciously accessible to me, I rarely experience the feeling of time passing, and I am notoriously bad at noticing how much time it takes me to do something, the passage of time itself, and timeliness.
Using gtimelog on my desktop lets me keep an accurate time log of what I do and how long it takes me to do it. Downside: everything must be manually entered. Upside: it’s really simple and easy to use, plus I’ve gotten good at both remembering to enter things and at using the software itself. Intervention: create twice-daily repeating reminders on my phone that say “Did you do the time log? Go do the time log” and occur in the early afternoon and late evening.
Wearing a watch helps me observe the time when I’m up and moving around, so I’ll commit to wearing my watch more except on rest / sabbath days. My watch has stopwatch, timer, and alarm apps so I can use those to aid in time-related things as well. I’ll check its app store to see if there’s a decent time-log app as well.
I will experiment next week with setting alarms at different time intervals (e.g. I’ll try setting a “hey check the time” alarm to go off once every 2 hours initially) to see if that helps me be more cognizant of both time passing plus the actions I’m taking or not taking during that time.
I will take some time this next week to search for time log apps that work with the platforms I use (Linux [primary], Firefox, macOS, and iOS mostly) and see what options are out there.
I will use a notes app on my phone and carry around a notepad so that I can jot down whatever it is I’m working on or doing at any given moment.
I’ll find a calorie counting app and actually use the damn thing. I’ve always found doing this particularly tedious and annoying, but there’s no getting around calorie counting if I want to be effective at accomplishing my weight loss goals.
In addition to the above methods, I will be actively searching for more options that help with this endeavour and try to quantify even more parts of my life. Any suggestions?
Last week I started virtually co-working and did 3 or 4 sessions, for about 6 hours in total. I’m pushing for 10 hours of virtual co-working next week, time permitting (I am in the process of packing and getting ready to move, so...things might become real chaotic real fast). I’ll establish regularly scheduled sessions that repeat, which should help with consistency over the long term.
My main goals for the week:
Look for software dev/eng jobs, preferably fully remote
Practise coding every day; in particular, practise algorithms, architecture-building, and data structures.
Pack and get ready for moving
Get vaccinated
Continue doing the several things I’ve either been doing since late December or have recently identified as good for me.
writing shortforms, weekly reviews, etc.
exercising daily; focus on strength training over cardio now, but still do some cardio
be virtually social each day
virtual co-working!
calorie counting
time logging
Here’s to another great week!
What are you working on and trying to accomplish?
I listened to Wlad Roerich’s Background Mode 0.1 while writing this. It took me 60 minutes (1 hour) to write this weekly review and then publish it.
An Alignment Paradox: Experience from firms shows that higher levels of delegation work better (a high level meaning fewer constraints for the agent). This is also very common practical advice for managers. I have also received this advice myself and seen it work in practice. There is even a management card game for it: Delegation Poker. This seems to be especially true in more unpredictable environments. Given that we have intelligent agents, giving them higher degrees of freedom seems to imply more ways to cheat, defect, or ‘escape’. Even more so in environments that can be controlled to lesser degrees. How can that be true? What is making this work, and can some underlying principle be found that would allow this to be applied to AI?
Most people are naturally pro-social. (No, this can’t be applied to AI.) Given a task, they will try to do it well, especially if they feel like their results are noticed and appreciated.
A cynical hypothesis is that most of the things managers do are actively harmful to the project; they are interfering with the employees trying to do their work. The less the manager does, the better the chances of the project. “Delegation” is simply when the manager stops actively hurting the project and allows others to do their best.
The reason for this is that most of the time, there is no actually useful work for the manager. The sane thing would be to simply sit down and relax, and wait for another opportunity for useful intervention to arise. Unfortunately, this is not an option, because doing this would most likely get the manager fired. Therefore managers create bullshit work for themselves. Unfortunately, by the nature of their work, this implies creating bullshit work for others. In addition to this, we have the corrupted human hardware, with some managers enjoying power trips and/or believing they know everything better than people below them in the hierarchy.
When you create a manager role in your company, it easily becomes a lost purpose after the original problems are solved but the manager wants to keep their job.
I don’t like cynical views, and while I have encountered politics and seen such cases, I don’t think that paints a realistic picture. But I will run with your cynical view, and you won’t like it ;-)
So we have these egotistical managers who only want to keep their jobs and rise in the ranks. Much closer to non-social AI, right? How come more delegation works better for them too?
Mind you, I might be wrong and it works less and less the further up you go. It might be that you are right and this works only because people have enough social behavior hard-wired that makes delegation work.
But I have another theory: Limited processing capacity + Peter Principle.
It makes sense to delegate more—especially in unpredictable environments—because that reduces your processing load of dealing with all the challenging tasks and moves it to your subordinates. This leaves less capacity for them to scheme against you and gives you the capacity to scheme against your superior. And so on up the chain. Capable subordinates who can deal with all the stuff you throw at them have to be promoted, so they have more work to do until they reach capacity too. So sometimes the smart move is to refuse promotion :-)
I guess we agree that limited processing capacity means that interfering with the work of your underlings—assuming they are competent and spending enough of their processing capacity on their tasks—is probably a bad move. It means taking the decision away from the person who spends 8 hours a day thinking about the problem, and assigning it to a person who spent 30 seconds matching the situation to the nearest cliche, because that’s all they had time for between the meetings.
This might work if the person is such a great expert that their 30 seconds are still extremely valuable. That certainly is possible; someone with lots of experience might immediately recognize a frequently-made mistake. It is also the kind of assumption that Dunning and Kruger would enjoy researching.
I might be wrong and it works less and less the further up you go
That would make sense. When you are a lowest-level manager, if you stop interfering, it allows the people at the bottom to focus on their object-level tasks. But if you are a higher-level manager, how you interact with the managers below you does not have a direct impact on the people at the bottom. Maybe you manage your underlings less, and they copy your example and give more freedom to the people at the bottom… or maybe you just gave them more time to interfere.
So sometimes the smart move is to refuse promotion
So you have more time to scheme… but you have to stay low in the pyramid. Not sure what you scheme about then. (Trying to get to the top in one huge jump? Sounds unlikely.)
I was a team leader twice. The first time it happened by accident. There was a team leader, three developers (me one of them), and a small project was specified. On the first day, something very urgent happened (I don’t remember what), the supposed leader was re-assigned to something else, and we three were left without supervision for an unspecified time period. Being the oldest and most experienced person in the room, I took the initiative and asked: “so, guys, as I see it, we use an existing database, so what needs to be done is: back-end code, front-end code, and some stylesheets; does anyone have a preference which part he would like to do?” And luckily, each of us wanted to do a different part. So the work was split, we agreed on mutual interfaces, and everyone did his part. It was a nice and relaxed environment: everyone working alone at their own speed, debating work only as needed, and having some friendly work-unrelated chat during breaks.
In three months we had the project completed; everyone was surprised. The company management had assumed that we would only “warm up” during those three months, and that when the original leader returned, he would lead us to the glorious results. (In a parallel Everett branch, where he returned shortly before we finished the product, I wonder whether he got a bonus and promotion.) Then everything returned to normal: more micromanagement, lower productivity, people burned out.
The second time, we were a small group working together for some time already. Then our manager quit. No one knew who would get the role next, and in an attempt to deflect a possible danger, I volunteered to do it on top of my usual work. What happened was that everyone worked exactly the same as they did before, only without the interruptions and extra stress caused by management, and I got some extra paperwork which I gradually reduced to a minimum. The work progressed so well—no problems, no complaints from users, the few tickets we got almost always turned out to be a problem outside our project—that higher management concluded that there was apparently too little work to do on our project, so the team members were assigned to also work on extra projects in parallel.
Perhaps my short experience is not representative, but it suggests that a manager, merely by not existing, could already create a top-decile work environment in terms of both work satisfaction and productivity. The recommended mantra to recite every day is: “first, do no harm”. My experience also suggests that this approach will ultimately get punished, despite the increased productivity: the expected outcome is more work for no pay raise until you break, or just being told to return to the old ways without any explanation why. I assume I am missing some crucial maze-navigating skills; for someone trying to be a professional manager this would be fatal; luckily I do not have this ambition.
It is quite possible that this approach only works when there is a good team: in both cases I worked with people who were nice above average. If you had a dominant asshole in the team, this could easily become a disaster: the power vacuum left by a passive manager would simply be replaced by an ambitious alpha male, who would probably soon be promoted into the role of formal leader. So perhaps the companies play it safe by using a widely applicable strategy that happens to be inferior in the case of good employees who also happen to be good people; quite likely this is because the companies simply cannot recognize such people.
Is there a leadership level beyond this? Sure, but in my quarter century of career I have only met such a manager once. What he did was basically meet each of his people once a day in the morning (this was long before I heard about “daily standups” and such) and talk with them for 5 or 10 minutes; with each team member separately, in the manager’s room. He asked the usual questions “what did you do yesterday?”, “what is your plan for today?”, “are there any obstacles to your work?”, but there was zero judgment, even if you said things like “yesterday I had a really bad day, I tried some things but at the end it was wrong and I had to throw it all away, so today I am starting from scratch again”; essentially he treated you like an adult person and assumed that whatever you did, there was a good reason for it. Before and after the report, a very short small talk; it helped that he was extremely intelligent and charismatic, so for many people this was the best part of the day. Also, the obstacles to your work that you mentioned: he actually did something about them during the day, and always reported the outcome to you the next morning. In short, for the first and last time so far in my career, I had a regular feeling that someone listens to me and cares about what I do (as opposed to just whipping me to run faster in order to meet arbitrary deadlines, randomly interrupting me for no good reason, second-guessing my expert opinion, etc.).
So yes, there is a level beyond “not doing harm” and it is called “actually motivating and helping”, but I guess most managers dramatically overestimate their ability to do it… and when they try regardless, and ignore the feedback, they actively do harm.
Thank you a lot. Your detailed account really helps me understand your perspective much better now. I can relate to your experience in teams where micromanagement slows things down and prevents actually relevant solutions. I have been in such teams. I can also relate to it being advantageous when a leader of questionable value is absent. I have been in such a team too—though it didn’t have such big advantages as in your case. That was mostly because this team was part of a bigger organization and platform where multiple teams had to work together to get something done, e.g. agree on interfaces with other teams. And in the absence of clear joint goals that didn’t happen. Now you could argue that then the management one level up was not doing its job well, and I agree. But the absence of that management wouldn’t have helped either—it could have led to a) each team trying to solve some part of the problem, b) some people from both teams getting together and agreeing on interfaces and joint goals, or c) the teams agreeing on some coordination between both teams. a) in most cases leads to some degree of chaos and failure, b) establishes some kind of leadership on the team level (like you did in your first example), and c) results over time in some leadership one level up. I’d argue that some kind of coordination structure is needed.

Where did the project you implemented in your first case come from? Somebody figured out that it would provide value to the company. Otherwise, you might have built a beautiful project that didn’t actually provide value. I think we agree that the company you worked in did have some management that provided value (I hope it was no moral maze). And I agree that a lot of managers do not add value and sometimes decrease it. On the other hand, I have worked for great team leads and professional managers: people who would listen, let us make our own decisions, give clear goals but also limits, help, and reduce impediments. This is really not a secret art. The principles are well-known (for a funny summary see e.g. Leadersheep). But it turns out that building a big organization is hard. Politics is real and professional management is still mostly a craft. It rarely approaches something you can call engineering, much less hard science. And I am looking for that. That’s part of why I wrote this shortform on processes and roles. Everybody is just cooking with water, and actual organization structures often leave something to be desired. I guess that’s why we sometimes see extraordinary companies like Amazon that hit on a sweet spot. But by talent or luck, not by science. And the others have to make do with inadequate solutions, including the managers, of whom you maybe saw more than I did.
this team was part of a bigger organization and platform where multiple teams had to work together to get something done, e.g. agree on interfaces with other teams. And in the absence of clear joint goals that didn’t happen.
I have seen this happen also in a small team. Two or three guys each started building his own part independently, then it turned out those parts could not be put together; each of them insisted that the others change their code to fit his API, and refused to make the smallest change in his own API. It became a status fight that took a few days. (I don’t remember how it was resolved.)
In another company, there was a department that took care of everyone’s servers. Our test server crashed almost every day and had to be restarted manually; we had to file a ticket and wait (if it was after 4PM, the server was restarted only the next morning) because we did not have the permission to reset the server ourselves. It was driving us crazy; we had a dedicated team of testers, and half of the time they were just waiting for the server to be restarted; then the week before delivery we all worked overtime… that is, until the moment the server crashed again, then we filed the ticket and went home. We begged our manager to let us pool two hundred bucks and buy a notebook that we could turn into an alternative testing environment under our control, but of course that would be completely against company policy. Their manager refused to do anything about it; from their perspective, it meant that every day they had one support ticket successfully closed by merely clicking a button; a wonderful metric! From the perspective of our manager’s manager, it was one word against another, one coming from the team with great metrics and therefore more trustworthy. (The situation never got solved, as far as I know.)
...I should probably write a book one day. Except that no one would ever hire me afterwards. So maybe after I get retired...
So, yes, there are situations that need to be solved by a greater power. In the long term it might even make sense to fire a few people, but the problem is that these often seem to be the most productive ones, because other people are slowed down by the problems they cause.
Where did the project you implemented in your first case come from? Somebody figured out that it would provide value to the company. Otherwise, you might have built a beautiful project that didn’t actually provide value. I think we agree that the company you worked in did have some management that provided value (I hope it was no moral maze).
Yeah, but we have two different meanings of the word “management” here. Someone who decides which project to do—this is useful and necessary. Or someone who interrupts you every day while you are trying to work on that project—I can imagine that in some teams this may also be necessary, but arguably then your problem is the team you have (at least some parts of it). Motte and bailey of management, sort of.
From an epistemic perspective, I guess the problem is that if you keep micro-managing people all the time, you can never learn whether your activity actually adds or removes value, simply because there is nothing to compare it to. (I guess the usual null hypothesis is “nobody ever does anything”, which of course makes any management seem useful; but is it true?) Looking at the incentives and power relations, the employee at the bottom doesn’t have an opportunity to prove they could work just as well without the micro-management, and the manager doesn’t have an incentive to allow the experiment. There is also the “heads I win, tails you lose” aspect where bad employee performance is interpreted as a necessity for more management, but good employee performance is interpreted as good management, so either way management is perceived as needed.
This is really not a secret art. The principles are well-known (for a funny summary see e.g. Leadersheep).
Yep. That’s a very good summary. Heh, I fail hard at step 1 (creating, or rather communicating a strong vision).
But it turns out that building a big organization is hard. Politics is real and professional management is still mostly a craft. It rarely approaches something you can call engineering, much less hard science.
Seems analogous to the social sciences: in theory, they are much more difficult than math or physics, so it would make sense if smarter people studied them; in practice, it’s the other way round, because if something is too difficult to do properly, it becomes easy to bullshit your way to the top, and intelligent people switch to something where being intelligent gives you a clear comparative advantage.
Good luck to you! I suppose your chances will depend on how much autonomy you get; it is hard to do things right if the sources of the problem are beyond your control. However, if you become a great manager and your people like you, perhaps in the future you can start your own company and give them a call to ask whether they would like to work for you again.
Thank you. I agree with your view. Motte and bailey of management yep. I especially liked this:
Seems analogous to the social sciences: in theory, they are much more difficult than math or physics, so it would make sense if smarter people studied them; in practice, it’s the other way round, because if something is too difficult to do properly, it becomes easy to bullshit your way to the top, and intelligent people switch to something where being intelligent gives you a clear comparative advantage.
I enjoyed today, but it definitely wasn’t a very productive day. I woke up late, then jumped in for an hour of virtual co-working and looked at resume templates / ideas + discussed resume and job site profile strategies. Afterwards I cleaned and organized the house for an hour and a half, followed by showering + getting ready. I drove into town for an early dinner outdoors with three friends, then picked up my Dad from the airport, and have been relaxing since getting home.
I think the best thing to do is go to bed early and wake up tomorrow ready for a new day!
Infra-Bayesianism can be naturally understood as semantics for a certain non-classical logic. This promises an elegant synthesis between deductive/symbolic reasoning and inductive/intuitive reasoning, with several possible applications. Specifically, here we will explain how this can work for higher-order logic. There might be holes and/or redundancies in the precise definitions given here, but I’m quite confident the overall idea is sound.
For simplicity, we will only work with crisp infradistributions, although a lot of this stuff can work for more general types of infradistributions as well. Therefore, □X will denote the space of crisp infradistributions. Given μ∈□X, S(μ)⊆ΔX will denote the corresponding convex set. As opposed to previously, we will include the empty set, i.e. there is ⊥X∈□X s.t. S(⊥X)=∅. Given p∈ΔX and μ∈□X, p:μ will mean p∈S(μ). Given μ,ν∈□X, μ⪯ν will mean S(μ)⊆S(ν).
Syntax
Let Tι denote a set which we interpret as the types of individuals (we allow more than one). We then recursively define the full set of types T by:
0∈T (intended meaning: the uninhabited type)
1∈T (intended meaning: the one element type)
If α∈Tι then α∈T
If α,β∈T then α+β∈T (intended meaning: disjoint union)
If α,β∈T then α×β∈T (intended meaning: Cartesian product)
If α∈T then (α)∈T (intended meaning: predicates with argument of type α)
For each α,β∈T, there is a set F0α→β which we interpret as atomic terms of type α→β. We will denote V0α:=F01→α. Among those we distinguish the logical atomic terms:
idα∈F0α→α
0α∈F00→α
1α∈F0α→1
prαβ∈F0α×β→α
iαβ∈F0α→α+β
There are also symbols we will not list explicitly, corresponding to the algebraic properties of + and × (commutativity, associativity, distributivity and the neutrality of 0 and 1). For example, given α,β∈T there is a “commutator” of type α×β→β×α.
Assume that for each n∈N there is some Dn⊆□[n]: the set of “describable” infradistributions (for example, it can be empty, or consist of all distributions with rational coefficients, or all distributions, or all infradistributions; EDIT: it is probably sufficient to only have the fair coin distribution in D2 in order for it to be possible to approximate all infradistributions on finite sets). If μ∈Dn then ┌μ┐∈V(∑_{i=1}^n 1)
We recursively define the set of all terms Fα→β. We denote Vα:=F1→α.
If f∈F0α→β then f∈Fα→β
If f1∈Fα1→β1 and f2∈Fα2→β2 then f1×f2∈Fα1×α2→β1×β2
If f1∈Fα1→β1 and f2∈Fα2→β2 then f1+f2∈Fα1+α2→β1+β2
If f∈Fα→β then f−1∈F(β)→(α)
If f∈Fα→β and g∈Fβ→γ then g∘f∈Fα→γ
Elements of V(α) are called formulae. Elements of V(1) are called sentences. A subset of V(1) is called a theory.
Semantics
Given T⊆V(1), a model M of T is the following data. To each α∈T, there must correspond some compact Polish space M(α) s.t.:
M(0)=∅
M(1)=pt (the one point space)
M(α+β)=M(α)⊔M(β)
M(α×β)=M(α)×M(β)
M((α))=□M(α)
To each f∈Fα→β, there must correspond a continuous mapping M(f):M(α)→M(β), under the following constraints:
0, 1, id, pr, i, diag and the “algebrators” have to correspond to the obvious mappings.
M(=α)=⊤diagM(α). Here, diagX⊆X×X is the diagonal and ⊤C∈□X is the sharp infradistribution corresponding to the closed set C⊆X.
Consider α∈T and denote X:=M(α). Then, M(()α)=⊤□X⋉id□X. Here, we use the observation that the identity mapping id□X can be regarded as an infrakernel from □X to X.
M(⊥)=⊥pt
M(⊤)=⊤pt
S(M(∧)(μ,ν))=S(μ)∩S(ν)
S(M(∨)(μ,ν)) is the convex hull of S(μ)∪S(ν)
Consider α,β∈T and denote X:=M(α), Y:=M(β) and pr:X×Y→Y the projection mapping. Then, M(∃αβ)(μ)=pr∗μ.
Consider α,β∈T and denote X:=M(α), Y:=M(β) and pr:X×Y→Y the projection mapping. Then, p:M(∀αβ)(μ) iff pr−1∗(p)⊆S(μ).
M(f1×f2)=M(f1)×M(f2)
M(f1+f2)=M(f1)⊔M(f2)
M(f−1)(μ)=M(f)−1(μ). Notice that pullback of infradistributions is always defined thanks to adding ⊥ (the empty set infradistribution).
M(g∘f)=M(g)∘M(f)
M(┌μ┐)=μ
Finally, for each ϕ∈T, we require M(ϕ)=⊤pt.
Semantic Consequence
Given ϕ∈V(1), we say M⊨ϕ when M(ϕ)=⊤pt. We say T⊨ϕ when for any model M of T, M⊨ϕ. It is now interesting to ask what is the computational complexity of deciding T⊨ϕ. Classical 0-th order logic is embedded into infra-Bayesian higher-order logic, so the complexity is at least co-NP. On the other hand, known results about the word problem for lattices suggest that it’s plausible to be polynomial time for infra-Bayesian 0-th order logic. So, it is plausible that semantic consequence in infra-Bayesian higher-order logic has much lower complexity than its classical counterpart (the latter is not even recursively enumerable).
Applications
As usual, let A be a finite set of actions and O be a finite set of observations. Require that for each o∈O there is σo∈Tι which we interpret as the type of states producing observation o. Denote σ∗:=∑o∈Oσo (the type of all states). Moreover, require that our language has the nonlogical symbols s0∈V0(σ∗) (the initial state) and, for each a∈A, Ka∈F0σ∗→(σ∗) (the transition kernel). Then, every model defines a (pseudocausal) infra-POMDP. This way we can use symbolic expressions to define infra-Bayesian RL hypotheses. It is then tempting to study the control theoretic and learning theoretic properties of those hypotheses. Moreover, it is natural to introduce a prior which weights those hypotheses by length, analogous to the Solomonoff prior. This leads to some sort of bounded infra-Bayesian algorithmic information theory and a bounded infra-Bayesian analogue of AIXI.
When using infra-Bayesian logic to define a simplicity prior, it is natural to use “axiom circuits” rather than plain formulae. That is, when we write the axioms defining our hypothesis, we are allowed to introduce “shorthand” symbols for repeating terms. This doesn’t affect the expressiveness, but it does affect the description length. Indeed, eliminating all the shorthand symbols can increase the length exponentially.
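A toy illustration of that blow-up (this is not the post’s formalism, just nested shorthand definitions of the form x_k = f(x_{k−1}, x_{k−1})):

```python
# Toy illustration: a "circuit" with k shared shorthand definitions versus the
# fully inlined formula. Inlining doubles the term at each level, so the plain
# formula grows exponentially in the number of shorthands.
def inlined(k):
    term = "x0"
    for _ in range(k):
        term = f"f({term},{term})"
    return term

for k in (2, 4, 8, 16):
    print(k, "shorthands ->", len(inlined(k)), "characters when inlined")
```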
Instead of introducing all the “algebrator” logical symbols, we can define T as the quotient by the equivalence relation defined by the algebraic laws. We then need only two extra logical atomic terms:
For any n∈N and σ∈Sn (permutation), denote n:=∑_{i=1}^n 1 and require σ+∈Fn→n
For any n∈N and σ∈Sn, σ×α∈Fαn→αn
However, if we do this then it’s not clear whether deciding that an expression is a well-formed term can be done in polynomial time. Because, to check that the types match, we need to test the identity of algebraic expressions and opening all parentheses might result in something exponentially long.
In the anthropic trilemma, Yudkowsky writes about the thorny problem of understanding subjective probability in a setting where copying and modifying minds is possible. Here, I will argue that infra-Bayesianism (IB) leads to the solution.
Consider a population of robots, each of which is a regular RL agent. The environment produces the observations of the robots, but can also make copies or delete portions of their memories. If we consider a random robot sampled from the population, the history they observed will be biased compared to the “physical” baseline. Indeed, suppose that a particular observation c has the property that every time a robot makes it, 10 copies of them are created in the next moment. Then, a random robot will have c much more often in their history than the physical frequency with which c is encountered, due to the resulting “selection bias”. We call this setting “anthropic RL” (ARL).
The original motivation for IB was non-realizability. But, in ARL, Bayesianism runs into issues even when the environment is realizable from the “physical” perspective. For example, we can consider an “anthropic MDP” (AMDP). An AMDP has finite sets of actions (A) and states (S), and a transition kernel T:A×S→Δ(S∗). The output is a string of states instead of a single state, because many copies of the agent might be instantiated on the next round, each with their own state. In general, there will be no single Bayesian hypothesis that captures the distribution over histories that the average robot sees at any given moment of time (at any given moment of time we sample a robot out of the population and look at their history). This is because the distributions at different moments of time are mutually inconsistent.
[EDIT: Actually, given that we don’t care about the order of robots, the signature of the transition kernel should be T:A×S→Δ(N^S)]
The consistency that is violated is exactly the causality property of environments. Luckily, we know how to deal with acausality: using the IB causal-acausal correspondence! The result can be described as follows: Murphy chooses a time moment n∈N and guesses the robot policy π until time n. Then, a simulation of the dynamics of (π,T) is performed until time n, and a single history is sampled from the resulting population. Finally, the observations of the chosen history unfold in reality. If the agent chooses an action different from what is prescribed, Nirvana results. Nirvana also happens after time n (we assume Nirvana reward 1 rather than ∞).
This IB hypothesis is consistent with what the average robot sees at any given moment of time. Therefore, the average robot will learn this hypothesis (assuming learnability). This means that for n≫1/(1−γ)≫0, the population of robots at time n has expected average utility with a lower bound close to the optimum for this hypothesis. I think that for an AMDP this should equal the optimum expected average utility you can possibly get, but it would be interesting to verify.
Curiously, the same conclusions should hold if we do a weighted average over the population, with any fixed method of weighting. Therefore, the posterior of the average robot behaves adaptively depending on which sense of “average” you use. So, your epistemology doesn’t have to fix a particular method of counting minds. Instead different counting methods are just different “frames of reference” through which to look, and you can be simultaneously rational in all of them.
Could you expand a little on why you say that no Bayesian hypothesis captures the distribution over robot-histories at different times? It seems like you can unroll an AMDP into a “memory MDP” that puts memory information of the robot into the state, thus allowing Bayesian calculation of the distribution over states in the memory MDP to capture history information in the AMDP.
I’m not sure what you mean by that “unrolling”. Can you write a mathematical definition?
Let’s consider a simple example. There are two states: s0 and s1. There is just one action so we can ignore it. s0 is the initial state. An s0 robot transitions into an s1 robot. An s1 robot transitions into an s0 robot and an s1 robot. What will our population look like?
0th step: all robots remember s0
1st step: all robots remember s0s1
2nd step: 1⁄2 of robots remember s0s1s0 and 1⁄2 of robots remember s0s1s1
3rd step: 1⁄3 of robots remember s0s1s0s1, 1⁄3 of robots remember s0s1s1s0, and 1⁄3 of robots remember s0s1s1s1
There is no Bayesian hypothesis a robot can have that gives correct predictions both for step 2 and step 3. Indeed, to be consistent with step 2 we must have Pr[s0s1s0]=1/2 and Pr[s0s1s1]=1/2. But, to be consistent with step 3 we must have Pr[s0s1s0]=1/3, Pr[s0s1s1]=2/3.
In other words, there is no Bayesian hypothesis s.t. we can guarantee that a randomly sampled robot on a sufficiently late time step will have learned this hypothesis with high probability. The apparent transition probabilities keep shifting s.t. it might always continue to seem that the world is complicated enough to prevent our robot from having learned it already.
Or, at least it’s not obvious there is such a hypothesis. In this example, Pr[s0s1s1]/Pr[s0s1s0] will converge to the golden ratio at late steps. But, do all probabilities converge fast enough for learning to happen, in general? I don’t know, maybe for finite state spaces it can work. Would definitely be interesting to check.
[EDIT: actually, in this example there is such a hypothesis but in general there isn’t, see below]
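A quick population simulation of this example (my own sketch; it just mechanically applies the s0→s1 and s1→s0,s1 rules above) reproduces both the shifting frequencies and the golden-ratio limit:

```python
from collections import Counter

# Each robot keeps its full remembered history; an s0 robot becomes an s1
# robot, and an s1 robot becomes an s0 robot plus an s1 robot.
robots = ["s0"]
for step in range(1, 16):
    new = []
    for history in robots:
        if history.endswith("s0"):
            new.append(history + "s1")
        else:
            new.extend([history + "s0", history + "s1"])
    robots = new
    if step >= 2:
        prefixes = Counter(h[:6] for h in robots)  # "s0s1s0" vs "s0s1s1"
        frac = prefixes["s0s1s0"] / len(robots)
        ratio = prefixes["s0s1s1"] / prefixes["s0s1s0"]
        print(step, round(frac, 3), round(ratio, 3))
```

The prefix frequencies shift from 1/2 at step 2 to 1/3 at step 3 and then settle down, with Pr[s0s1s1]/Pr[s0s1s0] approaching φ ≈ 1.618, matching the claims above.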
Great example. At least for the purposes of explaining what I mean :) The memory AMDP would just replace the states s0, s1 with the memory states [s0], [s1], [s0,s0], [s0,s1], etc. The action takes a robot in [s0] to memory state [s0,s1], and a robot in [s0,s1] to one robot in [s0,s1,s0] and another in [s0,s1,s1].
(Skip this paragraph unless the specifics of what’s going on aren’t obvious: given a transition distribution P(s′∗|s,π) (P being the distribution over sets of states s’* given starting state s and policy π), we can define the memory transition distribution P(s′∗m|sm,π) given policy π and starting “memory state” sm∈S∗ (Note that this star actually does mean finite sequences, sorry for notational ugliness). First we plug the last element of sm into the transition distribution as the current state. Then for each s′∗ in the domain, for each element in s′∗ we concatenate that element onto the end of sm and collect these s′m into a set s′∗m, which is assigned the same probability P(s′∗).)
So now at time t=2, if you sample a robot, the probability that its state begins with [s0,s1,s1] is 0.5. And at time t=3, if you sample a robot that probability changes to 0.66. This is the same result as for the regular MDP, it’s just that we’ve turned a question about the history of agents, which may be ill-defined, into a question about which states agents are in.
I’m still confused about what you mean by “Bayesian hypothesis” though. Do you mean a hypothesis that takes the form of a non-anthropic MDP?
I’m not quite sure what you are trying to say here, probably my explanation of the framework was lacking. The robots already remember the history, like in classical RL. The question about the histories is perfectly well-defined. In other words, we are already implicitly doing what you described. It’s like in classical RL theory, when you’re proving a regret bound or whatever, your probability space consists of histories.
I’m still confused about what you mean by “Bayesian hypothesis” though. Do you mean a hypothesis that takes the form of a non-anthropic MDP?
Yes, or a classical RL environment. Ofc if we allow infinite state spaces, then any environment can be regarded as an MDP (whose states are histories). That is, I’m talking about hypotheses which conform to the classical “cybernetic agent model”. If you wish, we can call it “Bayesian cybernetic hypothesis”.
Also, I want to clarify something I was myself confused about in the previous comment. For an anthropic Markov chain (when there is only one action) with a finite number of states, we can give a Bayesian cybernetic description, but for a general anthropic MDP we cannot even if the number of states is finite.
Indeed, consider some T:S→Δ(N^S). We can take its expected value to get ET:S→R_+^S. Assuming the chain is communicating, ET is an irreducible non-negative matrix, so by the Perron-Frobenius theorem it has a unique-up-to-scalar maximal eigenvector η∈R_+^S. We then get the subjective transition kernel:
ST(t∣s) = ET(t∣s)·η_t / ∑_{t′∈S} ET(t′∣s)·η_{t′}
Now, consider the following example of an AMDP. There are three actions A:={a,b,c} and two states S:={s0,s1}. When we apply a to an s0 robot, it creates two s0 robots, whereas when we apply a to an s1 robot, it leaves one s1 robot. When we apply b to an s1 robot, it creates two s1 robots, whereas when we apply b to an s0 robot, it leaves one s0 robot. When we apply c to any robot, it results in one robot whose state is s0 with probability 1/2 and s1 with probability 1/2.
Consider the following two policies. πa takes the sequence of actions cacaca… and πb takes the sequence of actions cbcbcb…. A population that follows πa would experience the subjective probability ST(s0∣s0,c)=2/3, whereas a population that follows πb would experience the subjective probability ST(s0∣s0,c)=1/3. Hence, subjective probabilities depend on future actions. So, effectively anthropics produces an acausal (Newcomb-like) environment. And, we already know such environments are learnable by infra-Bayesian RL agents and (most probably) not learnable by Bayesian RL agents.
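A rough Monte-Carlo check of those two numbers (my own sketch, not from the thread): track, for each lineage, what it observed on its first c-transition from s0, and then look at the fraction of a late population that observed s0.

```python
import random

def simulate(growth_action, n_robots=20000, n_rounds=6):
    # Each robot is (current state, outcome recorded on its first c from s0).
    robots = [("s0", None)] * n_robots
    for _ in range(n_rounds):
        # Action c: every robot transitions to s0 or s1 with probability 1/2.
        stepped = []
        for state, seen in robots:
            nxt = random.choice(["s0", "s1"])
            if seen is None and state == "s0":
                seen = nxt  # remember what the first c-from-s0 produced
            stepped.append((nxt, seen))
        # Action a duplicates s0 robots; action b duplicates s1 robots.
        doubled = "s0" if growth_action == "a" else "s1"
        robots = [r for r in stepped for _ in range(2 if r[0] == doubled else 1)]
    return sum(1 for _, seen in robots if seen == "s0") / len(robots)

print("policy pi_a:", simulate("a"))  # ~2/3
print("policy pi_b:", simulate("b"))  # ~1/3
```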
Ah, okay, I see what you mean. Like how preferences are divisible into “selfish” and “worldly” components, where the selfish component is what’s impacted by a future simulation of you that is about to have good things happen to it.
(edit: The reward function in AMDPs can either be analogous to “worldly” and just sum the reward calculated at individual timesteps, or analogous to “selfish” and calculated by taking the limit of the subjective distribution over parts of the history, then applying a reward function to the expected histories.)
I brought up the histories->states thing because I didn’t understand what you were getting at, so I was concerned that something unrealistic was going on. For example, if you assume that the agent can remember its history, how can you possibly handle an environment with memory-wiping?
In fact, to me the example is still somewhat murky, because you’re talking about the subjective probability of a state given a policy and a timestep, but if the agents know their histories there is no actual agent in the information-state that corresponds to having those probabilities. In an MDP the agents just have probabilities over transitions—so maybe a clearer example is an agent that copies itself if it wins the lottery having a larger subjective transition probability of going from gambling to winning. (i.e. states are losing and winning, actions are gamble and copy, the policy is to gamble until you win and then copy).
Ah, okay, I see what you mean. Like how preferences are divisible into “selfish” and “worldly” components, where the selfish component is what’s impacted by a future simulation of you that is about to have good things happen to it.
...I brought up the histories->states thing because I didn’t understand what you were getting at, so I was concerned that something unrealistic was going on. For example, if you assume that the agent can remember its history, how can you possibly handle an environment with memory-wiping?
AMDP is only a toy model that distills the core difficulty into more or less the simplest non-trivial framework. The rewards are “selfish”: there is a reward function r:(S×A)∗→R which allows assigning utilities to histories by time discounted summation, and we consider the expected utility of a random robot sampled from a late population. And, there is no memory wiping. To describe memory wiping we indeed need to do the “unrolling” you suggested. (Notice that from the cybernetic model POV, the history is only the remembered history.)
For a more complete framework, we can use an ontology chain, but (i) instead of A×O labels use A×M labels, where M is the set of possible memory states (a policy is then described by π:M→A), to allow for agents that don’t fully trust their memory, and (ii) consider another chain with a bigger state space S′ plus a mapping p:S′→N^S s.t. the transition kernels are compatible. Here, the semantics of p(s) is: the multiset of ontological states resulting from interpreting the physical state s by taking the viewpoints of the different agents that s contains.
In fact, to me the example is still somewhat murky, because you’re talking about the subjective probability of a state given a policy and a timestep, but if the agents know their histories there is no actual agent in the information-state that corresponds to having those probabilities.
I didn’t understand “no actual agent in the information-state that corresponds to having those probabilities”. What does it mean to have an agent in the information-state?
Consider a one-shot decision theory setting. There is a set of unobservable states S, a set of actions A and a reward function r:A×S→[0,1]. An IBDT agent has some belief β∈□S[1], and it chooses the action a∗:=argmax_{a∈A} E_β[λs.r(a,s)].
We can construct an equivalent scenario by augmenting this one with a perfect predictor of the agent (Omega). To do so, define S′:=A×S, where the semantics of (p,s) is “the unobservable state is s and Omega predicts the agent will take action p”. We then define r′:A×S′→[0,1] by r′(a,p,s):=1_{a=p}·r(a,s)+1_{a≠p} and β′∈□S′ by E_{β′}[f]:=min_{p∈A} E_β[λs.f(p,s)] (β′ is what we call the pullback of β to S′, i.e. we have utter Knightian uncertainty about Omega). This is essentially the usual Nirvana construction.
The new setup produces the same optimal action as before. However, we can now give an alternative description of the decision rule.
For any p∈A, define Ωp∈□S′ by E_{Ωp}[f]:=min_{s∈S} f(p,s). That is, Ωp is an infra-Bayesian representation of the belief “Omega will make prediction p”. For any u∈[0,1], define Ru∈□S′ by E_{Ru}[f]:=min_{μ∈ΔS′: E_μ[r(p,s)]≥u} E_μ[f(p,s)]. Ru can be interpreted as the belief “assuming Omega is accurate, the expected reward will be at least u”.
We will also need to use the order ⪯ on □X defined by: ϕ⪯ψ when ∀f∈[0,1]^X: E_ϕ[f]≥E_ψ[f]. The reversal is needed to make the analogy to logic intuitive. Indeed, ϕ⪯ψ can be interpreted as ”ϕ implies ψ“[2], the meet operator ∧ can be interpreted as logical conjunction and the join operator ∨ can be interpreted as logical disjunction.
Claim:
a∗ = argmax_{a∈A} max{u∈[0,1] ∣ β′∧Ωa⪯Ru}
(Actually I only checked it when we restrict to crisp infradistributions, in which case ∧ is intersection of sets and ⪯ is set containment, but it’s probably true in general.)
Now, β′∧Ωa⪯Ru can be interpreted as “the conjunction of the belief β′ and Ωa implies Ru”. Roughly speaking, “according to β′, if the predicted action is a then the expected reward is at least u”. So, our decision rule says: choose the action that maximizes the value for which this logical implication holds (but “holds” is better thought of as “is provable”, since we’re talking about the agent’s belief). Which is exactly the decision rule of MUDT!
Apologies for the potential confusion between □ as “space of infradistributions” and the □ of modal logic (not used in this post). ↩︎
Technically it’s better to think of it as ”ψ is true in the context of ϕ”, since it’s not another infradistribution so it’s not a genuine implication operator. ↩︎
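For concreteness, here is a small numerical sketch of the claim, restricted to crisp infradistributions as in the parenthetical above; the states, actions, rewards and belief are all invented for illustration. For crisp sets, β′∧Ωa⪯Ru just says that every distribution of the form δa⊗p with p in the belief set gives expected reward at least u, so the largest such u coincides with the maximin value.

```python
import numpy as np

# Toy check: 2 states, 3 actions, reward matrix r[a, s], and a crisp belief
# represented by the extreme points of its convex set (all numbers invented).
r = np.array([[0.9, 0.1],
              [0.4, 0.6],
              [0.3, 0.8]])
belief = [np.array([0.7, 0.3]), np.array([0.2, 0.8])]  # extreme points of S(beta)

# Rule 1: infra-Bayesian maximin, argmax_a min_{p in S(beta)} E_p[r(a, .)].
maximin = [min(float(p @ r[a]) for p in belief) for a in range(len(r))]

# Rule 2: for each action a, the largest u on a grid such that every extreme
# point of the belief gives expected reward >= u, i.e. the crisp reading of
# beta' /\ Omega_a <= R_u.
grid = np.linspace(0, 1, 1001)
best_u = [max(u for u in grid if all(p @ r[a] >= u for p in belief))
          for a in range(len(r))]

print(int(np.argmax(maximin)), int(np.argmax(best_u)))  # same action either way
```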
I spent most of it going through things, throwing away, organizing, sorting, and packing said things depending on what they were, and got a lot done in preparation for moving because of that. I’m looking forward to finishing up my resume tomorrow and getting feedback on it then finishing up my profile on the job sites I made an account on.
I’ve enjoyed watching the most recent season (part 3 IIRC) of Disenchantment as well, and apparently The Magicians has a new season too, exciting!
Is breadth of knowledge, depth of knowledge or applicability of knowledge more important? Whatever the answer is, how do you more dakka the hell out of it?
(Well, a related argument anyway. WBE is about scanning and simulating the brain rather than understanding it, but I would make a similar argument using “hard-to-scan” and/or “hard-to-simulate” things the brain does, rather than “hard-understand” things the brain does, which is what I was nominally blogging about. There’s a lot of overlap between those anyway; the examples I put in mostly work for both.)
I remember reading a Zvi Mowshowitz post in which he says something like “if you have concluded that the most ethical thing to do is to destroy the world, you’ve made a mistake in your reasoning somewhere.”
I spent some time searching around his blog for that post, but couldn’t find it. Does anyone know what I’m talking about?
Alright, you lost the double-crux. Now go ahead and bite these bullets. There you g—nope. Oh nonononono ahahahaha no. I’m not handing them to you. No, I’m going to hold them out and you’re going to munch on them out of my hand like a docile horse. There we go, much better.
Double cruxes aren’t supposed to be something you win or lose, as I understand it—a double crux is a collaborative effort to help both parties arrive at a better understanding of the truth. It’s problematic when admitting that you’re wrong and changing your mind is called “losing”.
Shortform #21 Functional strength training and job hunting, oh my.
I had a most excellent day :)
I created accounts on job posting sites and started hunting.
Joined a discord video call with two friends and we did 30 minutes of functional strength training together, I am now really sore, but am happy I worked out!
I did virtual co-working for ~3 hours.
My resume is out of date and pretty bad; I’ll fix it up tomorrow using RMarkdown and other nice R things so that my newly created resume will be up to date AND pretty / well styled. I’m meeting (virtually) with a friend on Saturday who runs career building and resume workshops, and they have graciously agreed to review and give me feedback on the newly created resume. Thank you to them!
Once I have a new and up to date resume, I can add that to all the job sites I signed up at and finish making + polishing my profile on all of them.
I currently run my website on an AWS Lightsail instance with Wordpress as the CMS. I don’t think that’s working for me, and the website isn’t paying rent design-wise, content-wise, nor financially (though it is really really really cheap to operate, so I’m not losing much). So, in addition to LW2019Review writing, I’m going to make time (that doesn’t subtract from job hunting time) to redo my website and axe Wordpress as my CMS since I don’t like it. Using a static-site generator and adding a little bit of custom stuff (I really like the functionality and design of Gwern’s website so I will steal inspiration from there) will probably result in a much nicer looking, easier to manage, and more functional (for what I care about) site, so I’ll do those things.
A weird side effect of job hunting today has been a really strong desire to code. Guess I’ll be doing much more of that going forward.
In a bayesian rationalist view of the world, we assign probabilities to statements based on how likely we think they are to be true. But truth is a matter of degree, as Asimov points out. In other words, all models are wrong, but some are less wrong than others.
Consider, for example, the claim that evolution selects for reproductive fitness. Well, this is mostly true, but there’s also sometimes group selection, and the claim doesn’t distinguish between a gene-level view and an individual-level view, and so on...
So just assigning it a single probability seems inadequate. Instead, we could assign a probability distribution over its degree of correctness. But because degree of correctness is such a fuzzy concept, it’d be pretty hard to connect this distribution back to observations.
Or perhaps the distinction between truth and falsehood is sufficiently clear-cut in most everyday situations for this not to be a problem. But questions about complex systems (including, say, human thoughts and emotions) are messy enough that I expect the difference between “mostly true” and “entirely true” to often be significant.
Has this been discussed before? Given Less Wrong’s name, I’d be surprised if not, but I don’t think I’ve stumbled across it.
This feels generally related to the problems covered in Scott and Abram’s research over the past few years. One of the sentences that stuck out to me the most was (roughly paraphrased since I don’t want to look it up):
In order to be a proper bayesian agent, a single hypothesis you formulate is as big and complicated as a full universe that includes yourself
I.e. our current formulations of bayesianism like solomonoff induction only formulate the idea of a hypothesis at such a low level that even trying to think about a single hypothesis rigorously is basically impossible with bounded computational time. So in order to actually think about anything you have to somehow move beyond naive bayesianism.
This seems reasonable, thanks. But I note that “in order to actually think about anything you have to somehow move beyond naive bayesianism” is a very strong criticism. Does this invalidate everything that has been said about using naive bayesianism in the real world? E.g. every instance where Eliezer says “be bayesian”.
One possible answer is “no, because logical induction fixes the problem”. My uninformed guess is that this doesn’t work because there are comparable problems with applying it to the real world. But if this is your answer, follow-up question: before we knew about logical induction, were the injunctions to “be bayesian” justified?
(Also, for historical reasons, I’d be interested in knowing when you started believing this.)
I think it definitely changed a bunch of stuff for me, and does at least a bit invalidate some of the things that Eliezer said, though not actually very much.
In most of his writing Eliezer used bayesianism as an ideal that was obviously unachievable, but that still gives you a rough sense of what the actual limits of cognition are, and rules out a bunch of methods of cognition as being clearly in conflict with that theoretical ideal. I did definitely get confused for a while and tried to apply Bayes to everything directly, and then felt bad when I couldn’t actually apply Bayes’ theorem in some situations, which I now realize is because those tended to be problems where embeddedness or logical uncertainty mattered a lot.
My shift on this happened over the last 2-3 years or so. I think starting with Embedded Agency, but maybe a bit before that.
rules out a bunch of methods of cognition as being clearly in conflict with that theoretical ideal
Which ones? In Against Strong Bayesianism I give a long list of methods of cognition that are clearly in conflict with the theoretical ideal, but in practice are obviously fine. So I’m not sure how we distinguish what’s ruled out from what isn’t.
which I now realize is because those tended to be problems where embeddedness or logical uncertainty mattered a lot
Can you give an example of a real-world problem where logical uncertainty doesn’t matter a lot, given that without logical uncertainty, we’d have solved all of mathematics and considered all the best possible theories in every other domain?
I think in practice there are lots of situations where you can confidently create a kind of pocket universe where you can actually consider hypotheses in a bayesian way.
Concrete example: Trying to figure out who voted a specific way on a LW post. You can condition pretty cleanly on vote-strength, and treat people’s votes as roughly independent, so if you have guesses on how different people are likely to vote, it’s pretty easy to create the odds ratios for basically all final karma + vote numbers and then make a final guess based on that.
It’s clear that there is some simplification going on here, by assigning static probabilities for people’s vote behavior, treating them as independent (though modeling some amount of dependence wouldn’t be too hard), etc. But overall I expect it to perform pretty well and to give you good answers.
(Note, I haven’t actually done this explicitly, but my guess is my brain is doing something pretty close to this when I do see vote numbers + karma numbers on a thread)
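To make that kind of calculation concrete, here is a minimal sketch (my construction, with made-up voters and numbers, not anything from the comment above): treat each candidate voter independently, with a guessed probability of having voted and a known vote strength, then condition on the observed (karma, vote count) pair.

```python
from itertools import product
from collections import defaultdict

# Hypothetical guesses: name -> (probability that they voted, their vote strength).
voters = {"alice": (0.8, 2), "bob": (0.5, 2), "carol": (0.3, 1)}

outcome_prob = defaultdict(float)                     # (karma, n_votes) -> probability
voted_prob = defaultdict(lambda: defaultdict(float))  # (karma, n_votes) -> name -> probability
for pattern in product([0, 1], repeat=len(voters)):
    p, karma, n_votes = 1.0, 0, 0
    for (name, (p_vote, strength)), voted in zip(voters.items(), pattern):
        p *= p_vote if voted else 1 - p_vote
        karma += strength * voted
        n_votes += voted
    outcome_prob[(karma, n_votes)] += p
    for (name, _), voted in zip(voters.items(), pattern):
        if voted:
            voted_prob[(karma, n_votes)][name] += p

observed = (2, 1)  # suppose we see +2 karma from a single vote
for name, p in voted_prob[observed].items():
    print(name, round(p / outcome_prob[observed], 3))
# alice gets posterior 0.8 and bob 0.2; carol could not have produced this outcome
```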
So I’m not sure how we distinguish what’s ruled out from what isn’t.
Well, it’s obvious that anything that claims to be better than the ideal bayesian update is clearly ruled out. I.e. arguments that by writing really good explanations of a phenomenon you can get to a perfect understanding. Or arguments that you can derive the rules of physics from first principles.
There are also lots of hypotheticals where you do get to just use Bayes properly and then it provides very strong bounds on the ideal approach. There are a good number of implicit models behind lots of standard statistics models that when put into a bayesian framework give rise to a more general formulation. See the Wikipedia article for “Bayesian interpretations of regression” for a number of examples.
Of course, in reality it is always unclear whether the assumptions that give rise to various regression methods actually hold, but I think you can totally say things like “given these assumptions, the bayesian solution is the ideal one, and you can’t perform better than this, and if you put in the computational effort you will actually achieve this performance”.
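As one concrete instance of this (a small sketch of my own, not taken from the Wikipedia article): with a Gaussian prior on the weights and Gaussian noise, the Bayesian posterior mean for linear regression coincides exactly with ridge regression, with the ridge penalty fixed by the noise-to-prior variance ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.5, size=50)

sigma2, tau2 = 0.5**2, 1.0**2    # noise variance, prior variance on each weight
lam = sigma2 / tau2              # the equivalent ridge penalty

# Bayesian posterior for w under prior N(0, tau2*I) and likelihood N(Xw, sigma2*I)
posterior_cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(3) / tau2)
posterior_mean = posterior_cov @ X.T @ y / sigma2

# Ridge regression solution with penalty lam
ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

print(np.allclose(posterior_mean, ridge))  # True: the two coincide
```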
Hmmm, but what does this give us? He talks about the difference between vague theories and technical theories, but then says that we can use a scoring rule to change the probabilities we assign to each type of theory.
But my question is still: when you increase your credence in a vague theory, what are you increasing your credence about? That the theory is true?
Nor can we say that it’s about picking the “best theory” out of the ones we have, since different theories may overlap partially.
If we can quantify how good a theory is at making accurate predictions (or rather, quantify a combination of accuracy and simplicity), that gives us a sense in which some theories are “better” (less wrong) than others, without needing theories to be “true”.
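One standard way to cash out “accuracy plus simplicity” (my gloss, not a claim made above) is a minimum-description-length style score, score(T) = log₂ P(data ∣ T) − L(T), where L(T) is the number of bits needed to write T down. Maximizing this score ranks theories as more or less wrong without ever asking whether any of them is true.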
(This is a basic point on conjunctions, but I don’t recall seeing its connection to Occam’s razor anywhere)
When I first read Occam’s Razor back in 2017, it seemed to me that the essay only addressed one kind of complexity: how complex the laws of physics are. If I’m not sure whether the witch did it, the universes where the witch did it are more complex, and so these explanations are exponentially less likely under a simplicity prior. Fine so far.
But there’s another type. Suppose I’m weighing the hypothesis that the United States government is currently engaged in a vast conspiracy to get me to post this exact comment. This hypothesis doesn’t really demand a more complex source code, but I think we’d say that Occam’s razor shaves away this hypothesis anyways—even before weighing object-level considerations. This hypothesis is complex in a different way: it’s highly conjunctive in its unsupported claims about the current state of the world. Each conjunct eliminates many ways it could be true, from my current uncertainty, and so I should deem it correspondingly less likely.
I agree with the principle but I’m not sure I’d call it “Occam’s razor”. Occam’s razor is a bit sketchy, it’s not really a guarantee of anything, it’s not a mathematical law, it’s like a rule of thumb or something. Here you have a much more solid argument: multiplying many probabilities into a conjunction makes the result smaller and smaller. That’s a mathematical law, rock-solid. So I’d go with that...
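A toy calculation of that law (my numbers, just for illustration): even if each unsupported conjunct is individually 90% likely, the conjunction shrinks fast.

```python
p_each = 0.9
for n in (5, 10, 20, 50):
    print(n, round(p_each**n, 3))  # 0.59, 0.349, 0.122, 0.005
```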
My point was more that “people generally call both of these kinds of reasoning ‘Occam’s razor’, and they’re both good ways to reason, but they work differently.”
Oh, hmm, I guess that’s fair, now that you mention it I do recall hearing a talk where someone used “Occam’s razor” to talk about the solomonoff prior. Actually he called it “Bayes Occam’s razor” I think. He was talking about a probabilistic programming algorithm.
That’s (1) not physics, and (2) includes (as a special case) penalizing conjunctions, so maybe related to what you said. Or sorry if I’m still not getting what you meant
The structure of knowledge is an undirected cyclic graph between concepts. To make it easier to present to the novice, experts convert that graph into a tree structure by removing some edges. Then they convert that tree into natural language. This is called a textbook.
Scholarship is the act of converting the textbook language back into nodes and edges of a tree, and then filling in the missing edges to convert it into the original graph.
The mind cannot hold the entire graph in working memory at once. It’s as important to practice navigating between concepts as learning the concepts themselves. The edges are as important to the structure as the nodes. If you have them all down pat, then you can easily get from one concept to another.
It’s not always necessary to memorize every bit of knowledge. Part of the graph is knowing which things to memorize, which to look up, and where to refer to if you need to look something up.
Feeling as though you’ve forgotten is not easily distinguishable from never having learned something. When people consult their notes and realize that they can’t easily call to mind the concepts they’re referencing, this is partly because they’ve never practiced connecting the notes to the concepts. There are missing edges on the graph.
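A toy illustration of the graph/tree picture (my framing, with a made-up calculus example): the textbook linearizes a spanning tree, and the set difference is exactly the edges the reader has to rediscover.

```python
concept_graph = {
    ("limits", "continuity"), ("limits", "derivatives"),
    ("continuity", "derivatives"), ("derivatives", "integrals"),
    ("derivatives", "fundamental theorem"), ("integrals", "fundamental theorem"),
}
textbook_tree = {
    ("limits", "continuity"), ("limits", "derivatives"),
    ("derivatives", "integrals"), ("integrals", "fundamental theorem"),
}
missing_edges = concept_graph - textbook_tree
print(missing_edges)
# the two "cross" edges a reader must fill back in:
# (continuity, derivatives) and (derivatives, fundamental theorem)
```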
I began organizing and packing up in preparation for moving.
It was pointed out to me that I keep working on a bunch of different things but haven’t yet started searching for jobs, despite the fact that finding a good job will enable me to move to Seattle and do more fun things in life. Point noted and taken to heart!
Job hunting is now my highest priority, and I will focus on it exclusively, both while virtually co-working and while doing productive stuff by myself. I will continue writing my three reviews (for the LW2019Review) during non-workday hours / in my spare time, but my workday hours will be focused on job hunting.
An acquaintance recently started a FB post with “I feel like the entire world has gone mad.”
My acquaintance was maybe being a bit humorous; nevertheless, I was reminded of this old joke:
As a senior citizen was driving down the freeway, his car phone rang. Answering, he heard his wife’s voice urgently warning him, “Herman, I just heard on the news that there’s a car going the wrong way on 280. Please be careful!”
”Hell,” said Herman, “It’s not just one car. It’s hundreds of them!”
I guess it’s my impression that a lot of people now have the “I feel large chunks of the world have gone mad” thing going who didn’t have it going before (or not this much or this intensely). (On many sides, and not just about the Blue/Red Trump/Biden thing.) I am curious whether this matches others’ impressions. (Or if anyone has studies/polls/etc. that might help with this.)
Separately but relatedly, I would like to be on record as predicting that the amount of this (of people feeling that large numbers of people are totally batshit on lots of issues) is going to continue increasing across the next several years. And is going to spread further beyond a single axis of politicization, to happen almost everywhere.
I’m very open to bets on this topic, if anybody has a suitable operationalization.
I’m also interested in thinking on what happens next, if a very large increase of this sort does occur.
One of the most important things going on right now, that people aren’t paying attention to: Kevin Buzzard is (with others) formalizing the entire undergraduate mathematics curriculum in Lean. (So that all the proofs will be formally verified.)
Sorry for the stupid question, and I liked the talk and agree it’s a really neat project, but why is it so important? Do you mean important for math, or important for humanity / the future / whatever?
Mostly it just seems significant in the grand scheme of things. Our mathematics is going to become formally verified.
In terms of actual consequences, it’s maybe not so important on its own. But putting a couple pieces together (this, Dan Selsam’s work, GPT), it seems like we’re going to get much better AI-driven automated theorem proving, formal verification, code generation, etc. relatively soon.
I’d expect these things to start meaningfully changing how we do programming sometime in the next decade.
Yeah, I get some aesthetic satisfaction from math results being formally verified to be correct. But we could just wait until the AGIs can do it for us… :-P
Yeah, it would be cool and practically important if you could write an English-language specification for a function, then the AI turns it into a complete human-readable formal input-output specification, and then the AI also writes code that provably meets that specification.
I don’t have a good sense for how plausible that is—I’ve never been part of a formally-verified software creation project. Just guessing, but the second part (specification → code) seems like the kind of problem that AIs will solve in the next decade. Whereas the first part (creating a complete formal specification) seems like it would be the kind of thing where maybe the AI proposes something but then the human needs to go back and edit it, because you can’t get every detail right unless you understand the whole system that this function is going to be part of. I dunno though, just guessing.
AI generates test cases for its candidate functions, and computes their results
AI formally analyzes its candidate functions and looks for simple interesting guarantees it can make about their behavior
AI displays its candidate functions to the user, along with a summary of the test results and any guarantees about the input output behavior, and the user selects the one they want (which they can also edit, as necessary)
In this version, you go straight from English to code, which I think might be easier than from English to formal specification, because we have lots of examples of code with comments. (And I’ve seen demos of GPT-3 doing it for simple functions.)
I think some (actually useful) version of the above is probably within reach today, or in the very near future.
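A toy, fully concrete stand-in for that loop (mine; in reality a model would write the candidates and invent the tests): filter candidate functions for the English spec “return the absolute value of x” by their computed test results, then show the survivors to the user.

```python
candidates = [
    lambda x: x if x > 0 else -x,  # correct (|0| = 0 either way)
    lambda x: -x,                  # wrong for positive inputs
    lambda x: max(x, -x),          # correct
]
tests = [-3, 0, 5]
for i, f in enumerate(candidates):
    results = {t: f(t) for t in tests}
    ok = all(results[t] == abs(t) for t in tests)
    print(f"candidate {i}: {results} -> {'keep' if ok else 'discard'}")
```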
It seems to me that months ago, we should have been founding small villages or towns that enforce contact tracing and required quarantines, both for contacts of people who are known to have been exposed, and for people coming in from outside the bubble. I don’t think this is possible in all states, but I’d be surprised if there was no state where this is possible.
I think it’d be much simpler to find the regions/towns doing this, and move there. Even if there’s no easy way to get there or convince them to let you in, it’s likely STILL more feasible than setting up your own.
If you do decide to do it yourself, why is a village or town the best unit? It’s not going to be self-sufficient regardless of what you do, so why is a town/village better than an apartment building or floor (or shared- or non-shared house)?
In any case, if this was actually a good idea months ago, it probably still is. Like planting a tree, the best time to do it is 20 years ago, and the second-best time is now.
Are there any areas in the states doing this? I would go to NZ or South Korea, but getting there is a hassle compared to going somewhere in the states. Regarding size, it’s not about self-sufficiency, but rather being able to interact in a normal way with other people around me without worrying about the virus, so the more people involved the better
On an individual basis, I definitely agree. Acting alone, it would be easier for me to personally move to NZ or SK than to found a new city. However, from a collective perspective (and if the LW community isn’t able to coordinate collective action, then it has failed), if a group of 50–1000 people all wanted to live in a place with sane precautions, and were willing to put in effort, creating a new town in the states would scale better (moving countries has effort scaling linearly with the magnitude of population flux, while founding a town scales less than linearly).
if the LW community isn’t able to coordinate collective action, then it has failed
Oh, we’re talking about different things. I don’t know much about any “LW community”, I just use LW for sharing information, models, and opinions with a bunch of individuals. Even if you call that a “community”, as some do, it doesn’t coordinate any significant collective action. I guess it’s failed?
Sorry, I don’t think I succeeded at speaking with clarity there. The way you use LW is perfectly fine and good.
My view of LW is that it’s a site dedicated to rationality, both epistemic and instrumental. Instrumental rationality is, as Eliezer likes to call it, “the art of winning”. The art of winning often calls for collective action to achieve the best outcomes, so if collective action never comes about, then that would indicate a failure of instrumental rationality, and thereby a failure of the purpose of LW.
LW hasn’t failed. While I have observed some failures of the collective userbase to properly engage in collective action to the fullest extent, I find it does often succeed in creating collective action, often thanks to the deliberate efforts of the LW team.
Fair enough, and I was a bit snarky in my response. I still have to wonder, if it’s not worth the hassle for a representative individual to move somewhere safer, why we’d expect it’s worth a greater hassle (both individually and the coordination cost) to create a new town. Is this the case where rabbits are negative value so stags are the only option (reference: https://www.lesswrong.com/posts/zp5AEENssb8ZDnoZR/the-schelling-choice-is-rabbit-not-stag)? I’d love to see some cost/benefit estimates to show that it’s even close to reasonable, compared to just isolating as much as possible individually.
I think you’re omitting constant factors from your analysis; founding a town is so, so much work. How would you even run utilities out to the town before the pandemic ended?
I acknowledge that I don’t know how the effort needed to found a livable settlement compares to the effort needed to move people from the US to a Covid-good country. If I knew how many person-hours each of these would take, it would be easier for me to know whether or not my idea doesn’t make sense.
Announcing the release of Hissp 0.2.0, my Lisp to Python transpiler, now available on PyPI.
I’ve overhauled the documentation with a new quick start in the style of Learn X in Y minutes, and a new macro tutorial, among other things.
New features include raw strings, module literals, unqualified reader macros, escape sequences in symbols, and improvements to the basic macros.
I suggest putting those links inside those links. For example, on the github page, changing:
to
2021 Week 2 Review 10 Jan − 16 Jan: Co-Working FTW! Also...Stop Signalling, WTB Quantitative Data and Reasoning!
I recall enjoying last week, but wow did I endorse each day strongly in my shortforms. It is true that last week was substantially better than the week prior to it, and I don’t recall nor have records of bad things or bad feels occurring last week, so I’ll stand by my strong endorsements of the days last week.
Thus, what an excellent week!
Re, Stop signalling: “I spent most of it going through things, throwing away, organizing, sorting, and packing said things depending on what they were, and got a lot done in preparation for moving because of that. I’m looking forward to finishing up my resume tomorrow and getting feedback on it then finishing up my profile on the job sites I made an account on.” (from 15 Jan shortform)
Without quantitative data backing up what was said above, I effectively signalled for that entire shortform and the truth of that day was obscured. Really I only did 2-3 hours (memory estimation, don’t trust it much) of work and spent the rest of the day occupied with watching television shows or being social. It’s okay to have a lazy day from time to time, but this wasn’t a designated rest or “sabbath” day, plus, I obscured the truth by signalling (one could also say “employing rhetoric”, potentially). I’ve done a bit of signalling in other posts throughout the shortforms, but this one was the worst yet.
Why I care so much about not signalling, being truthful, being quantitative: these shortforms are not just me howling into the void about my life; I’m trying to improve myself and my life! I write these shortforms so that I have data from each day to reflect on and use in aiding memory / recall, am somewhat publicly accountable, and keep track of what my explicit goals are and how well I do at making progress toward or achieving them. I need more data (and accurate data!) about my own life and actions so that I can become a more effective person! Signalling and obscuring the truth are antithetical to what I’m trying to do and who I want to be, so I’ll stop that nonsense immediately. Any remaining signalling will either be the kind of noise you can’t get rid of because we’re tribal animals, us human beings, or will be based on truthful quantitative data and thus an endorsement of particular actions. In short, be truthful and quantitative, or be square AND unhelpful to one’s own self.
How I will be obtaining more (and more accurate) data about my actions and my life:
I don’t have an internal clock that’s consciously accessible to me, I rarely experience the feeling of time passing, and I am notoriously bad at noticing how much time it takes me to do something, the passage of time itself, and timeliness.
Using gtimelog on my desktop lets me keep an accurate time log of what I do and how long it takes me to do it. Downside: everything must be manually entered. Upside: it’s really simple and easy to use, plus I’ve gotten good at both remembering to enter things and at using the software itself. Intervention: create twice-daily repeating reminders on my phone that will say: “Did you do the time log? Go do the time log” and will occur in the early afternoon and late evening.
Wearing a watch helps me observe the time when I’m up and moving around, so I’ll commit to wearing my watch more except on rest / sabbath days. My watch has stopwatch, timer, and alarm apps so I can use those to aid in time-related things as well. I’ll check its app store to see if there’s a decent time-log app as well.
I will experiment next week with setting alarms at different time intervals (e.g. I’ll try setting a “hey check the time” alarm to go off once every 2 hours initially) to see if that helps me be more cognizant of both time passing plus the actions I’m taking or not taking during that time.
I will take some time this next week to search for time log apps that work with the platforms I use (Linux [primary], Firefox, macOS, and iOS mostly) and see what options are out there.
I will use a notes app on my phone and carry around a notepad so that I can jot down whatever it is I’m working on or doing at any given moment.
I’ll find a calorie counting app and actually use the damn thing. I’ve always found doing this particularly tedious and annoying, but there’s no getting around calorie counting if I want to be effective at accomplishing my weight loss goals.
In addition to the above methods, I will be actively searching for more options that help with this endeavour and try to quantify even more parts of my life. Any suggestions?
Last week I started virtually co-working and did 3 or 4 sessions, for about 6 hours in total. I’m pushing for 10 hours of virtual co-working next week, time permitting (I am in the process of packing and getting ready to move, so...things might become real chaotic real fast). I’ll establish regularly scheduled sessions that repeat, which should help with consistency over the long term.
My main goals for the week:
Look for software dev/eng jobs, preferably fully remote
Practise coding every day; in particular, practise algorithms, architecture-building, and data structures.
Pack and get ready for moving
Get vaccinated
Continue doing the several things I’ve been doing either since late December or have recently identified as good for me.
writing shortforms, weekly reviews, etc.
exercising daily; focus on strength training over cardio now, but still do some cardio
be virtually social each day
virtual co-working!
calorie counting
time logging
Here’s to another great week! What are you working on and trying to accomplish?
I listened to Wlad Roerich’s Background Mode 0.1 while writing this. It took me 60 minutes (1 hour) to write this weekly review and then publish it.
Be well!
Cheers,
Willa
An Alignment Paradox: Experience from firms shows that higher levels of delegation work better (high level meaning fewer constraints for the agent). This is also very common practical advice for managers. I have also received this advice myself and seen this work in practice. There is even a management card game for it: Delegation Poker. This seems to be especially true in more unpredictable environments. Given that we have intelligent agents, giving them higher degrees of freedom seems to imply more ways to cheat, defect, or ‘escape’. Even more so in environments that can be controlled to lesser degrees. How can that be true? What is making this work, and can some underlying principle be found that would allow this to be applied to AI?
Most people are naturally pro-social. (No, this can’t be applied to AI.) Given a task, they will try to do it well, especially if they feel like their results are noticed and appreciated.
A cynical hypothesis is that most of the things managers do are actively harmful to the project; they are interfering with the employees trying to do their work. The less the manager does, the better the chances of the project. “Delegation” is simply when manager stops actively hurting the project and allows others to do their best.
The reason for this is that most of the time, there is no actually useful work for the manager. The sane thing would be to simply sit down and relax, and wait for another opportunity for useful intervention to arise. Unfortunately, this is not an option, because doing this would most likely get the manager fired. Therefore managers create bullshit work for themselves. Unfortunately, by the nature of their work, this implies creating bullshit work for others. In addition to this, we have the corrupted human hardware, with some managers enjoying power trips and/or believing they know everything better than people below them in the hierarchy.
When you create a manager role in your company, it easily becomes a lost purpose after the original problems are solved but the manager wants to keep their job.
Check.
Check.
I don’t like cynical views, and while I have encountered politics and seen such cases, I don’t think that paints a realistic view. But I will run with your cynical view, and you won’t like it ;-)
So we have these egotistical managers that only want to keep their job and rise in the ranks. Much closer to non-social AI, right? How come more delegation works better for them too?
Mind you, I might be wrong and it works less and less the further up you go. It might be that you are right and this works only because people have enough social behavior hard-wired that makes delegation work.
But I have another theory: Limited processing capacity + Peter Principle.
It makes sense to delegate more—especially in unpredictable environments—because that reduces your processing load of dealing with all the challenging tasks and moves it to your subordinates. This leaves less capacity for them to scheme against you and gives you the capacity to scheme against your superior. And so on up the chain. Capable subordinates that can deal with all the stuff you throw at them have to be promoted so they have more work to do until they reach capacity too. So sometimes the smart move is to refuse promotion :-)
I guess we agree that limited processing capacity means that interfering with the work of your underlings—assuming they are competent and spending enough of their processing capacity on their tasks—is probably a bad move. It means taking the decision away from the person who spends 8 hours a day thinking about the problem, and assigning it to a person who spent 30 seconds matching the situation to the nearest cliche, because that’s all they had time for between the meetings.
This might work if the person is such a great expert that their 30 seconds are still extremely valuable. That certainly is possible; someone with lots of experience might immediately recognize a frequently-made mistake. It is also the kind of assumption that Dunning and Kruger would enjoy researching.
That would make sense. When you are a lowest-level manager, if you stop interfering, it allows the people at the bottom to focus on their object-level tasks. But if you are a higher-level manager, how you interact with the managers below you does not have a direct impact on the people at the bottom. Maybe you manage your underlings less, and they copy your example and give more freedom to the people at the bottom… or maybe you just gave them more time to interfere.
So you have more time to scheme… but you have to stay low in the pyramid. Not sure what you scheme about then. (Trying to get to the top in one huge jump? Sounds unlikely.)
Have you ever managed or worked closely with great team-leads?
I was a team leader twice. The first time it happened by accident. There was a team leader, three developers (me one of them), and a small project was specified. On the first day, something very urgent happened (I don’t remember what), the supposed leader was re-assigned to something else, and we three were left without supervision for an unspecified time period. Being the oldest and most experienced person in the room, I took initiative and asked: “so, guys, as I see it, we use an existing database, so what needs to be done is: back-end code, front-end code, and some stylesheets; anyone has a preference which part he would like to do?” And luckily, each of us wanted to do a different part. So the work was split, we agreed on mutual interfaces, and everyone did his part. It was a nice and relaxed environment: everyone working alone at their own speed, debating work only as needed, and having some friendly work-unrelated chat during breaks.
In three months we had the project completed; everyone was surprised. The company management had assumed that we would only “warm up” during those three months, and that when the original leader returned, he would lead us to the glorious results. (In a parallel Everett branch, where he returned shortly before we finished the product, I wonder whether he got a bonus and promotion.) Then everything returned to normal: more micromanagement, lower productivity, people burned out.
The second time, we were a small group working together for some time already. Then our manager quit. No one knew who would get the role next, and in an attempt to deflect a possible danger, I volunteered to do it on top of my usual work. What happened was that everyone worked exactly the same as they did before, only without the interruptions and extra stress caused by management, and I got some extra paperwork which I gradually reduced to a minimum. The work progressed so well—no problems, no complaints from users, the few tickets we got almost always turned out to be a problem outside our project—that higher management concluded that there was apparently too little work to do on our project, so the team members were assigned to also work on extra projects in parallel.
Perhaps my short experience is not representative, but it suggests that a manager, merely by not existing, could already create a top-decile work environment in terms of both work satisfaction and productivity. The recommended mantra to recite every day is: “first, do no harm”. My experience also suggests that this approach will ultimately get punished, despite the increased productivity: the expected outcome is more work for no pay raise until you break, or just being told to return to the old ways without any explanation why. I assume I am missing some crucial maze-navigating skills; for someone trying to be a professional manager this would be fatal; luckily I do not have this ambition.
It is quite possible that this approach only works when there is a good team: in both cases I worked with people who were nice above average. If you had a dominant asshole in the team, this could easily become a disaster: the power vacuum left by a passive manager would simply be filled by an ambitious alpha male, who would probably soon be promoted into the role of formal leader. So perhaps the companies play it safe by using a widely applicable strategy that happens to be inferior in the case of good employees who also happen to be good people; quite likely this is because the companies simply cannot recognize such people.
Is there a leadership level beyond this? Sure, but in my quarter century of career I have only met such a manager once. What he did was basically meeting each of his people once a day in the morning (this was long before I heard about “daily standups” and such) and talking with him for 5 or 10 minutes; with each team member separately, in the manager’s room. He asked the usual questions “what did you do yesterday?”, “what is your plan for today?”, “are there any obstacles to your work?”, but there was zero judgment, even if you said things like “yesterday I had a really bad day, I tried some things but at the end it was wrong and I had to throw it all away, so today I am starting from scratch again”; essentially he treated you like an adult person and assumed that whatever you did, there was a good reason for that. Before and after the report, a very short small talk; it helped that he was extremely intelligent and charismatic, so for many people this was the best part of the day. Also, the obstacles in work that you mentioned, he actually did something about them during the day, and always reported the outcome to you the next morning. In short, for the first and last time so far in my career, I had a regular feeling that someone listens to me and cares about what I do (as opposed to just whipping me to run faster in order to meet arbitrary deadlines, randomly interrupting me for no good reason, second-guessing my expert opinion, etc.).
So yes, there is a level beyond “not doing harm” and it is called “actually motivating and helping”, but I guess most managers dramatically overestimate their ability to do it… and when they try regardless, and ignore the feedback, they actively do harm.
Thank you a lot. Your detailed account really helps me understand your perspective much better now. I can relate to your experience in teams where micromanagement slows things down and prevents actually relevant solutions. I have been in such teams. I can also relate to it being advantageous when a leader of questionable value is absent. I have been in such a team too—though it didn’t have such big advantages as in your case. That was mostly because this team was part of a bigger organization and platform where multiple teams had to work together to get something done, e.g. agree on interfaces with other teams. And in the absence of clear joint goals, that didn’t happen.

Now you could argue that the management one level up was not doing its job well, and I agree. But the absence of that management wouldn’t have helped either. It could have led to a) each team trying to solve some part of the problem, b) some people from both teams getting together and agreeing on interfaces and joint goals, or c) the teams agreeing on some coordination between both teams. a) in most cases leads to some degree of chaos and failure, b) establishes some kind of leadership on the team level (like you did in your first example), and c) results over time in some leadership one level up. I’d argue that some kind of coordination structure is needed. Where did the project you implemented in your first case come from? Somebody figured out that it would provide value to the company. Otherwise, you might have built a beautiful project that didn’t actually provide value. I think we agree that the company you worked in did have some management that provided value (I hope it was no moral maze). And I agree that a lot of managers do not add value and sometimes decrease it.

On the other hand, I have worked for great team leads and professional managers. People who would listen, let us make our own decisions, give clear goals but also limits, help, and reduce impediments. This is really not a secret art. The principles are well-known (for a funny summary see e.g. Leadersheep). But it turns out that building a big organization is hard. Politics is real, and professional management is still mostly a craft. It rarely approaches something you can call engineering, much less hard science. And I am looking for that. That’s part of why I wrote this shortform on processes and roles. Everybody is just cooking with water, and actual organization structures often leave something to be desired. I guess that’s why we sometimes do see extraordinary companies like Amazon—that hit on a sweet spot. But by talent or luck, not by science. And the others have to make do with inadequate solutions. Including the managers, of whom you maybe saw more than I did.
I have seen this happen also in a small team. Two or three guys started building each his own part independently, then it turned out those parts could not be put together; each of them insisted that others change their code to fit his API, and refused to make the smallest change in his API. It became a status fight that took a few days. (I don’t remember how it was resolved.)
In another company, there was a department that took care of everyone’s servers. Our test server crashed almost every day and had to be restarted manually; we had to file a ticket and wait (if it was after 4PM, the server was restarted only the next morning) because we did not have the permission to reset the server ourselves. It was driving us crazy; we had a dedicated team of testers, and half of the time they were just waiting for the server to be restarted; then the week before delivery we all worked overtime… that is, until the moment the server crashed again, then we filed the ticket and went home. We begged our manager to let us pool two hundred bucks and buy a notebook that we could turn into an alternative testing environment under our control, but of course that would be completely against company policy. Their manager refused to do anything about it; from their perspective, it meant they had every day one support ticket successfully closed by merely clicking a button; wonderful metric! From the perspective of our manager’s manager, it was a word against a word, one word coming from the team with great metrics and therefore more trustworthy. (The situation never got solved, as far as I know.)
...I should probably write a book one day. Except that no one would ever hire me afterwards. So maybe after I get retired...
So, yes, there are situations that need to be solved by a greater power. In the long term it might even make sense to fire a few people, but the problem is that these often seem to be the most productive ones, because other people are slowed down by the problems they cause.
Yeah, but we have two different meanings of the word “management” here. Someone who decides which project to do—this is useful and necessary. Or someone who interrupts you every day while you are trying to work on that project—I can imagine that in some teams this may also be necessary, but arguably then your problem is the team you have (at least some parts of it). Motte and bailey of management, sort of.
From an epistemic perspective, I guess the problem is that if you keep micro-managing people all the time, you can never learn whether your activity actually adds or removes value, simply because there is nothing to compare to. (I guess the usual null hypothesis is “nobody ever does anything”, which of course makes any management seem useful; but is it true?) Looking at the incentives and power relations, the employee at the bottom doesn’t have an opportunity to prove they could work just as well without the micro-management, and the manager doesn’t have an incentive to allow the experiment. There is also the “heads I win, tails you lose” aspect where bad employee performance is interpreted as necessity of more management, but good employee performance is interpreted as good management, so either way management is perceived as needed.
Yep. That’s a very good summary. Heh, I fail hard at step 1 (creating, or rather communicating, a strong vision).
Seems analogical to social sciences: in theory, they are much more difficult than math or physics, so it would make sense if smarter people studied them; in practice, it’s the other way round, because if something is too difficult to do properly, it becomes easy to bullshit your way to the top, and intelligent people switch to something where being intelligent gives you a clear comparative advantage.
Good luck to you! I suppose your chances will depend on how much autonomy you get; it is hard to do things right, if the sources of problem are beyond your control. However, if you become a great manager and your people will like you, perhaps in the future you can start your own company and give them a call whether they would like to work for you again.
Thank you. I agree with your view. Motte and bailey of management yep. I especially liked this:
Shortform #23 Extant
I enjoyed today, but it definitely wasn’t a very productive day. I woke up late, then jumped in for an hour of virtual co-working and looked at resume templates / ideas + discussed resume and job site profile strategies. Afterwards I cleaned and organized the house for an hour and a half, followed by showering + getting ready. I drove into town for an early dinner outdoors with three friends, then picked up my Dad from the airport, and have been relaxing since getting home.
I think the best thing to do is go to bed early and wake up tomorrow ready for a new day!
Yay it’s the weekend!
Willa
testing latex in spoiler tag
Testing code block in spoiler tag
:::what about this:::
:::hm?
x :: Bool -> Int -> String
:::::: latex Ax+1:={} :::
Master post for ideas about infra-Bayesianism.
Infra-Bayesianism can be naturally understood as semantics for a certain non-classical logic. This promises an elegant synthesis between deductive/symbolic reasoning and inductive/intuitive reasoning, with several possible applications. Specifically, here we will explain how this can work for higher-order logic. There might be holes and/or redundancies in the precise definitions given here, but I’m quite confident the overall idea is sound.
For simplicity, we will only work with crisp infradistributions, although a lot of this stuff can work for more general types of infradistributions as well. Therefore, □X will denote the space of crisp infradistributions. Given μ∈□X, S(μ)⊆ΔX will denote the corresponding convex set. As opposed to previously, we will include the empty set, i.e. there is ⊥_X∈□X s.t. S(⊥_X)=∅. Given p∈ΔX and μ∈□X, p:μ will mean p∈S(μ). Given μ,ν∈□X, μ⪯ν will mean S(μ)⊆S(ν).
Syntax
Let T_ι denote a set which we interpret as the types of individuals (we allow more than one). We then recursively define the full set of types T by:
0∈T (intended meaning: the uninhabited type)
1∈T (intended meaning: the one element type)
If α∈T_ι then α∈T
If α,β∈T then α+β∈T (intended meaning: disjoint union)
If α,β∈T then α×β∈T (intended meaning: Cartesian product)
If α∈T then (α)∈T (intended meaning: predicates with argument of type α)
For each α,β∈T, there is a set F^0_{α→β} which we interpret as atomic terms of type α→β. We will denote V^0_α := F^0_{1→α}. Among those we distinguish the logical atomic terms:
id_α ∈ F^0_{α→α}
0_α ∈ F^0_{0→α}
1_α ∈ F^0_{α→1}
pr_{αβ} ∈ F^0_{α×β→α}
i_{αβ} ∈ F^0_{α→α+β}
Symbols we will not list explicitly, that correspond to the algebraic properties of + and × (commutativity, associativity, distributivity and the neutrality of 0 and 1). For example, given α,β∈T there is a “commutator” of type α×β→β×α.
=_α ∈ V^0_{(α×α)}
diag_α ∈ F^0_{α→α×α}
()_α ∈ V^0_{((α)×α)} (intended meaning: predicate evaluation)
⊥ ∈ V^0_{(1)}
⊤ ∈ V^0_{(1)}
∧_α ∈ F^0_{(α)×(α)→(α)}
∨_α ∈ F^0_{(α)×(α)→(α)}
∃_{αβ} ∈ F^0_{(α×β)→(β)}
∀_{αβ} ∈ F^0_{(α×β)→(β)}
Assume that for each n∈ℕ there is some D_n ⊆ □[n]: the set of “describable” infradistributions (for example, it can be empty, or consist of all distributions with rational coefficients, or all distributions, or all infradistributions; EDIT: it is probably sufficient to only have the fair coin distribution in D_2 in order for it to be possible to approximate all infradistributions on finite sets). If μ∈D_n then ┌μ┐ ∈ V_{(∑_{i=1}^n 1)}
We recursively define the set of all terms F_{α→β}. We denote V_α := F_{1→α}.
If f∈F^0_{α→β} then f∈F_{α→β}
If f_1∈F_{α_1→β_1} and f_2∈F_{α_2→β_2} then f_1×f_2 ∈ F_{α_1×α_2→β_1×β_2}
If f_1∈F_{α_1→β_1} and f_2∈F_{α_2→β_2} then f_1+f_2 ∈ F_{α_1+α_2→β_1+β_2}
If f∈F_{α→β} then f^{−1} ∈ F_{(β)→(α)}
If f∈F_{α→β} and g∈F_{β→γ} then g∘f ∈ F_{α→γ}
Elements of V_{(α)} are called formulae. Elements of V_{(1)} are called sentences. A subset of V_{(1)} is called a theory.
Semantics
Given T⊆V_{(1)}, a model M of T is the following data. To each α∈T, there must correspond some compact Polish space M(α) s.t.:
M(0)=∅
M(1)=pt (the one point space)
M(α+β)=M(α)⊔M(β)
M(α×β)=M(α)×M(β)
M((α))=□M(α)
To each f∈Fα→β, there must correspond a continuous mapping M(f):M(α)→M(β), under the following constraints:
0, 1, id, pr, i, diag and the “algebrators” have to correspond to the obvious mappings.
M(=_α) = ⊤_{diag_{M(α)}}. Here, diag_X ⊆ X×X is the diagonal and ⊤_C ∈ □X is the sharp infradistribution corresponding to the closed set C⊆X.
Consider α∈T and denote X:=M(α). Then, M(()_α) = ⊤_{□X} ⋉ id_{□X}. Here, we use the observation that the identity mapping id_{□X} can be regarded as an infrakernel from □X to X.
M(⊥) = ⊥_{pt}
M(⊤) = ⊤_{pt}
S(M(∧)(μ,ν))=S(μ)∩S(ν)
S(M(∨)(μ,ν)) is the convex hull of S(μ)∪S(ν)
Consider α,β∈T and denote X:=M(α), Y:=M(β) and pr:X×Y→Y the projection mapping. Then, M(∃_{αβ})(μ) = pr_∗μ.
Consider α,β∈T and denote X:=M(α), Y:=M(β) and pr:X×Y→Y the projection mapping. Then, p:M(∀_{αβ})(μ) iff pr_∗^{−1}(p) ⊆ S(μ).
M(f_1×f_2) = M(f_1)×M(f_2)
M(f_1+f_2) = M(f_1)⊔M(f_2)
M(f^{−1})(μ) = M(f)^{−1}(μ). Notice that pullback of infradistributions is always defined thanks to adding ⊥ (the empty set infradistribution).
M(g∘f)=M(g)∘M(f)
M(┌μ┐)=μ
Finally, for each ϕ∈T, we require M(ϕ) = ⊤_{pt}.
Semantic Consequence
Given ϕ∈V(1), we say M⊨ϕ when M(ϕ)=⊤pt. We say T⊨ϕ when for any model M of T, M⊨ϕ. It is now interesting to ask what is the computational complexity of deciding T⊨ϕ. Classical 0-th order logic is embedded into infra-Bayesian higher-order logic, so the complexity is at least co-NP. On the other hand, known results about the word problem for lattices suggest that it’s plausible to be polynomial time for infra-Bayesian 0-th order logic. So, it is plausible that semantic consequence in infra-Bayesian higher-order logic has much lower complexity than its classical counterpart (the latter is not even recursively enumerable).
Applications
As usual, let A be a finite set of actions and O be a finite set of observations. Require that for each o∈O there is σ_o∈T_ι which we interpret as the type of states producing observation o. Denote σ_* := ∑_{o∈O} σ_o (the type of all states). Moreover, require that our language has the nonlogical symbols s_0∈V^0_{(σ_*)} (the initial state) and, for each a∈A, K_a∈F^0_{σ_*→(σ_*)} (the transition kernel). Then, every model defines a (pseudocausal) infra-POMDP. This way we can use symbolic expressions to define infra-Bayesian RL hypotheses. It is then tempting to study the control theoretic and learning theoretic properties of those hypotheses. Moreover, it is natural to introduce a prior which weights those hypotheses by length, analogous to the Solomonoff prior. This leads to some sort of bounded infra-Bayesian algorithmic information theory and a bounded infra-Bayesian analogue of AIXI.
When using infra-Bayesian logic to define a simplicity prior, it is natural to use “axiom circuits” rather than plain formulae. That is, when we write the axioms defining our hypothesis, we are allowed to introduce “shorthand” symbols for repeating terms. This doesn’t affect the expressiveness, but it does affect the description length. Indeed, eliminating all the shorthand symbols can increase the length exponentially.
Instead of introducing all the “algebrator” logical symbols, we can define T as the quotient by the equivalence relation defined by the algebraic laws. We then need only two extra logical atomic terms:
For any n∈ℕ and σ∈S_n (permutation), denote n := ∑_{i=1}^n 1 and require σ_+ ∈ F_{n→n}
For any n∈ℕ and σ∈S_n, σ^α_× ∈ F_{α^n→α^n}
However, if we do this then it’s not clear whether deciding that an expression is a well-formed term can be done in polynomial time. Because, to check that the types match, we need to test the identity of algebraic expressions and opening all parentheses might result in something exponentially long.
In the anthropic trilemma, Yudkowsky writes about the thorny problem of understanding subjective probability in a setting where copying and modifying minds is possible. Here, I will argue that infra-Bayesianism (IB) leads to the solution.
Consider a population of robots, each of which is a regular RL agent. The environment produces the observations of the robots, but can also make copies or delete portions of their memories. If we consider a random robot sampled from the population, the history they observed will be biased compared to the “physical” baseline. Indeed, suppose that a particular observation c has the property that every time a robot makes it, 10 copies of them are created in the next moment. Then, a random robot will have c much more often in their history than the physical frequency with which c is encountered, due to the resulting “selection bias”. We call this setting “anthropic RL” (ARL).
The original motivation for IB was non-realizability. But, in ARL, Bayesianism runs into issues even when the environment is realizable from the “physical” perspective. For example, we can consider an “anthropic MDP” (AMDP). An AMDP has finite sets of actions (A) and states (S), and a transition kernel T:A×S→Δ(S^*). The output is a string of states instead of a single state, because many copies of the agent might be instantiated on the next round, each with their own state. In general, there will be no single Bayesian hypothesis that captures the distribution over histories that the average robot sees at any given moment of time (at any given moment of time we sample a robot out of the population and look at their history). This is because the distributions at different moments of time are mutually inconsistent.
[EDIT: Actually, given that we don’t care about the order of robots, the signature of the transition kernel should be T:A×S→Δ(ℕ^S)]
The consistency that is violated is exactly the causality property of environments. Luckily, we know how to deal with acausality: using the IB causal-acausal correspondence! The result can be described as follows: Murphy chooses a time moment n∈N and guesses the robot policy π until time n. Then, a simulation of the dynamics of (π,T) is performed until time n, and a single history is sampled from the resulting population. Finally, the observations of the chosen history unfold in reality. If the agent chooses an action different from what is prescribed, Nirvana results. Nirvana also happens after time n (we assume Nirvana reward 1 rather than ∞).
This IB hypothesis is consistent with what the average robot sees at any given moment of time. Therefore, the average robot will learn this hypothesis (assuming learnability). This means that for n ≫ 1/(1−γ) ≫ 0, the population of robots at time n has expected average utility with a lower bound close to the optimum for this hypothesis. I think that for an AMDP this should equal the optimum expected average utility you can possibly get, but it would be interesting to verify.
Curiously, the same conclusions should hold if we do a weighted average over the population, with any fixed method of weighting. Therefore, the posterior of the average robot behaves adaptively depending on which sense of “average” you use. So, your epistemology doesn’t have to fix a particular method of counting minds. Instead different counting methods are just different “frames of reference” through which to look, and you can be simultaneously rational in all of them.
Could you expand a little on why you say that no Bayesian hypothesis captures the distribution over robot-histories at different times? It seems like you can unroll an AMDP into a “memory MDP” that puts memory information of the robot into the state, thus allowing Bayesian calculation of the distribution over states in the memory MDP to capture history information in the AMDP.
I’m not sure what you mean by that “unrolling”. Can you write a mathematical definition?
Let’s consider a simple example. There are two states: s0 and s1. There is just one action so we can ignore it. s0 is the initial state. An s0 robot transitions into an s1 robot. An s1 robot transitions into an s0 robot and an s1 robot. What will our population look like?
0th step: all robots remember s0
1st step: all robots remember s0s1
2nd step: 1⁄2 of robots remember s0s1s0 and 1⁄2 of robots remember s0s1s1
3rd step: 1⁄3 of robots remember s0s1s0s1, 1⁄3 of robots remember s0s1s1s0, and 1⁄3 of robots remember s0s1s1s1
There is no Bayesian hypothesis a robot can have that gives correct predictions both for step 2 and step 3. Indeed, to be consistent with step 2 we must have Pr[s0s1s0]=1/2 and Pr[s0s1s1]=1/2. But, to be consistent with step 3 we must have Pr[s0s1s0]=1/3, Pr[s0s1s1]=2/3.
In other words, there is no Bayesian hypothesis s.t. we can guarantee that a randomly sampled robot on a sufficiently late time step will have learned this hypothesis with high probability. The apparent transition probabilities keep shifting s.t. it might always continue to seem that the world is complicated enough to prevent our robot from having learned it already.
Or, at least it’s not obvious there is such a hypothesis. In this example, Pr[s0s1s1]/Pr[s0s1s0] will converge to the golden ratio at late steps. But, do all probabilities converge fast enough for learning to happen, in general? I don’t know, maybe for finite state spaces it can work. Would definitely be interesting to check.
[EDIT: actually, in this example there is such a hypothesis but in general there isn’t, see below]
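To make the shifting frequencies concrete, here is a minimal simulation sketch (mine, not from the thread) that reproduces the population counts above:

```python
from collections import Counter

def step(histories):
    """One transition: an s0 robot becomes one s1 robot; an s1 robot
    becomes one s0 robot and one s1 robot."""
    out = []
    for h in histories:
        if h[-1] == "s0":
            out.append(h + ("s1",))
        else:
            out.append(h + ("s0",))
            out.append(h + ("s1",))
    return out

population = [("s0",)]  # 0th step: every robot remembers s0
for t in range(1, 4):
    population = step(population)
    n = len(population)
    counts = Counter("".join(h) for h in population)
    print(f"step {t}:", {h: f"{c}/{n}" for h, c in counts.items()})
# step 2: s0s1s0 and s0s1s1 each 1/2; step 3: three histories each 1/3,
# i.e. the apparent transition probabilities shift between steps.
```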
Great example. At least for the purposes of explaining what I mean :) The memory AMDP would just replace the states s0, s1 with the memory states [s0], [s1], [s0,s0], [s0,s1], etc. The action takes a robot in [s0] to memory state [s0,s1], and a robot in [s0,s1] to one robot in [s0,s1,s0] and another in [s0,s1,s1].
(Skip this paragraph unless the specifics of what’s going on aren’t obvious: given a transition distribution P(s′^*|s,π) (P being the distribution over sets of states s′^* given starting state s and policy π), we can define the memory transition distribution P(s′^*_m|s_m,π) given policy π and starting “memory state” s_m∈S^* (note that this star actually does mean finite sequences, sorry for notational ugliness). First we plug the last element of s_m into the transition distribution as the current state. Then for each s′^* in the domain, for each element in s′^* we concatenate that element onto the end of s_m and collect these s′_m into a set s′^*_m, which is assigned the same probability P(s′^*).)
So now at time t=2, if you sample a robot, the probability that its state begins with [s0,s1,s1] is 0.5. And at time t=3, if you sample a robot that probability changes to 0.66. This is the same result as for the regular MDP, it’s just that we’ve turned a question about the history of agents, which may be ill-defined, into a question about which states agents are in.
I’m still confused about what you mean by “Bayesian hypothesis” though. Do you mean a hypothesis that takes the form of a non-anthropic MDP?
I’m not quite sure what are you trying to say here, probably my explanation of the framework was lacking. The robots already remember the history, like in classical RL. The question about the histories is perfectly well-defined. In other words, we are already implicitly doing what you described. It’s like in classical RL theory, when you’re proving a regret bound or whatever, your probability space consists of histories.
Yes, or a classical RL environment. Ofc if we allow infinite state spaces, then any environment can be regarded as an MDP (whose states are histories). That is, I’m talking about hypotheses which conform to the classical “cybernetic agent model”. If you wish, we can call it “Bayesian cybernetic hypothesis”.
Also, I want to clarify something I was myself confused about in the previous comment. For an anthropic Markov chain (when there is only one action) with a finite number of states, we can give a Bayesian cybernetic description, but for a general anthropic MDP we cannot even if the number of states is finite.
Indeed, consider some T : S → Δ(ℕ^S). We can take its expected value to get E[T] : S → ℝ₊^S. Assuming the chain is communicating, E[T] is an irreducible non-negative matrix, so by the Perron-Frobenius theorem it has a unique-up-to-scalar maximal eigenvector η ∈ ℝ₊^S. We then get the subjective transition kernel:
ST(t | s) = E[T](t | s) · η_t ⁄ Σ_{t′∈S} E[T](t′ | s) · η_{t′}
Now, consider the following example of an AMDP. There are three actions A := {a, b, c} and two states S := {s0, s1}. When we apply a to an s0 robot, it creates two s0 robots, whereas when we apply a to an s1 robot, it leaves one s1 robot. When we apply b to an s1 robot, it creates two s1 robots, whereas when we apply b to an s0 robot, it leaves one s0 robot. When we apply c to any robot, it results in one robot whose state is s0 with probability 1⁄2 and s1 with probability 1⁄2.
Consider the following two policies: πa takes the sequence of actions cacaca… and πb takes the sequence of actions cbcbcb…. A population that follows πa would experience the subjective probability ST(s0 | s0, c) = 2⁄3, whereas a population that follows πb would experience the subjective probability ST(s0 | s0, c) = 1⁄3. Hence, subjective probabilities depend on future actions. So, effectively, anthropics produces an acausal (Newcomb-like) environment. And we already know such environments are learnable by infra-Bayesian RL agents and (most probably) not learnable by Bayesian RL agents.
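To make those numbers concrete, here is a small numerical sketch (my own Python, assuming numpy) that recovers the 2⁄3 and 1⁄3 figures; the expected number of eventual descendants over a long finite horizon is used as a stand-in for the Perron-Frobenius eigenvector η.

```python
import numpy as np

# States: index 0 = s0, index 1 = s1.
# M[i, j] = expected number of state-j robots produced when the action is
# applied to one state-i robot.
Tc = np.array([[0.5, 0.5],
               [0.5, 0.5]])   # c: one robot, uniformly s0 or s1
Ta = np.array([[2.0, 0.0],
               [0.0, 1.0]])   # a: s0 -> two s0,  s1 -> one s1
Tb = np.array([[1.0, 0.0],
               [0.0, 2.0]])   # b: s0 -> one s0,  s1 -> two s1

def subjective_c_kernel(next_action, horizon=50):
    """Subjective kernel for a c-step, when the population keeps alternating
    c with `next_action` afterwards. Each outcome state is weighted by its
    expected number of eventual descendants (the long finite horizon plays
    the role of the eigenvector eta)."""
    period = Tc @ next_action
    eta = next_action @ np.linalg.matrix_power(period, horizon) @ np.ones(2)
    weighted = Tc * eta                        # reweight c's outcomes by eta
    return weighted / weighted.sum(axis=1, keepdims=True)

print(subjective_c_kernel(Ta)[0])   # ~[0.667, 0.333]: ST(s0|s0,c) = 2/3 under pi_a
print(subjective_c_kernel(Tb)[0])   # ~[0.333, 0.667]: ST(s0|s0,c) = 1/3 under pi_b
```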
Ah, okay, I see what you mean. Like how preferences are divisible into “selfish” and “worldly” components, where the selfish component is what’s impacted by a future simulation of you that is about to have good things happen to it.
(edit: The reward function in AMDPs can either be analogous to “worldly” and just sum the reward calculated at individual timesteps, or analogous to “selfish” and calculated by taking the limit of the subjective distribution over parts of the history, then applying a reward function to the expected histories.)
I brought up the histories->states thing because I didn’t understand what you were getting at, so I was concerned that something unrealistic was going on. For example, if you assume that the agent can remember its history, how can you possibly handle an environment with memory-wiping?
In fact, to me the example is still somewhat murky, because you’re talking about the subjective probability of a state given a policy and a timestep, but if the agents know their histories there is no actual agent in the information-state that corresponds to having those probabilities. In an MDP the agents just have probabilities over transitions—so maybe a clearer example is an agent that copies itself if it wins the lottery having a larger subjective transition probability of going from gambling to winning. (i.e. states are losing and winning, actions are gamble and copy, the policy is to gamble until you win and then copy).
AMDP is only a toy model that distills the core difficulty into more or less the simplest non-trivial framework. The rewards are “selfish”: there is a reward function r : (S×A)* → ℝ which allows assigning utilities to histories by time-discounted summation, and we consider the expected utility of a random robot sampled from a late population. And, there is no memory wiping. To describe memory wiping we indeed need to do the “unrolling” you suggested. (Notice that from the cybernetic model POV, the history is only the remembered history.)
For a more complete framework, we can use an ontology chain, but (i) instead of A×O labels use A×M labels, where M is the set of possible memory states (a policy is then described by π : M → A), to allow for agents that don’t fully trust their memory, and (ii) consider another chain with a bigger state space S′ plus a mapping p : S′ → ℕ^S s.t. the transition kernels are compatible. Here, the semantics of p(s) is: the multiset of ontological states resulting from interpreting the physical state s by taking the viewpoints of the different agents that s contains.
I didn’t understand “no actual agent in the information-state that corresponds to having those probabilities”. What does it mean to have an agent in the information-state?
Nevermind, I think I was just looking at it with the wrong class of reward function in mind.
There is a formal analogy between infra-Bayesian decision theory (IBDT) and modal updateless decision theory (MUDT).
Consider a one-shot decision theory setting. There is a set of unobservable states S, a set of actions A and a reward function r : A×S → [0,1]. An IBDT agent has some belief β ∈ □S[1], and it chooses the action a* := argmax_{a∈A} E_β[λs. r(a, s)].
We can construct an equivalent scenario by augmenting this one with a perfect predictor of the agent (Omega). To do so, define S′ := A×S, where the semantics of (p, s) is “the unobservable state is s and Omega predicts the agent will take action p”. We then define r′ : A×S′ → [0,1] by r′(a, p, s) := 1_{a=p}·r(a, s) + 1_{a≠p}, and β′ ∈ □S′ by E_{β′}[f] := min_{p∈A} E_β[λs. f(p, s)] (β′ is what we call the pullback of β to S′, i.e. we have utter Knightian uncertainty about Omega). This is essentially the usual Nirvana construction.
The new setup produces the same optimal action as before. However, we can now give an alternative description of the decision rule.
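Before giving that alternative description: here is a minimal numerical check (my own sketch, assuming numpy, on a randomly generated instance, with crisp β represented by finitely many extreme points) that the Nirvana construction indeed leaves the optimal action unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny instance: 3 unobservable states, 2 actions, a random reward table,
# and a crisp belief beta given by its extreme points (distributions over S).
n_S, n_A = 3, 2
r = rng.uniform(size=(n_A, n_S))                     # r(a, s) in [0, 1]
beta = rng.dirichlet(np.ones(n_S), size=4)           # 4 extreme points of beta

def E_beta(g):
    # infra-expectation of g : S -> R under the crisp set beta
    return min(beta @ g)

# Original IBDT decision rule: argmax_a E_beta[r(a, .)]
a_star = max(range(n_A), key=lambda a: E_beta(r[a]))

# Nirvana construction: S' = A x S, with beta' the pullback of beta
# (Knightian uncertainty over Omega's prediction p), so
# E_beta'[f] = min_p E_beta[f(p, .)].
def E_beta_prime(f):                                 # f has shape (n_A, n_S)
    return min(E_beta(f[p]) for p in range(n_A))

def r_prime(a):
    # r'(a, (p, s)) = r(a, s) when p == a, and 1 ("Nirvana") when p != a
    f = np.ones((n_A, n_S))
    f[a] = r[a]
    return f

a_star_prime = max(range(n_A), key=lambda a: E_beta_prime(r_prime(a)))
assert a_star == a_star_prime                        # same optimal action
print(a_star, a_star_prime)
```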
For any p ∈ A, define Ω_p ∈ □S′ by E_{Ω_p}[f] := min_{s∈S} f(p, s). That is, Ω_p is an infra-Bayesian representation of the belief “Omega will make prediction p”. For any u ∈ [0,1], define R_u ∈ □S′ by E_{R_u}[f] := min_{μ∈ΔS′ : E_μ[r(p,s)]≥u} E_μ[f(p, s)]. R_u can be interpreted as the belief “assuming Omega is accurate, the expected reward will be at least u”.
We will also need to use the order ⪯ on □X defined by: ϕ ⪯ ψ when ∀f ∈ [0,1]^X : E_ϕ[f] ≥ E_ψ[f]. The reversal is needed to make the analogy to logic intuitive. Indeed, ϕ ⪯ ψ can be interpreted as “ϕ implies ψ”[2], the meet operator ∧ can be interpreted as logical conjunction and the join operator ∨ can be interpreted as logical disjunction.
Claim:
a* = argmax_{a∈A} max{u ∈ [0,1] | β′ ∧ Ω_a ⪯ R_u}
(Actually I only checked it when we restrict to crisp infradistributions, in which case ∧ is intersection of sets and ⪯ is set containment, but it’s probably true in general.)
Now, β′ ∧ Ω_a ⪯ R_u can be interpreted as “the conjunction of the belief β′ and Ω_a implies R_u”. Roughly speaking: “according to β′, if the predicted action is a then the expected reward is at least u”. So, our decision rule says: choose the action that maximizes the value of u for which this logical implication holds (but “holds” is better thought of as “is provable”, since we’re talking about the agent’s belief). Which is exactly the decision rule of MUDT!
Apologies for the potential confusion between □ as “space of infradistributions” and the □ of modal logic (not used in this post). ↩︎
Technically it’s better to think of it as “ψ is true in the context of ϕ”, since it’s not another infradistribution and so is not a genuine implication operator. ↩︎
Shortform #22 Packing, Organizing, and Preparing.
Today was a good day.
I spent most of it going through things, throwing away, organizing, sorting, and packing said things depending on what they were, and got a lot done in preparation for moving because of that. I’m looking forward to finishing up my resume tomorrow and getting feedback on it then finishing up my profile on the job sites I made an account on.
I’ve enjoyed watching the most recent season (part 3 IIRC) of Disenchantment as well, and apparently The Magicians has a new season too, exciting!
Happy Friday Y’all :)
Willa
How do you effectively represent knowledge in learning?
How do you effectively apply recognition-primed decision making rather than LessWrong biases?
https://www.youtube.com/watch?v=n5OO9L67jL4
Is breadth of knowledge, depth of knowledge or applicability of knowledge more important? Whatever the answer is, how do you more dakka the hell out of it?
Does anyone know of a good technical overview of why it seems hard to get Whole Brain Emulations before we get neuromorphic AGI?
I think maybe I read a PDF that made this case years ago, but I don’t know where.
I haven’t seen such a document but I’d be interested to read it too. I made an argument to that effect here: https://www.lesswrong.com/posts/PTkd8nazvH9HQpwP8/building-brain-inspired-agi-is-infinitely-easier-than
(Well, a related argument anyway. WBE is about scanning and simulating the brain rather than understanding it, but I would make a similar argument using “hard-to-scan” and/or “hard-to-simulate” things the brain does, rather than “hard-to-understand” things the brain does, which is what I was nominally blogging about. There’s a lot of overlap between those anyway; the examples I put in mostly work for both.)
Great. This post is exactly the sort of thing that I was thinking about.
I remember reading a Zvi Mowshowitz post in which he says something like “if you have concluded that the most ethical thing to do is to destroy the world, you’ve made a mistake in your reasoning somewhere.”
I spent some time searching around his blog for that post, but couldn’t find it. Does anyone know what I’m talking about?
It sounds like a tagline for a blog.
Probably this one?
http://lesswrong.com/posts/XgGwQ9vhJQ2nat76o/book-trilogy-review-remembrance-of-earth-s-past-the-three
Thanks!
I thought that it was in the context of talking about EA, but maybe this is what I am remembering?
It seems unlikely though, since I wouldn’t have read the spoiler part.
Double cruxes aren’t supposed to be something you win or lose, as I understand it—a double crux is a collaborative effort to help both parties arrive at a better understanding of the truth. It’s problematic when admitting that you’re wrong and changing your mind is called “losing”.
Strong agree.
People can lose debates, but debate != Double Crux.
I will note that I’m surprised that this currently stands at negative karma (-1)
Shortform #21 Functional strength training and job hunting, oh my.
I had a most excellent day :)
I created accounts on job posting sites and started hunting.
Joined a discord video call with two friends and we did 30 minutes of functional strength training together, I am now really sore, but am happy I worked out!
I did virtual co-working for ~3 hours.
My resume is out of date and pretty bad, I’ll fix it up tomorrow using RMarkdown and other nice R things so that my newly created resume will be up to date AND pretty / well styled. I’m meeting (virtually) with a friend on Saturday who runs career building and resume workshops and they have graciously agreed to review and give me feedback on the newly created resume. Thank you to them!
Once I have a new and up to date resume, I can add that to all the job sites I signed up at and finish making + polishing my profile on all of them.
I currently run my website on an AWS Lightsail instance with Wordpress as the CMS. I don’t think that’s working for me, and the website isn’t paying rent design-wise, content-wise, nor financially (though it is really really really cheap to operate, so I’m not losing much). So, in addition to LW2019Review writing, I’m going to make time (that doesn’t subtract from job hunting time) to redo my website and axe Wordpress as my CMS since I don’t like it. Using a static-site generator and adding a little bit of custom stuff (I really like the functionality and design of Gwern’s website so I will steal inspiration from there) will probably result in a much nicer looking, easier to manage, and more functional (for what I care about) site, so I’ll do those things.
A weird side effect of job hunting today has been a really strong desire to code. Guess I’ll be doing much more of that going forward.
Be well!
Cheers,
Willa
In a bayesian rationalist view of the world, we assign probabilities to statements based on how likely we think they are to be true. But truth is a matter of degree, as Asimov points out. In other words, all models are wrong, but some are less wrong than others.
Consider, for example, the claim that evolution selects for reproductive fitness. Well, this is mostly true, but there’s also sometimes group selection, and the claim doesn’t distinguish between a gene-level view and an individual-level view, and so on...
So just assigning it a single probability seems inadequate. Instead, we could assign a probability distribution over its degree of correctness. But because degree of correctness is such a fuzzy concept, it’d be pretty hard to connect this distribution back to observations.
Or perhaps the distinction between truth and falsehood is sufficiently clear-cut in most everyday situations for this not to be a problem. But questions about complex systems (including, say, human thoughts and emotions) are messy enough that I expect the difference between “mostly true” and “entirely true” to often be significant.
Has this been discussed before? Given Less Wrong’s name, I’d be surprised if not, but I don’t think I’ve stumbled across it.
This feels generally related to the problems covered in Scott and Abram’s research over the past few years. One of the points that stuck out to me the most (roughly paraphrased, since I don’t want to look it up) was that our current formulations of bayesianism, like Solomonoff induction, only formulate the idea of a hypothesis at such a low level that even trying to think about a single hypothesis rigorously is basically impossible with bounded computational time. So in order to actually think about anything you have to somehow move beyond naive bayesianism.
This seems reasonable, thanks. But I note that “in order to actually think about anything you have to somehow move beyond naive bayesianism” is a very strong criticism. Does this invalidate everything that has been said about using naive bayesianism in the real world? E.g. every instance where Eliezer says “be bayesian”.
One possible answer is “no, because logical induction fixes the problem”. My uninformed guess is that this doesn’t work, because there are comparable problems with applying logical induction to the real world. But if this is your answer, follow-up question: before we knew about logical induction, were the injunctions to “be bayesian” justified?
(Also, for historical reasons, I’d be interested in knowing when you started believing this.)
I think it definitely changed a bunch of stuff for me, and does at least a bit invalidate some of the things that Eliezer said, though not actually very much.
In most of his writing Eliezer used bayesianism as an ideal that was obviously unachievable, but that still gives you a rough sense of what the actual limits of cognition are, and rules out a bunch of methods of cognition as being clearly in conflict with that theoretical ideal. I did definitely get confused for a while and tried to apply Bayes to everything directly, and then felt bad when I couldn’t actually apply Bayes’ theorem in some situations, which I now realize is because those tended to be problems where embeddedness or logical uncertainty mattered a lot.
My shift on this happened over the last 2-3 years or so. I think starting with Embedded Agency, but maybe a bit before that.
Which ones? In Against Strong Bayesianism I give a long list of methods of cognition that are clearly in conflict with the theoretical ideal, but in practice are obviously fine. So I’m not sure how we distinguish what’s ruled out from what isn’t.
Can you give an example of a real-world problem where logical uncertainty doesn’t matter a lot, given that without logical uncertainty, we’d have solved all of mathematics and considered all the best possible theories in every other domain?
I think in practice there are lots of situations where you can confidently create a kind of pocket universe where you can actually consider hypotheses in a bayesian way.
Concrete example: Trying to figure out who voted a specific way on a LW post. You can condition pretty cleanly on vote-strength, and treat people’s votes as roughly independent, so if you have guesses on how different people are likely to vote, it’s pretty easy to create the odds ratios for basically all final karma + vote numbers and then make a final guess based on that.
It’s clear that there is some simplification going on here, by assigning static probabilities to people’s vote behavior, treating the votes as independent (though modeling some of the dependence wouldn’t be too hard), etc. But overall I expect it to perform pretty well and to give you good answers.
(Note, I haven’t actually done this explicitly, but my guess is my brain is doing something pretty close to this when I do see vote numbers + karma numbers on a thread)
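For what it’s worth, here is a minimal sketch of that kind of calculation (my own illustrative Python; the names, priors, and vote strengths are made up, and it assumes upvotes only and independent voters):

```python
from itertools import combinations

# Hypothetical candidate voters: (name, prior prob. they voted, vote strength).
people = [("alice", 0.8, 2), ("bob", 0.5, 1), ("carol", 0.3, 1), ("dave", 0.2, 3)]
observed_votes, observed_karma = 2, 3          # what the post actually displays

def prior(voted):
    pr = 1.0
    for name, p, _ in people:
        pr *= p if name in voted else (1 - p)
    return pr

# Posterior over "who voted" = prior of each subset consistent with the
# observed vote count and karma total, renormalized.
posterior = {}
for k in range(len(people) + 1):
    for combo in combinations(people, k):
        if len(combo) == observed_votes and sum(w for _, _, w in combo) == observed_karma:
            names = frozenset(n for n, _, _ in combo)
            posterior[names] = prior(names)

Z = sum(posterior.values())
for names, pr in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(set(names), round(pr / Z, 3))        # e.g. {'alice', 'bob'} 0.7, ...
```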
Well, it’s obvious that anything that claims to be better than the ideal bayesian update is clearly ruled out. I.e. arguments that by writing really good explanations of a phenomenon you can get to a perfect understanding. Or arguments that you can derive the rules of physics from first principles.
There are also lots of hypotheticals where you do get to just use Bayes properly and then it provides very strong bounds on the ideal approach. There are a good number of implicit models behind lots of standard statistics models that when put into a bayesian framework give rise to a more general formulation. See the Wikipedia article for “Bayesian interpretations of regression” for a number of examples.
Of course, in reality it is always unclear whether the assumptions that give rise to various regression methods actually hold, but I think you can totally say things like “given these assumption, the bayesian solution is the ideal one, and you can’t perform better than this, and if you put in the computational effort you will actually achieve this performance”.
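One standard instance of this (a sketch of my own, assuming numpy): under a Gaussian prior on the weights and Gaussian noise, ridge regression is exactly the Bayesian posterior mean, with penalty λ = σ²/τ².

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, sigma2, tau2 = 50, 3, 0.5, 2.0        # noise variance, prior variance
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Ridge regression with penalty lambda = sigma2 / tau2
lam = sigma2 / tau2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Bayesian posterior mean under prior w ~ N(0, tau2 I) and Gaussian noise:
# posterior covariance = (X^T X / sigma2 + I / tau2)^{-1}
post_cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(d) / tau2)
w_bayes = post_cov @ (X.T @ y) / sigma2

print(np.allclose(w_ridge, w_bayes))        # True: ridge = posterior mean / MAP
```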
Are you able to give examples of the times you tried to be Bayesian and it failed because embeddedness was an issue?
Scott and Abram? Who? Do they have any books I can read to familiarize myself with this discourse?
Scott: https://lesswrong.com/users/scott-garrabrant
Abram: https://lesswrong.com/users/abramdemski
Scott Garrabrant and Abram Demski, two MIRI researchers.
For introductions to their work, see the Embedded Agency sequence, the Consequences of Logical Induction sequence, and the Cartesian Frames sequence.
See the section about scoring rules in the Technical Explanation.
Hmmm, but what does this give us? He talks about the difference between vague theories and technical theories, but then says that we can use a scoring rule to change the probabilities we assign to each type of theory.
But my question is still: when you increase your credence in a vague theory, what are you increasing your credence about? That the theory is true?
Nor can we say that it’s about picking the “best theory” out of the ones we have, since different theories may overlap partially.
If we can quantify how good a theory is at making accurate predictions (or rather, quantify a combination of accuracy and simplicity), that gives us a sense in which some theories are “better” (less wrong) than others, without needing theories to be “true”.
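As a toy illustration of that (my own sketch, assuming numpy): the average log score already ranks predictors by how good their predictions are, with no need for any of them to be exactly “true”; a complexity penalty can be subtracted on top if you want the accuracy-plus-simplicity version.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.8                                  # actual frequency of the event
outcomes = rng.random(100_000) < true_p

def avg_log_score(p):
    """Average log score of a 'theory' that always assigns probability p."""
    return np.mean(np.where(outcomes, np.log(p), np.log(1 - p)))

for p in (0.5, 0.7, 0.8, 0.95):
    print(p, round(float(avg_log_score(p)), 4))
# The calibrated prediction p = 0.8 scores best; vaguer (0.5) and
# overconfident (0.95) theories score worse, i.e. are "more wrong".
```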
Related but not identical: this shortform post.
(This is a basic point on conjunctions, but I don’t recall seeing its connection to Occam’s razor anywhere)
When I first read Occam’s Razor back in 2017, it seemed to me that the essay only addressed one kind of complexity: how complex the laws of physics are. If I’m not sure whether the witch did it, the universes where the witch did it are more complex, and so these explanations are exponentially less likely under a simplicity prior. Fine so far.
But there’s another type. Suppose I’m weighing whether the United States government is currently engaged in a vast conspiracy to get me to post this exact comment. This hypothesis doesn’t really demand a more complex source code, but I think we’d say that Occam’s razor shaves away this hypothesis anyways—even before weighing object-level considerations. This hypothesis is complex in a different way: it’s highly conjunctive in its unsupported claims about the current state of the world. Each conjunct eliminates many ways it could be true, given my current uncertainty, and so I should deem it correspondingly less likely.
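To put a rough magnitude on that (illustrative numbers of my own, not from the comment): a hypothesis built out of thirty independent unsupported conjuncts, each generously granted probability 0.9, starts out at 0.9^30 ≈ 0.04 before any object-level evidence is weighed; at 0.5 per conjunct it starts at roughly 10^-9.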
I agree with the principle but I’m not sure I’d call it “Occam’s razor”. Occam’s razor is a bit sketchy, it’s not really a guarantee of anything, it’s not a mathematical law, it’s like a rule of thumb or something. Here you have a much more solid argument: multiplying many probabilities into a conjunction makes the result smaller and smaller. That’s a mathematical law, rock-solid. So I’d go with that...
My point was more that “people generally call both of these kinds of reasoning ‘Occam’s razor’, and they’re both good ways to reason, but they work differently.”
Oh, hmm, I guess that’s fair; now that you mention it I do recall hearing a talk where someone used “Occam’s razor” to talk about the Solomonoff prior. Actually he called it “Bayes Occam’s razor”, I think. He was talking about a probabilistic programming algorithm.
That’s (1) not physics, and (2) includes (as a special case) penalizing conjunctions, so maybe it’s related to what you said. Or sorry if I’m still not getting what you meant.
The structure of knowledge is an undirected graph (with cycles) between concepts. To make it easier to present to the novice, experts convert that graph into a tree structure by removing some edges. Then they convert that tree into natural language. This is called a textbook.
Scholarship is the act of converting the textbook language back into nodes and edges of a tree, and then filling in the missing edges to convert it into the original graph.
The mind cannot hold the entire graph in working memory at once. It’s as important to practice navigating between concepts as learning the concepts themselves. The edges are as important to the structure as the nodes. If you have them all down pat, then you can easily get from one concept to another.
It’s not always necessary to memorize every bit of knowledge. Part of the graph is knowing which things to memorize, which to look up, and where to refer to if you need to look something up.
Feeling as though you’ve forgotten is not easily distinguishable from never having learned something. When people consult their notes and realize that they can’t easily call to mind the concepts they’re referencing, this is partly because they’ve never practiced connecting the notes to the concepts. There are missing edges on the graph.
Shortform #20 It’s time to hunt down a job!
Today was marvelous :)
I walked 2.18 miles indoors while on phone calls; I did 20 pushups and 100 situps at noon.
I virtually co-worked for about 2 hours and made good progress towards writing my review of Gears-Level Models are Capital Investments.
I began organizing and packing up in preparation for moving.
It was pointed out to me that I keep working on a bunch of different things but haven’t yet started searching for jobs, despite the fact that finding a good job will enable me to move to Seattle and do more fun things in life. Point noted and taken to heart!
Job hunting is now my highest priority, and I will focus on it exclusively while virtually co-working, plus during productive time by myself. I will continue writing my three reviews (for the LW2019Review) during non-workday hours / in my spare time, but my workday hours will be focused on job hunting.
Good luck y’all :)
Cheers,
Willa
An acquaintance recently started a FB post with “I feel like the entire world has gone mad.”
My acquaintance was maybe being a bit humorous; nevertheless, I was reminded of this old joke:
I guess it’s my impression that a lot of people have the “I feel large chunks of the world have gone mad” thing going, who didn’t have it going before (or not this much or this intensely). (On many sides, and not just about the Blue/Red Trump/Biden thing.) I am curious whether this matches others’ impressions. (Or if anyone has studies/polls/etc. that might help with this.)
Separately but relatedly, I would like to be on record as predicting that the amount of this (of people feeling that large numbers of people are totally batshit on lots of issues) is going to continue increasing across the next several years. And is going to spread further beyond a single axis of politicization, to happen almost everywhere.
I’m very open to bets on this topic, if anybody has a suitable operationalization.
I’m also interested in thinking on what happens next, if a very large increase of this sort does occur.
You are a bit late with your prediction ;-)
But seriously, have you seen this?: https://www.reddit.com/r/slatestarcodex/comments/ktviiv/will_the_us_really_experience_a_violent_upheaval/
One of the most important things going on right now, that people aren’t paying attention to: Kevin Buzzard is (with others) formalizing the entire undergraduate mathematics curriculum in Lean. (So that all the proofs will be formally verified.)
See one of his talks here:
Sorry for the stupid question, and I liked the talk and agree it’s a really neat project, but why is it so important? Do you mean important for math, or important for humanity / the future / whatever?
Mostly it just seems significant in the grand scheme of things. Our mathematics is going to become formally verified.
In terms of actual consequences, it’s maybe not so important on its own. But putting a couple pieces together (this, Dan Selsam’s work, GPT), it seems like we’re going to get much better AI-driven automated theorem proving, formal verification, code generation, etc relatively soon.
I’d expect these things to start meaningfully changing how we do programming sometime in the next decade.
Yeah, I get some aesthetic satisfaction from math results being formally verified to be correct. But we could just wait until the AGIs can do it for us… :-P
Yeah, it would be cool and practically important if you could write an English-language specification for a function, then the AI turns it into a complete human-readable formal input-output specification, and then the AI also writes code that provably meets that specification.
I don’t have a good sense for how plausible that is—I’ve never been part of a formally-verified software creation project. Just guessing, but the second part (specification → code) seems like the kind of problem that AIs will solve in the next decade. Whereas the first part (creating a complete formal specification) seems like it would be the kind of thing where maybe the AI proposes something but then the human needs to go back and edit it, because you can’t get every detail right unless you understand the whole system that this function is going to be part of. I dunno though, just guessing.
The workflow I’ve imagined is something like:
human specifies function in English
AI generates several candidate code functions
AI generates test cases for its candidate functions, and computes their results
AI formally analyzes its candidate functions and looks for simple interesting guarantees it can make about their behavior
AI displays its candidate functions to the user, along with a summary of the test results and any guarantees about the input output behavior, and the user selects the one they want (which they can also edit, as necessary)
In this version, you go straight from English to code, which I think might be easier than from English to formal specification, because we have lots of examples of code with comments. (And I’ve seen demos of GPT-3 doing it for simple functions.)
I think some (actually useful) version of the above is probably within reach today, or in the very near future.
Seems reasonable.
It seems to me that months ago, we should have been founding small villages or towns that enforce contact tracing and required quarantines, both for contacts of people who are known to have been exposed, and for people coming in from outside the bubble. I don’t think this is possible in all states, but I’d be surprised if there was no state where this is possible.
I think it’d be much simpler to find the regions/towns doing this, and move there. Even if there’s no easy way to get there or convince them to let you in, it’s likely STILL more feasible than setting up your own.
If you do decide to do it yourself, why is a village or town the best unit? It’s not going to be self-sufficient regardless of what you do, so why is a town/village better than an apartment building or floor (or shared- or non-shared house)?
In any case, if this was actually a good idea months ago, it probably still is. Like planting a tree, the best time to do it is 20 years ago, and the second-best time is now.
Are there any areas in the states doing this? I would go to NZ or South Korea, but getting there is a hassle compared to going somewhere in the states. Regarding size, it’s not about self-sufficiency, but rather being able to interact in a normal way with other people around me without worrying about the virus, so the more people involved the better
That was my point. Doesn’t the hassle of CREATING a town seem incomparably larger than the hassle of getting to one of these places?
On an individual basis, I definitely agree. Acting alone, it would be easier for me to personally move to NZ or SK than to found a new city. However, from a collective perspective (and if the LW community isn’t able to coordinate collective action, then it has failed), if a group of 50–1000 people all wanted to live in a place with sane precautions, and were willing to put in effort, creating a new town in the states would scale better (moving countries has effort scaling linearly with the magnitude of the population flux, while founding a town scales less than linearly).
Oh, we’re talking about different things. I don’t know much about any “LW community”, I just use LW for sharing information, models, and opinions with a bunch of individuals. Even if you call that a “community”, as some do, it doesn’t coordinate any significant collective action. I guess it’s failed?
Sorry, I don’t think I succeeded at speaking with clarity there. The way you use LW is perfectly fine and good.
My view of LW is that it’s a site dedicated to rationality, both epistemic and instrumental. Instrumental rationality is, as Eliezer likes to call it, “the art of winning”. The art of winning often calls for collective action to achieve the best outcomes, so if collective action never comes about, then that would indicate a failure of instrumental rationality, and thereby a failure of the purpose of LW.
LW hasn’t failed. While I have observed some failures of the collective userbase to properly engage in collective action to the fullest extent, I find it does often succeed in creating collective action, often thanks to the deliberate efforts of the LW team.
Fair enough, and I was a bit snarky in my response. I still have to wonder, if it’s not worth the hassle for a representative individual to move somewhere safer, why we’d expect it to be worth a greater hassle (both individually and in coordination cost) to create a new town. Is this a case where rabbits are negative value, so stags are the only option (reference: https://www.lesswrong.com/posts/zp5AEENssb8ZDnoZR/the-schelling-choice-is-rabbit-not-stag)? I’d love to see some cost/benefit estimates showing that it’s even close to reasonable, compared to just isolating as much as possible individually.
I think you’re omitting constant factors from your analysis; founding a town is so, so much work. How would you even run utilities out to the town before the pandemic ended?
I acknowledge that I don’t know how the effort needed to found a livable settlement compares to the effort needed to move people from the US to a Covid-good country. If I knew how many person-hours each of these would take, it would be easier for me to know whether or not my idea doesn’t make sense.
FYI, folks at MIRI seem to be actively looking into this, but it is indeed pretty expensive and not an obviously good idea.