oof, good catch, fixed.
`0.75 to 0.95` vs. `0.75 to 0.9` is strictly my transcription bug; I wasn't being careful enough.
0.75 to 0.95
0.75 to 0.9
In general I wasn't auditing the code from the Jonas Moss comment; I just stepped through it looking at the functionality. I should've been more careful if I was going to make a claim about the conversion factor.
You're kinda right about the question "if it's a constant number of lines written exactly once, does it really count as boilerplate?" I can see how it feels a little dishonest of me to imply that the ratio is really 15:1. The example I was thinking of was the Biological Anchors Report ("Ajeya's Timelines"); those notebooks have lots of LOC in hidden cells, but the relative cost of that boilerplate goes down as the length of the report goes up. All that considered, I could be updated toward the idea that the boilerplate point is moot for power users (who are probably able and willing to provide that boilerplate once per file), but I would still be excited about what is opened up for more casual users.
You're right (or your comment indirectly suggests to me) that Squiggle, having not yet provided a way to give non-default quantiles with the `to` syntax, hasn't done anything to show that it'd really beat hand-crafted Python functions at accomplishing this.
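For concreteness, here's a minimal sketch of the kind of hand-crafted Python function I have in mind: fitting a lognormal to two arbitrary quantiles, whereas (as I understand it) Squiggle's `to` pins the 5th and 95th percentiles. The helper name and defaults are my own illustration, not anything from Squiggle or the notebook.

```python
from scipy.stats import norm, lognorm
import numpy as np

def lognormal_from_quantiles(q_lo, q_hi, p_lo=0.05, p_hi=0.95):
    """Fit a lognormal whose p_lo and p_hi quantiles are q_lo and q_hi."""
    z_lo, z_hi = norm.ppf(p_lo), norm.ppf(p_hi)
    # solve ln(q) = mu + sigma * z for the two quantile constraints
    sigma = (np.log(q_hi) - np.log(q_lo)) / (z_hi - z_lo)
    mu = np.log(q_lo) - sigma * z_lo
    return lognorm(s=sigma, scale=np.exp(mu))

# analogue of `0.75 to 0.95`, but pinning the 10th/90th percentiles instead
dist = lognormal_from_quantiles(0.75, 0.95, p_lo=0.10, p_hi=0.90)
print(dist.ppf([0.10, 0.90]))  # should recover [0.75, 0.95]
```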
Re the underlying squiggle notebook concerning GiveDirectly and so on, I’ve flagged your comment to Sam (it’s something else I haven’t taken a close look at).
Yes, the problem is real. I’d try your solution if it existed.
Optimal for me would be emacs or vscode keybindings, not the 4-fingers of tablet computing.
Unlikely, see here (Rohin wrote a TLDR for alignment newsletter, see the comment).
Some of what follows is similar to something I wrote on EA Forum a month or so ago.
Returns on meatspace are counterfactually important to different people to different degrees. I think it's plausible that some people simply can't keep their eye on the ball if they're not getting consistent social rewards for trying to do the thing, or that the added bandwidth you get when you move from Discord to meatspace actually provides game-changing information.
I have written that if you're not the type who super needs to be in meatspace with their tribe, i.e. if you can cultivate and preserve agentiness online, it may be imperative for you to defect in the "everyone move to the Bay" game, specifically to guard against brain drain, because people who happen to live in non-Bay cities really do, I think, deserve access to agenty/ambitious people working on projects. An underrated movement-building theory of change is that someone fails the university entrance exam in Minneapolis, and we're there to support them.
However, I'm decreasingly interested in my hypothesis about why brain drain is even bad. I'm not sure the few agenty people working on cool projects in Philly are really doing all that much for the not-very-agenty sections of the movement that happen to live in Philly. That's a conclusion I really didn't want to draw, but I've had way too much of going to an ACX or EA meetup and meeting some nihilist-adjacent guy who informs me that, free will being fake, trying to fix problems is pointless. I'm concluding that people have to want to cultivate ambition/agentiness and epistemics before I can really add any value. I read this as a point against heeding the brain drain concern. There's a sense in which I can take PG's post about cities very seriously, conclude that the nihilist-adjacent guy is a property of Philly, and conclude that it's really important for me to try other cities, since what I'm bringing to Philly is being wasted and Philly isn't bringing a lot to me. There's another sense in which I take PG's post seriously but think Philly isn't unique among not-quite-top-5 US cities, and another sense in which I don't take PG's post seriously at all. The fourth sense, crucially, is that my personal exhaustion with the nihilist-adjacent guy doesn't actually relate to the value I can add if I'm there for someone when they flunk the university entrance exam (I want a Shapley points allocation for saving a billion lives, dammit!).
Another remark is that a friend who used to live in the bay once informed me that “yeah you meet people working on projects very much all the time, but so many of the projects are kinda dumb”. So I may end up being just as frustrated with the Bay as I am with Philly if I tried living there. Uncertain.
I was reminiscing about my prediction market failures; the clearest "almost won a lot of mana dollars" (if Manifold Markets had existed back then) was this executive order. The campaign speeches made it fairly obvious, and I'm still salty about a few idiots telling me "stop being hysterical" when, pre-inauguration, I accused him of being exactly what it said on the tin, even though I overall remember that being a time when my epistemics were way worse than they are now.
However, there does seem to be a need for a word for "lacked shock but failed to predict concretely". We were threat-modeling a ton of crazy stuff back then! So what if you can econo-splain "well, if you didn't predict concretely then you were, by definition, shocked"; the more useful and accurate account sounds more like "we were worried about various classes of populist atrocities, some of which would look hysterical in hindsight, and those that would look hysterical in hindsight crowded out the ability to write detailed executive orders just to win the mana dollars / bayes points / etc." Early onsets of a populist swing are so anxiety-inducing and chaotic that I forgive myself for making an at least token attempt at security mindset by thinking about how bad it could get, but I shouldn't do so too quickly: a post-Manifold-Markets populist would give me a great opportunity to take things seriously and put a little of that anxiety to use.
So of course, what is the institutional role of Metaculus or Manifold in the lead-up to January 6, 2021, or things in that reference class? Again, "didn't write down a detailed description of what would happen, but isn't shocked when it does". It cost 0 IQ points to observe in the months leading up to the election that the administration would be a sore loser in worlds where they lost. So why is it so subtle to leverage this observation to gain actual mana dollars or Metaculus ranking? This seems like an open problem to me.
Is there an EV monad? I’m inclined to think there is not, because EV(EV(X)) is a way simpler structure than a “flatmap” analogue.
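To make that intuition concrete, here's a toy Python sketch (my own illustration, not anyone's formalization): the monad-like structure lives in the distribution, whose flatmap mixes a distribution of distributions back into a distribution, while EV is just a collapse from distributions over numbers to numbers, so EV(EV(X)) is trivially idempotent rather than anything join-like.

```python
from collections import defaultdict

class Dist:
    """Finite distribution: a mapping from outcome to probability."""
    def __init__(self, probs):
        self.probs = dict(probs)

    def flatmap(self, f):
        """Monadic bind: f sends an outcome to a Dist; mix the results by weight."""
        out = defaultdict(float)
        for x, p in self.probs.items():
            for y, q in f(x).probs.items():
                out[y] += p * q
        return Dist(out)

def ev(d):
    """Expected value: collapses a Dist over numbers to a single number."""
    return sum(x * p for x, p in d.probs.items())

coin = Dist({0: 0.5, 1: 0.5})
# flatmap keeps us inside Dist: "flip, then flip again and add"
two_flips = coin.flatmap(lambda a: Dist({a + b: p for b, p in coin.probs.items()}))
print(ev(two_flips))  # 1.0 -- and taking EV of that number just gives the number back
```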
I find myself, just as a random guy, deeply impressed at the operational competence of airports and hospitals. Any good books about that sort of thing?
In the FLI podcast debate, Stuart Russell outlined things like instrumental convergence and corrigibility (though these took a backseat to his own standard/nonstandard model approach), challenged Pinker to publish his reasons for not being compelled to panic in a journal, and warned him that many people would emerge to tinker with and poke holes in his models.
The main thing I remember from that debate is that Pinker thinks the AI x-risk community is needlessly projecting "will to power" (in the Nietzschean sense) onto software artifacts.
You may be interested: the NARS literature describes a system that represents goals as atoms and uses them to shape the pops from a data structure they call a "bag", which is more or less a probabilistic priority queue. It can do "competing priorities" reasoning as a natural first-class citizen, and supports mutation of goals.
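A rough sketch (mine, not from any NARS codebase) of the "bag" idea as I understand it: pop() samples an item with probability proportional to its priority, so high-priority goals usually, but not always, win, and priorities can be freely mutated.

```python
import random

class Bag:
    """Probabilistic priority queue: pop() samples proportionally to priority."""
    def __init__(self):
        self.items = {}  # item -> priority

    def put(self, item, priority):
        self.items[item] = priority  # mutating a goal's priority is just re-putting it

    def pop(self):
        total = sum(self.items.values())
        r = random.uniform(0, total)
        acc = 0.0
        for item, priority in self.items.items():
            acc += priority
            if r <= acc:
                del self.items[item]
                return item

goals = Bag()
goals.put("recharge battery", 0.9)
goals.put("explore the room", 0.4)
print(goals.pop())  # usually "recharge battery", sometimes "explore the room"
```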
But overall your question is something I’ve always wondered about.
I made an attempt to write about it here; I refer to systems of fixed/axiomatic goals as "AIXI-like" and systems of driftable/computational goals as "AIXI-unlike".
I share your intuition that this razor seems critical to mathematizing agency! I can conjecture about why we do not observe it in the literature:
Perhaps agent foundations researchers, drawing on some verbal/tribal knowledge that shows up on the occasional whiteboard in Berkeley but doesn't get written up, reason that if goals are a function of time, the image of a sequence of discretized time steps forms a multi-objective optimization problem.
Maybe agent foundations researchers believe that just fixing the totally borked situation of optimization and decision theory with fixed goals costs 10 to 100 tao-years, and that doing it with unfixed goals costs 100 to 1000 tao-years.
Incorrigibility is the desire to preserve goal-content integrity, right? This implies that as time goes to infinity, the agent will desire for the goal to stabilize/converge/become constant. How does it act on this desire? Unclear to me. I’m deeply, wildly confused, as a matter of fact.
(Edited to make headings H3 instead of H1)
Jotted down some notes about the law of mad science on the EA Forum. Looks like some pretty interesting open problems in the global priorities / x-risk strategy space.
Two premises of mine are that I'm more ambitious than nearly everyone I meet in meatspace, and that ambition is roughly normally distributed. This implies that in any relationship, I should expect to be the more ambitious one.
I do aspire to be a nagging voice increasing the ambitions of all my friends. I literally break the ice with acquaintances by asking “how’s your master plan going?” because I try to create vibes like we’re having coffee in the hallway of a supervillain conference, and I like to also ask “what harder project is your current project a warmup for?”.
I'm mostly sure I want kids. I told a gf recently (who does not want kids) that if it seemed like someone would be a good coparent but they made me less ambitious, I would accept the bargain. But what's the implicit premise here?
The premise is of course that in relationships, you drift toward the average of yourself and the other person. Is this plausibly true?
I think there’s a folk wisdom about friendships, which generalizes to romance, that you’re a weighted average of your influences, so you should exercise caution in picking your influences.
Also: the autonomy to leave a dead-end job and go to the EA Hotel was an important part of my ability to cultivate ambition. What price should I put on giving up that autonomy?
However, according to Owain's comment here, there's not a super good reason to expect children to decrease ambition. But it's complicated: that dataset doesn't capture parenting quality.
One comment you could make is “move to the bay and you’ll no longer be the most ambitious person you run into in meatspace”. I’m empirically not someone who needs to be surrounded by like minds in order to thrive, but plausibly like minds could still amplify me. (Separately, I think it’s important for everyone who can afford to not live in the bay to avoid living in the bay, because brain drain and complete absence of cool projects in non-bay cities seem really bad! But I understand that some people simply can’t be ambitious if they’re not getting social rewards for it)
I guess I wonder how best to cultivate ass-kicking, through the kind of automatic cultivation and habituation that comes built in to relationships.
I think 15-20% decrease in ambition is a reasonable price to pay for being a parent. I don’t know if that price is really exacted.
Borlaug was a super absentee parent, his wife did everything herself and he (presumably) sent back cash while globetrotting. How many of these ambitious people with kids aren’t super involved in their kids’ lives?
Does "you are what you can't stop yourself from doing" help you in this time? Querying your revealed preferences for behavior that is beyond effortless, behavior it would take effort *not* to do, can be very informative.
Yesterday I quit my job for direct work on epistemic public goods! Day one of direct work trial offer is April 4th, and it’ll take 6 weeks after that to know if I’m a fulltime hire.
I'm turning down:
- a raise to 200k/yr USD
- building lots of skills and career capital that would give me immense job security in worlds where investment into one particular blockchain doesn't go entirely to zero
- having fun on the technical challenges

in exchange for:
- a confluence of my skillset and a theory of change that could pay huge dividends in the epistemic public goods space
- a 0.35x paycut from my upcoming raise
- the uncertainty of it being a trial offer.
I'm flagging this in such detail to give you strength: if you're ever reasoning about your risk tolerance and your goals, just remember, "look at what Quinn did!"
yeah the bet pressured me to post it a little early.
I'd be interested in an elaboration of your view that comparative advantage is shifting. Do you mean shifting more toward lucrative E2G opportunities? Or shifting away from the capacity to make lucrative alignment contributions?
Do you have any recommendations for what would make it less rambly?
Would there be a way of estimating the ratio of people within the Amazon organization who are fanatical about same-day delivery to those who are "just working a job"? Does anyone have a guess? My guess is that an organization of that size with a lot of cash only needs about 50 true fanatics; the rest can be "mere employees". What do y'all think?