LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon (Raymond Arnold)
Curated.
I’ve spent the past few weeks independently interested in this concept (before mesaoptimizer posted it, actually). I reread the Eliezer tweet while investigating “deliberate practice for solving Confusing Problems™”.
I still have a lot of open questions on “how do you actually do this effectively?” and “how long does it take to pay off in ‘you actually think faster’?”. But I’ve at least transitioned from “I feel like there’s no way I could have ‘thought it faster’” to “I observe specific earlier moments where I failed to notice clues that could have pointed me at the right solution” and “I’ve identified skills I could have had that would have made it possible to identify and act on those clues.”
I’ve personally gotten mileage from writing out in detail what my thought process was, and then writing out in detail “what’s the shortest way I could imagine a superintelligence or someone 40 IQ points higher than me would have reliably done it?”. The process currently takes me ~30 minutes.
A thing I haven’t attempted yet is:
Eliezer Yudkowsky: See, if I’d noticed myself doing anything remotely like that, I’d go back, figure out which steps of thought were actually performing intrinsically necessary cognitive work, and then retrain myself to perform only those steps over the course of 30 seconds.
I’m interested in other people trying this and seeing if useful stuff falls out.
We do not currently have sheet music for most songs. It’s also extra labor to arrange the slides (though this isn’t that big a part of the problem).
This plus “also it’s a lot more work to set up” are my own main cruxes. (If either were false I’d consider it much more strongly.)
how long is Brogue?
Yeah, I do not super stand by how I phrased it in the post. But your second paragraph feels wrong to me too – in some sense, yes, the hidden information in Chess and Slay the Spire is “the same”, but, like, it seems at least somewhat important that in Slay the Spire there are things you can’t predict by purely running simulations forward; you have to have a probability distribution over pretty unknown things.
(I’m not sure I’ll stand by either this or my last comment, either. I’m thinking out loud, and may have phrased things wrong here)
(Though there might be actions a first-time player can take to help pin down the rules of the game, that an experienced player would already know; I’m unclear on whether that counts for purposes of this exercise.)
I think one thing I meant in the OP was more about “the player can choose to spend more time modeling the situation.” Is it worth spending an extra 15 minutes thinking about how the longterm game might play out, and what concerns you may run into that you aren’t currently modeling? I dunno! Depends on how much better you become at playing the game, by spending those 15 minutes.
This is maybe a nonstandard use of “value of information”, but I think it counts.
Seems big if true, and fairly plausible. I’d be interested in chipping in to pay for someone to come up with a methodology for investigating this more, and then running at it if the methodology seemed good.
(also it’s occurring to me it’d be cool to have a “Dollars!/Unit of Caring” react)
I’m not mesaoptimizer, but, fyi, my case is “I totally didn’t find IFS type stuff very useful for years, and then one day I just suddenly needed it, or at least found myself shaped very differently such that it felt promising.” (See My “2.9 trauma limit”.)
My general plan is to mix “work on your real goals” (which takes months to find out if you were on the right track) and “work on faster paced things that convey whether you’ve gained some kind of useful skill you didn’t have before”.
My goal right now is to find (toy, concrete) exercises that somehow reflect the real world complexity of making longterm plans, aiming to achieve unclear goals in a confusing world.
Things that seem important to include in the exercise:
“figuring out what the goal actually is”
“you have lots of background knowledge and ideas of where to look next, but the explosion of places you could possibly look is kinda overwhelming”
managing various resources along the way, but it’s not obvious what those resources are.
you get data from the world (but, not necessarily the most important data)
it’s not obvious how long to spend gathering information, or refining your plan
it’s not obvious whether your current strategy is anywhere close to the best one
The exercise should be short (ideally like a couple hours, but maybe a day or, hypothetically, a week), but somehow metaphorically reflect all those things.
Previously I asked about strategy/resource management games you could try to beat on your first try. One thing I bump into is that often the initial turns are fairly constrained in your choices, only later does it get complex (which is maybe fine, but, for my real world plans, the nigh-infinite possibilities seem like the immediate problem?)
why is it bad to lose/regain?
Lots of people have mentioned various flavors of roguelikes. One of my goals is to have games in different genres. I agree that roguelikes are often a good source of the qualities I’m looking for here but part of the point is to try applying the same skills on radically different setups.
Another thing I’m interested in is “ease of setup”, where you can download the game, open it up, and immediately be in the experience instead of having to do a bunch of steps to get there.
Say more?
too acronymed for me :(
One-shot strategy games?
I was going off a vague sense from having talked to a few people who had scanned the literature more than I.
Right now I’m commissioning a lit review about “transfer learning”, “meta learning”, and things similar to that. My sense so far is that there aren’t a lot of super impressive results, but part of that looks like it’s because it’s hard to teach people relevant stuff in a “laboratory”-esque setting.
My Anthropic take, which is sort of replying to this thread between @aysja and @LawrenceC but felt enough of a new topic to just put here.
It seems overwhelmingly the case that Anthropic is trying to walk some kind of line between “seeming like a real profitable AI company that is worth investing in” and “at the very least paying lip service to, and maybe actually taking really seriously, x-risk.”
(This all goes for OpenAI too. OpenAI seems much worse on these dimensions to me right now. Anthropic feels more like it has the potential to actually be a good/safe org in a way that OpenAI feels beyond hope atm, which is why I’m picking on Anthropic.)
For me, the open, interesting questions are:
Does Dario-and-other-leadership have good models of x-risk, and mitigation methods thereof?
How is the AI Safety community supposed to engage with an org that is operating in epistemically murky territory?
Like, it seems like Anthropic is trying to market itself to investors and consumers as “our products are powerful (and safe)”, and trying to market itself to AI Safety folk as “we’re being responsible as we develop along the frontier.” These are naturally in tension.
I think it’s plausible (although I am suspicious) that Anthropic’s strategy is actually good. I.e., maybe you really do need to iterate on frontier AI to do meaningful safety work; maybe you do need to stay on the frontier because the world is accelerating whether Anthropic wants it to or not. Maybe pausing now is bad. Maybe this all means you need a lot of money, which means you need investors and consumers to believe your product is good.
But, like, for the AI safety community to be epistemically healthy, we need to have some way of engaging with this question.
I would like to live in a world where it’s straightforwardly good to always spell out true things loudly/clearly. I’m not sure I have the luxury of living in that world. I think I need to actually engage with the possibility that it’s necessary for Anthropic to murkily say one thing to investors and another thing to AI safety peeps. But, I do not think Anthropic has earned my benefit of the doubt here.
But, the way I wish the conversation was playing out was less like “did Anthropic say a particular misleading thing?” and more like “how should EA/x-risk/safety folk comport themselves, such that they don’t have to trust Anthropic? And how should Anthropic comport itself, such that it doesn’t have to be running on trust, when it absorbs talent and money from the EA landscape?”
I feel some kinda missing mood in these comments. It seems like you’re saying “Anthropic didn’t make explicit commitments here”, and that you’re not weighting as particularly important whether they gave people different impressions, or benefited from that.
(AFAICT you haven’t explicitly stated “that’s not a big deal”, but, it’s the vibe I get from your comments. Is that something you’re intentionally implying, or do you think of yourself as mostly just trying to be clear on the factual claims, or something like that?)
I think something going on here is that the hypothetical “you actually have to pick one of these two” is pretty weird; normally you have the option to walk away. If I find myself in such a hypothetical, it seems more likely that “well, somehow I’m gonna have to make use of these coupons,” in a way that doesn’t seem normally true.
It depends a lot on the musician and their skillset.
For me: I don’t really speak fluent sheet music. When I write music, I do it entirely by ear. I record it. I have musicians listen to the recording and imitate it by ear. Later on, if I want sheet music, I hire someone to listen to the recording and transcribe it into sheet music after the fact, a process which costs like $200 per song (or free, if I do it myself or get a volunteer, but it’s a couple hours per song and there are like 30 songs, so this is not a quick/easy volunteer process).
Some musicians “think primarily in sheet music”, and then they would do it with sheet music from the get-go as part of the creation process. Some solstice songs already have sheet music for this reason.
I’ve paid money to transcribe ~3-5 solstice songs with sheet music so far.