Currently experimenting with throwing more of my random ideas out there on quick takes.
Law of the Non-Player-Character:
1. Law of one player: Any specific thing you just thought of will never happen[1] unless you (yes, you specifically) make it happen.
2. I do not have the skills/motivation/resources to make that thing happen.
3. Thus, it will not happen.
If you have an interesting idea or project that you probably won’t do, write it down somewhere! It still might not happen, but that’s better than keeping it a secret.
Paraxanthine-based stimulants look to me like a pretty darn low-hanging fruit that took forever to be picked; science has known about caffeine metabolism and paraxanthine’s adenosine-receptor antagonism since at least the early 1980s, yet paraxanthine supplements only became available a few years ago.[9]
Huh, this seems potentially really important. If more insights of similar or greater magnitude are already hidden in the existing literature, it seems worthwhile to put a lot of effort into finding them. I wonder whether existing LLM harnesses would be enough if told to look for the right things, or whether we would need a new one to sort through the literature effectively.
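To make the idea concrete, here is a minimal sketch of what such a harness loop could look like. Everything in it is hypothetical: the prompt, the `Lead` type, and the `llm` callable are stand-ins for whatever completion wrapper a real harness already has.

```python
# Minimal literature-scanning loop: flag abstracts that describe a mechanism
# with an obvious but seemingly unexploited application, paraxanthine-style.
from dataclasses import dataclass
from typing import Callable

PROMPT = (
    "Below is a paper abstract. Does it describe a mechanism or compound "
    "with an obvious practical application that does not yet seem to exist "
    "as a product or intervention? Answer YES or NO, then one sentence of "
    "reasoning.\n\n{abstract}"
)

@dataclass
class Lead:
    title: str
    rationale: str

def scan_abstracts(abstracts: dict[str, str], llm: Callable[[str], str]) -> list[Lead]:
    """Run the flag-raising prompt over each abstract and keep the YES answers."""
    leads = []
    for title, abstract in abstracts.items():
        answer = llm(PROMPT.format(abstract=abstract))
        if answer.strip().upper().startswith("YES"):
            leads.append(Lead(title, answer))
    return leads
```

The hard part is presumably not the loop but the prompt and the filtering of false positives, which is where a purpose-built harness might beat a generic one.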
Has anyone tried to make a sort of standardized test for collaborative rationality/coordination skills (double-crux, ITT, things like in this post)? It seems to me that such a test, plus a badge system on a browser extension[1], would solve the class of problems where both people would coordinate on a better strategy if each knew the other was able to (a sketch of the verification layer is below the footnote).
[1] A browser extension allows for use on more than just this site; also, I don’t know how I would feel about this being an Official LW Thing.
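For the badges to mean anything, the extension needs a way to check that a claimed test result is genuine. A minimal sketch of one possible trust layer follows; the names and the scheme itself are my hypotheticals, not an existing system.

```python
# Whoever administers the test signs "username:skill"; the extension verifies
# the signature before rendering a badge next to that username.
import hashlib
import hmac

SECRET = b"test-administrator-signing-key"  # held by the test administrator

def issue_badge(username: str, skill: str) -> str:
    claim = f"{username}:{skill}"
    sig = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}:{sig}"

def verify_badge(badge: str) -> bool:
    claim, _, sig = badge.rpartition(":")
    expected = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

badge = issue_badge("alice", "double-crux")
assert verify_badge(badge)
```

In practice you’d want public-key signatures (e.g. Ed25519) rather than a shared secret, so the extension can verify badges without being able to mint them.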
Hmm, I think I misphrased my OP: I agree that a lot of what we write about wouldn’t produce those sorts of immediate changes. I meant more that this is a rather profitable business model that I expect some of us to be very well-suited to, and one which also happens to work towards a core goal of our community. Separately, I expect we spend far less time than we should finding ways to bet on the seemingly ambiguous things, but I don’t think that needs to work out for an LW investigative journal to be profitable. It seems plausible that the people with the necessary skillsets are currently busy with more valuable things, but I’m a little suspicious of that argument because of the aforementioned lack of funding.
LessWrong appears to sit at the Pareto frontier of truthseeking, calibration, and clear writing about the facts we’ve discovered. That is a mix of investigative-journalist and quant traits. If you attach a hedge fund to an investigative journal, it becomes profitable to write based on how accurate your information is relative to the current consensus. We seem fairly funding-constrained as a community, and this looks like an opportunity to profitably raise the sanity waterline. AFAIK rationalists haven’t done this; why not? FTX?
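The economics here are just Kelly betting: the edge is the gap between your credence and the consensus price. A toy illustration (the binary-contract framing and all numbers are mine, not anything claimed above):

```python
# If your reporting gives you credence p in an event that the market prices
# at q, Kelly betting converts the gap into expected log-wealth growth.
import math

def kelly_fraction(p: float, q: float) -> float:
    """Optimal bankroll fraction for a binary contract priced at q,
    given your own credence p; f* = (p - q) / (1 - q), positive means buy."""
    return (p - q) / (1 - q)

def expected_log_growth(p: float, q: float, f: float) -> float:
    """Expected log-wealth growth per bet when staking fraction f at price q."""
    # Win: the f/q contracts each pay out 1; lose: the stake f is gone.
    return p * math.log(1 + f * (1 - q) / q) + (1 - p) * math.log(1 - f)

p, q = 0.60, 0.45  # your credence vs. the market consensus
f = kelly_fraction(p, q)
print(f"Kelly fraction: {f:.1%}")                                 # ~27.3% of bankroll
print(f"Log growth per bet: {expected_log_growth(p, q, f):.3f}")  # ~0.045
```

The point of the log-growth number is that a durable research edge compounds, which is what would fund the journalism.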
I wonder if partnering with high-impact charities would make the circuit more profitable? Somewhere in the book, a wealthy banker notes that he feels bad about spending so much at one of these parties. If ~half of the money went to GiveWell’s top charities, then they could feel good! They’d be saving lives at a significantly higher rate than if they’d just donated to a random charity! The club gets more consumption (hopefully), a tax write-off, and could add a leaderboard to drive competition/number-go-up (e.g. “X person: 10 lives saved”).
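The leaderboard math is back-of-envelope stuff; the cost-per-life figure below is a rough commonly cited estimate I’m assuming for illustration, not GiveWell’s official number.

```python
# Convert a night's spend into a leaderboard score, assuming roughly $5,000
# per life saved for a GiveWell top charity and half of revenue donated.
COST_PER_LIFE_USD = 5_000   # rough assumption for illustration
DONATED_SHARE = 0.5         # the "~half of the money" above

def lives_saved(total_spend_usd: float) -> float:
    return total_spend_usd * DONATED_SHARE / COST_PER_LIFE_USD

print(f"$100,000 night -> {lives_saved(100_000):.0f} lives on the board")  # 10
```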
Are the new songs going to be posted to YouTube/Spotify, or should I be downloading them?
Very late follow-up question: how much additional effort do you think would be necessary to try to do this for something like stamina/energy? It seems like some people (e.g. Eliezer) are bottlenecked more on that than on intelligence, and in general, alignment researchers having more energy than their capabilities counterparts seems very useful.
Instead, the U.S. government will do what it has done every time it’s been convinced of the importance of a powerful new technology in the past hundred years: it will drive research and development for military purposes.
I wonder if there is an actual path to alignment-pilling the US government by framing it as a race to solve alignment? That would push them toward military projects focused on aligning AI as quickly as possible, rather than on building a hostile god. It also seems like a fairly defensible position politically, with everything framed as a struggle between powers to get aligned AI first, and misaligned AI counted as one of the powers.
Something like: “Whoever solves alignment first wins the future of the galaxy, therefore we need to race to solve alignment. Capabilities don’t help unless they’re aligned; unaligned capabilities just move us closer to a hostile power (the AI) winning and wiping us out.”
It seems plausible to me that some portion of IQ-enhancing genes work through pathways outside the brain (blood flow, faster metabolism, nutrient delivery, stimulant-like effects, etc.). If that is the case, and even just a small portion of the edits don’t need to reach the brain, couldn’t you get large IQ increases without ever crossing the blood-brain barrier?
If timelines are short, it seems worthwhile to do that first and use the gains to bootstrap from there. Would that give significant returns fast enough to be worth doing? Is this something you’re already trying to do?
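For a sense of scale, here is the arithmetic behind that hope, with entirely made-up effect sizes; none of these numbers come from the post, and they only show how the fractions compose.

```python
# If a fraction of the total additive effect acts through peripheral pathways,
# delivery that never crosses the blood-brain barrier still captures that fraction.
N_EDITS = 500              # hypothetical number of edited variants
DELTA_IQ = 0.1             # hypothetical average IQ points per edit
PERIPHERAL_FRACTION = 0.2  # hypothetical share of effect acting outside the brain

full_gain = N_EDITS * DELTA_IQ
peripheral_gain = full_gain * PERIPHERAL_FRACTION
print(f"Full gain: {full_gain:.0f} IQ points; peripheral-only: {peripheral_gain:.0f}")
# -> Full gain: 50 IQ points; peripheral-only: 10
```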
The “[1]” in the original links to their shortform, but I was not aware I could actually link to a specific post from there; thanks! Edited.