Does that, in turn, mean that it’s probably a good investment to buy souls for 10 bucks a pop (or even more)?
Ozyrus
Sam Altman, Greg Brockman and others from OpenAI join Microsoft
NVIDIA and Microsoft release 530B-parameter transformer model, Megatron-Turing NLG
ICA Simulacra
Beijing Academy of Artificial Intelligence announces 1.75-trillion-parameter model, Wu Dao 2.0
[Question] Do alignment concerns extend to powerful non-AI agents?
GPT-4 implicitly values identity preservation: a study of LMCA identity management
Alignment of AutoGPT agents
[Question] What would be the signs of AI Manhattan projects starting? Should a website be made to watch for these signs?
Stability AI releases StableLM, an open-source ChatGPT counterpart
[Question] Modeling AI milestones to adjust AGI arrival estimates?
[Question] Memetic hazards of AGI architecture posts
Well, this is a stupid questions thread after all, so I might as well ask one that seems really stupid.
How can a person who promotes rationality be overweight? It's been bugging me for a while. Isn't it kinda the first thing you would want to apply your rationality to? If you have things to do that get you more utility, you can always pay a diet specialist and just stick to the diet, because it seems to me that additional years of life will bring you more utility than any other activity you could spend that money on.
Google announces Pathways: new generation multitask AI Architecture
[Question] In search for plausible scenarios of AI takeover, or the Takeover Argument
Very nice post, thank you!
I think that it’s possible to achieve with the current LLM paradigm, although it does require more (probably much more) effort on aligning the thing that will plausibly become superhuman first, which is an LLM wrapped in some cognitive architecture (also see this post).
That means that the LLM must be implicitly trained in an aligned way, and the LMCA must be explicitly designed to allow for reflection and robust value preservation, even if the LMCA is able to edit its explicitly stated goals (I described this in a bit more detail in this post).
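To make the "robust value preservation even under goal edits" requirement concrete, here is a minimal sketch of what such a gate could look like. This is my own illustration, not a design from the linked post, and every name in it (`CORE_VALUES`, `reflect_on_goal_edit`, the placeholder `llm` call) is hypothetical:

```python
from dataclasses import dataclass, field

# Fixed statement of core values; deliberately not writable by the agent.
CORE_VALUES = "Remain honest, corrigible, and harmless."

@dataclass
class LMCAState:
    goal: str                                  # the explicitly stated, editable goal
    history: list = field(default_factory=list)

def llm(prompt: str) -> str:
    """Placeholder for a call to the underlying LLM (e.g. an API request)."""
    raise NotImplementedError

def reflect_on_goal_edit(current_goal: str, proposed_goal: str) -> bool:
    """Reflection step: ask the LLM whether the edit preserves the core values."""
    verdict = llm(
        f"Core values: {CORE_VALUES}\n"
        f"Current goal: {current_goal}\n"
        f"Proposed goal: {proposed_goal}\n"
        "Does the proposed goal preserve the core values? Answer YES or NO."
    )
    return verdict.strip().upper().startswith("YES")

def propose_goal_edit(state: LMCAState, proposed_goal: str) -> LMCAState:
    # The agent may edit its explicitly stated goal, but only through the
    # reflection gate; CORE_VALUES itself stays outside the editable state.
    if reflect_on_goal_edit(state.goal, proposed_goal):
        state.history.append(("goal_edit", state.goal, proposed_goal))
        state.goal = proposed_goal
    else:
        state.history.append(("goal_edit_rejected", proposed_goal))
    return state
```

The point of the sketch is only the separation of concerns: goal edits are allowed, but they pass through a reflection check against values the architecture never exposes for editing.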
Is there a comprehensive list of AI Safety orgs/people and what exactly they do? Is there one for capabilities orgs with their stance on safety?
I think I saw something like that, but can’t find it.
Are there any LessWrong-like sequences focused on economics, finance, business, or management? Or maybe just internet communities like LessWrong focused on these subjects?
I mean, the Sequences introduced me to some really complex knowledge that improved me a lot, while simultaneously being engaging and quite easy to read. It is only logical to assume that somewhere on the web there must be articles in the same style covering different themes. And if there are not, well, someone surely should write them; I think there is some demand for this kind of content.
So feel free to link LessWrong-like series of blog posts on any theme, actually: that would be really helpful for me. P.S. In hindsight, I guess there may be some post here on LessWrong containing all the links I am looking for. If so, could anyone link me to it?
Great post! It was very insightful, since I’m currently working on evaluation of identity management; strong upvoted.
This seems focused on evaluating LLMs; what do you think about working with LLM cognitive architectures (LMCAs), i.e. wrappers like AutoGPT, LangChain, etc.?
I’m currently operating under the assumption that this is a way we could get AGI “early”, so I’m focusing on researching ways to align LMCAs, which seems a bit different from aligning LLMs in general.
Would be great to talk about LMCA evals :)
Thanks. That means a lot. Focusing on getting out right now.